AMD says Ryzen CPUs are performing as expected after launch

In the wake of AMD's Ryzen launch, intrepid builders, hackers, and reviewers at PC hardware sites around the web have shared data that purport to explain performance deltas between Ryzen CPUs and their competition from Intel in some applications. Those explorations have raised concerns about Ryzen's simultaneous multi-threading (SMT) implementation, interactions between Ryzen's core layout and the Windows 10 scheduler, and to a lesser degree, the chip's Turbo behavior. That growing clamor has had many, myself included, waiting for some kind of official response, and now we have one.

Just under two weeks after launch, AMD says that Ryzen is working just fine for the most part, and that no major changes should be expected in Windows or elsewhere to correct perceived performance issues—especially those observed by some testers in Windows 7 versus Windows 10. Instead, the company says it's working with developers to deliver "targeted optimizations" for software that "can better utilize the topology and capabilities of [AMD's] new CPU."

The company also says it's aware of "many small changes that can improve Ryzen performance in certain applications," and that it's investigating some games that appear to exhibit performance regressions when Ryzen's SMT is turned on. AMD thinks those outliers can be fixed with "simple changes that can improve a game's understanding of the 'Zen' core/cache topology," although those fixes don't have a definite ETA. That attitude is consistent with PC Perspective's take that there will be "no silver bullet" for improving Ryzen performance.

AMD's most concrete advice for Ryzen owners experiencing what they perceive as less-than-ideal performance is to click over to Windows' power settings and to enable the High Performance power plan, which the company says will reduce core-parking behavior and allow for faster frequency response when apps require it. The company says that optimizations for the default Balanced profile that are better suited to Ryzen desktop CPUs will be made available through an AMD-provided update by the first week of April.

Not to be too smug about it, but AMD's statements about Ryzen CPUs' performance bolster my confidence that we delivered sound numbers about these chips' performance from day one. We didn't see anything out of the ordinary from our test numbers at the time of publication, and our only complaints about the platform stemmed from some teething issues with motherboard firmware that Gigabyte has addressed with frequent updates since. Should AMD point out a specific application in our test suite from which we should expect significant performance changes, we'll retest it. Until that point, however, our basic conclusion—that Ryzen is a superb value for heavy-duty productivity and a solid gaming chip, to boot—will stand.

Comments closed
    • Bensam123
    • 3 years ago

    Hey look… Core parking strikes again… Weird, too bad people don’t just disable that to help improve responsiveness and reduce microstuttering caused by it.

    • guruMarkB
    • 3 years ago

    Thanks Jeff for the post. This clears up a lot of the confusion on Ryzen running Windows 10.

    • jensend
    • 3 years ago

    Even though I’m not a native German speaker, one of the first places I look for details when they’re not provided by TR is computerbase.de. [url=https://www.computerbase.de/2017-03/ryzen-windows-7-benchmark-core-parking/<]their coverage[/url<] of this is helpful. Scheduling isn't the problem. W10 core parking etc are holding Ryzen back by around 5%, which will be fixed by a MS patch in April. Getting devs to provide Ryzen-optimized code paths will be the name of the game from there on out.

      • ultima_trev
      • 3 years ago

      Tom’s also stated on their day one review to expect better performance in most situations going from Balanced to High Performance.

      However, in some instances I can imagine performance will be worse, since with High Performance mode there is no core parking, and thus peak turbo/XFR, which only works on two cores, will be effectively disabled.

      • chuckula
      • 3 years ago

      [quote<]Getting devs to provide Ryzen-optimized code paths will be the name of the game from there on out.[/quote<] Good luck with that. As we saw from TR's own benchmarks, a large number of applications won't even optimize for 4 year old Intel processors since they won't activate AVX (and that's in applications like CineBench where AVX is a perfect fit for the workload). If they aren't doing that for Intel parts that have been dominating the market for 4 years, they aren't rewriting their code for RyZen that just launched in 2017.

        • LostCat
        • 3 years ago

        A large number of applications barely optimized for SSE2 until recently, so there’s that.

        • derFunkenstein
        • 3 years ago

        It’s about getting the software into as many hands as possible. As long as Pentiums lack AVX (at least, [url=https://ark.intel.com/products/97453/Intel-Pentium-Processor-G4600-3M-Cache-3_60-GHz<]according to ARK[/url<], where i3 and better [url=http://ark.intel.com/products/97455/Intel-Core-i3-7100-Processor-3M-Cache-3_90-GHz<]support it[/url<]), you won't see software require it. And it's a shame, because aside from the lack of AVX, they're perfectly capable CPUs. If Intel had given every Haswell-and-later CPU AVX2, you could bet that we'd be seeing it. It'd be a competitive advantage for any app that wants to speed things along, and once everyone was forced into it, it'd be expected.
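For what it's worth, the runtime check an application would use to gate an AVX code path is cheap. A Linux sketch (on Windows an application would typically run the CPUID instruction itself); the branch logic here is purely illustrative:

```shell
# On Linux, the kernel exposes the CPUID feature flags in /proc/cpuinfo;
# a launcher or installer can check them before picking a code path.
if grep -m1 '^flags' /proc/cpuinfo | grep -qw avx; then
    echo "AVX available"
else
    echo "no AVX - falling back to SSE2 path"
fi
```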

          • chuckula
          • 3 years ago

          Bear in mind that if the talk about the “CCX’s” is true, then most of the AMD parts on the market.. the APUs.. won’t have any problems with this issue, since they will only ever have one CCX.

          Meaning that all these potential optimizations won’t help most of the parts that AMD makes over the long run.

            • derFunkenstein
            • 3 years ago

            If the current parts are gimped, then these optimizations will bring the current parts up to the performance of their single-CCX brethren (assuming lightly threaded loads, anyway).

            I don’t think 8-core CPUs are going away for AMD once the APUs come out, either. I don’t expect there’s much overlap between “APU buyer” and “hardcore gamer”, and if there is it’s only because AMD has outclassed Intel on the lower end and those CPUs are something you’d actually want to game with.

      • tipoo
      • 3 years ago

      Does this indicate a particularly high core reengagement latency with Ryzen vs the *lakes?

        • tay
        • 3 years ago

        It’s got to be the L3 caches that are per core complex. If you’re switching between CCX then your L3 hit rate will go to shit.
        Edit : well that’s not it [url<]https://www.pcper.com/reviews/Processors/AMD-Ryzen-and-Windows-10-Scheduler-No-Silver-Bullet[/url<] ¯\_(ツ)_/¯

    • ronch
    • 3 years ago

    When Ryzen runs well on Windows 7 but not so much on Windows 10, what does that tell you? Why are we pointing our fingers at Ryzen? And why is AMD pointing their fingers at the devs? And why didn’t they catch this before launch? Or did they?

    Look AMD, this just goes to show that you should officially support Windows 7. But given that 7 isn’t supported, why are Windows 7 drivers available on AM4 motherboard websites? AFAICT support meant drivers, primarily.

      • tipoo
      • 3 years ago

      If Windows 10 was the problem, we would assume that AMD would be the first to say so and say future gains could be had by working with Microsoft. Instead, in this very article, they’re deflecting blame away from Windows and pointing to per-app optimizations. It’s interesting for sure that 7 is always a few frames ahead of 10 on Ryzen, but how does it look on Intel?

        • ronch
        • 3 years ago

        If we were talking about a tried and true design like Haswell here and Windows 7 ran games better than Windows 10, would anyone think Haswell is the problem? No, we’d blame Windows 10. We know games run well on Haswell and Windows 7. Using the same CPU, we would expect games to run just as well on Windows 10 unless Windows 10 is doing something wrong. Same with Ryzen. If Windows 7, which doesn’t even support Ryzen officially and contains no Ryzen optimizations, runs games better than Windows 10, we might want to look at Windows 10 to see if it’s the problem.

          • Jeff Kampman
          • 3 years ago

          Outside of two games, Windows 10 isn’t the problem so long as you enable the high performance power plan for now: [url<]https://www.computerbase.de/2017-03/ryzen-windows-7-benchmark-core-parking/[/url<]

            • ultima_trev
            • 3 years ago

            Tom’s also stated on their day one review to expect better performance in most situations going from Balanced to High Performance.

            However, in some instances I can imagine performance will be worse, since with High Performance mode there is no core parking, and thus peak turbo/XFR, which only works on two cores, will be effectively disabled.

            • RAGEPRO
            • 3 years ago

            Intel chips still hit single-core turbo speeds with Core Parking disabled. I can’t imagine why Ryzen wouldn’t.

    • ronch
    • 3 years ago

    Why is it that AMD always seems to blame software when they roll out a new architecture and it falls short of expectations? Yeah I kinda got why that was applicable to Bulldozer but Zen is supposed to be strong in terms of single thread performance. Or maybe it just proves that some parts of Zen that are critical to games really are weak compared to Intel? Branch prediction? FP throughput? Cache performance? SMT implementation (this is their first shot at SMT so yeah)?

      • cynan
      • 3 years ago

      [i<] AMD says that Ryzen is working just fine for the most part, and that no major changes should be expected in Windows or elsewhere to correct perceived performance issues[/i<] How do you get that AMD is blaming software to any large degree? Besides, given differences in architecture/instruction sets and compilers that cater to these, it's only natural for some software, especially anything that is recent and optimized at all, to not run 100% optimally on a CPU from a company that hasn't really released a competitive (market and performance-wise) part for years. Hopefully AMD stays the course and is able to close the gap increasingly over the next couple product cycles. If they can't, then it's time to complain.

        • Anonymous Coward
        • 3 years ago

        Focusing on compilers is probably the wrong place. From what I’ve read, this has more to do with NUMA concerns, the L3 cache, and the connection between the two groups of cores.

        • derFunkenstein
        • 3 years ago

        Well, AMD is saying that “easy-to-do optimizations will fix it”, so that’s *kind of* blaming the software. It’s the apps, not the OS. And if the work can be split up, then great. If it can’t, that’s not so great.

      • Shouefref
      • 3 years ago

      It’s a misconception that AMD is blaming software. Intel does ‘the same thing’, blaming or not blaming.
      Processors are pieces of hardware, and software and hardware have to suit each other.
      Windows 10 could be made to suit Intel’s processors, because they’re already on the market. Ryzen was not, so it will take some time to suit Windows 10 to Ryzen.

        • ronch
        • 3 years ago

        Except Intel hasn’t been blaming devs for the past 10 years because there’s not much to complain about given that they were out front.

        AMD is saying it isn’t their fault and that devs should optimize. This is kinda like blaming devs, isn’t it?

          • LostCat
          • 3 years ago

          Hard to blame people for not optimizing for an arch that didn’t exist.

            • Redocbew
            • 3 years ago

            Like Chuckula said in another thread here it’s always been notoriously difficult to move the needle in this regard. If anything it’s probably easier today to push the industry towards supporting an extension like AVX than it used to be, and it still takes quite a while for something like that to be widely supported.

    • ronch
    • 3 years ago

    Everyone knows how much I root for AMD and I’ll most probably end up with Ryzen on my next platform upgrade anyway but when they say software developers need to patch or optimize their code for a new CPU architecture to shine, well, isn’t that a little scary? Not everyone’s gonna fix their code, especially old software, so I’d much rather have a Windows scheduler bug for which Microsoft can issue a fix that fixes performance issues with everything instead of waiting for every developer to roll out their patches one by one, if they ever will. And we expect new chips to run our stuff faster without modification the same way a 486 runs Space Quest 4 faster than a 386 does with no patch from Sierra.

      • Meadows
      • 3 years ago

      The thing I don’t get is, how could developers even optimise for a given CPU architecture? Isn’t that the compiler’s job?

        • ronch
        • 3 years ago

        Because compilers need to be aware of the underlying processor architecture to optimize well for it and it’s the devs that use these compilers to compile their code.

        • Klimax
        • 3 years ago

        You can instruct compilers to assume some things. Microsoft’s C/C++ compiler in x64 mode can use switch /favor to target both or one of them (Intel, AMD) and also there is option for Atom. (Both x86/x64)

        GCC can be instructed to target specific uArchs.

        In the case of ICC it is obvious, although based on older tests, it seems to be very good even for AMD CPUs…

        Also it is possible to use specific instructions using intrinsics and then a lot of responsibility falls on programmer. And it is easy to screw up badly…

        • Redocbew
        • 3 years ago

        That’s my understanding. It’s more common to target features than specific architectures. If someone needs AVX2, they tell the compiler. If they need hardware virtualization support, they tell the compiler, and so on. The compiler doesn’t know what CPU is intended for the application unless a developer tells it, and they wouldn’t necessarily need to tell it.

        That’s for applications in userland. System libraries, drivers, schedulers, and the other stuff further down the stack may need to know more specific details, but in general a lot of that is going to be transparent to the applications which use them.

        • lycium
        • 3 years ago

        Most programmers are not used to dealing with NUMA, since we usually don’t have dual-socket computers to test and develop with. So it’ll take a while for this added complexity to make its way into “normal” programming, the way we all had to learn multithreading when the free lunch of ever-increasing clock speeds ended.

      • Shouefref
      • 3 years ago

      Intel is doing the same thing, you know. Only AMD is a bit late on the market with its Ryzen, so it’s a bit later to adapt Windows to Ryzen.

      • freebird
      • 3 years ago

      Well, if you are going from AMD’s last CPU (Piledriver) it IS quite an improvement WITHOUT patching. Besides, it seems Zen was designed first for server loads and second with the PC in mind… I think it does quite well at both for the first iteration of a new CPU arch. I personally like being able to get an R7 1700 capable of 4GHz running 8 cores/16 threads for the price of $329, so I can game, troll, read email, troll, read internet conspiracy theories, and transcode several TBs of old WTV files… faster than a speeding bullxxxx (or Intel equivalent)

    • Amien
    • 3 years ago

    What’s the word on ECC support? Has anyone tested this and confirmed it works? The best piece I could find on Ryzen’s memory performance was the Legit Reviews article, but they didn’t use ECC…

      • JosiahBradley
      • 3 years ago

      Seconded! Someone test this out. ASRock clearly states they support ECC on their boards, but I need to see a reviewer actually try it.

      • AnotherReader
      • 3 years ago

      Hear hear! I am waiting for confirmation of ECC support before I buy into Ryzen

      • ptsant
      • 3 years ago

      I found this:
      [url<]https://translate.google.com/translate?depth=1&rurl=google.com&sl=auto&sp=nmt4&tl=en&u=https://www.hardwareluxx.de/community/f11/wieder-mal-typisch-ryzen-kann-ecc-aber-aktuell-kein-mainboard-mit-ecc-support-1154800-7.html#post25385390[/url<] Some guy actually tested a module with the Asus Prime and pushed it to failure, at which point Linux reported an ECC error. This is by no means validated, but it is very encouraging. At least the wires for a 72-bit data path are physically present on the board...

    • gerryg
    • 3 years ago

    Bottom line up front (BLUF): good job, AMD, welcome back.

    Long story (not really): The fact that AMD has reached parity in its targeted price/performance zones, and that people are debating the pros/cons of Intel vs. AMD instead of which Intel processor to get, is absolutely HUGE, people. Intel felt the need to do significant price drops across a fairly wide swath of chips, which tells you a lot right there. Regardless of which “team” you’re rooting for, with AMD producing a solid contender everybody wins and the competition is on again. Yay us!

    In my personal case, I’m with derFunk: Ryzen “Gaming performance is beyond adequate for me”. It’s nice to know I can’t go wrong with either AMD or Intel, and that I’ll get a lot for my dollar.

      • psuedonymous
      • 3 years ago

      [quote<]Intel felt the need to do significant price drops across a fairly wide swatch of chips[/quote<]MSRPs have remained static. One retailer dropped prices for a limited time.

        • gerryg
        • 3 years ago

        Hmmm, I must have read fake news about the price drops. I haven’t been going to retailer sites for actual prices – I won’t be in the market until summertime at the earliest.

        [url<]http://www.digitaltrends.com/computing/intel-cpu-prices-drop-ryzen-launch/[/url<]

      • Shouefref
      • 3 years ago

      Fact: if there is only one team out there, the buyers always lose.

    • DoomGuy64
    • 3 years ago

    This is a standard AMD PR Denial of issue. Short answer: It is not a “bug”.

    Long answer:
    Microsoft’s scheduler and power settings are not optimized for Ryzen’s CCX architecture, and AMD doesn’t want to admit there is a problem. This is not a “bug” per se, but the OS is specifically tuned for Intel, and Ryzen does not work in the same manner. The CCX issue is pretty similar to Bulldozer’s module issue. You can work around the problems with tweaks, or Microsoft could optimize its scheduler better. We don’t have a timetable for any update, so AMD is publicly denying the problem while telling users and developers to work around the “non-existent” problem.

    Windows 7 is not as aggressive with its power savings, and it supported older CPUs that were similarly modular, like the Core 2 Quad. Ryzen works better on 7 because it is kinda like those old Core CPUs having two quad chips slapped together, and the scheduler doesn’t constantly bounce threads between the two modules.

    [url<]http://www.hardware.fr/articles/956-24/retour-sous-systeme-memoire-suite.html[/url<] Battlefield 1 loses 20% performance going from a 4-0 configuration to a 2-2 configuration.

    [quote<]Communication between the CCXs has a cost, and depending on the application it is not necessarily trivial.[/quote<]

    [quote<]This is not to say that things will not change for Ryzen in the future. The most obvious solution would be a patch for the Windows scheduler to limit the movement of threads from one CCX to the other. AMD and Microsoft collaborated on a patch for Bulldozer back in the day, so one can imagine another patch for Ryzen. AMD remains very cautious and did not want to confirm whether or not it is working on the issue. Another change that could prove beneficial for games is the arrival of the "Game Mode" in Windows 10, one of whose features is precisely, here too, to limit thread movement. Still, even pinned to a core, communication between threads will remain just as expensive when sharing data. One way or the other, the impact of Game Mode, if any, will inevitably be variable.[/quote<]

    So there is an issue that has to be worked around with the CCX modules.

    [url<]http://www.phonandroid.com/amd-ryzen-microsoft-confirme-problemes-windows-10-patch-approche.html[/url<] Microsoft may actually be working on a patch though, so manual tweaking may not be necessary in the future.
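Until a scheduler change arrives, the manual workaround described above can be approximated with CPU affinity. A hypothetical sketch: it assumes cores 0-3 map to one CCX, which depends on how the OS enumerates cores; `sleep` stands in for the game binary, and the Windows equivalent would be `start /affinity F game.exe`:

```shell
# Pin a process to a single CCX so the scheduler cannot bounce its
# threads across the inter-CCX link. The 0-3 = CCX0 mapping is an
# assumption; check your own topology (e.g. with 'lscpu -e') first.
ccx0="0-3"
[ "$(nproc)" -lt 4 ] && ccx0="0"   # fall back on small machines
taskset -c "$ccx0" sleep 2 &
pid=$!

# Show the affinity mask the kernel recorded for the pinned process.
taskset -p "$pid"
wait "$pid"
```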

      • Waco
      • 3 years ago

      AKA…Windows, as usual, is completely ignorant of NUMA domains.

        • torquer
        • 3 years ago

        Ryzen does not have NUMA domains. The core clusters do not have segregated memory controllers or memory pools like a multi socket system has.

          • Waco
          • 3 years ago

          If memory access from a core isn’t similar to another core…that’s a NUMA domain.

          If it isn’t presented that way, then AMD has failed. The CCXes have different memory latency and bandwidth numbers, and they have different cache domains. That’s basically the definition of NUMA…and the reason the whole concept became useful for scheduling.

          Ignoring NUMA domains is a sure way to get very inconsistent performance.

            • jts888
            • 3 years ago

            If you look at the die, you can see both DDR4 PHYs next to each other, and it’s known that inter-CCX bandwidth scales with DDR4 frequency, so I’ve been assuming that both CCXs share equal access to both memory channels on an interlink closely tied to both.

            I suspect the CCX separation really is just limited to cache and not to main memory, leaving the issue a little more blurred.

            • Waco
            • 3 years ago

            I’d still consider that a NUMA domain if traversing them causes a difference in behavior. NUMA isn’t restricted to main memory access IMO. If you map it out (with a purely naive probing method) you’d easily be able to identify the various cores and their locations relative to each other. You’d get effective NUMA domains of a CCX cluster (on a 1P box) and a socket (in a 2P box).

            Tier 0: CCX Cluster
            Tier 1: Local CCX Cluster
            Tier 2: Remote CCX Cluster / Socket (this may be equal or not depending on connectivity)

            Adding just a little bit of extra logic into the scheduler can cause surprisingly large performance gains for lightly threaded workloads as well as surprisingly large gains for NUMA-aware codes.

            • the
            • 3 years ago

            There is only a single NUMA domain on the Ryzen chips.

            The thing with cache is that it is difficult to partition it like a NUMA node as the contents are all purely managed by the hardware. While abstracted through several layers (virtual memory, address randomization etc.), memory is directly addressable and thus its contents can be directly controlled.

            There are two cache coherency domains with Ryzen doing some on-die snooping between them. The result is that the L3 cache in the remote CCX is more akin to an L4 cache with a different latency tier. It appears that the L3 cache in each CCX can contain duplicate data, but only if it has been evicted from the L2 cache of a core in each CCX. This isn’t a bad thing, as the L3 cache can be warmed with data to prevent a cache miss. The really bad news is that as a thread thrashes between CCXs, the L2 miss rate should be rather high.

            • DoomGuy64
            • 3 years ago

            L4 cache is probably a good analogy to make when swapping between CCX modules. It works, but can be pretty suboptimal depending on the scenario, especially when the link is only 22GB/s.

            hardware.fr’s tests showed that some cases had minimal performance differences, but Battlefield 1 didn’t like that configuration at all and lost 20% of the framerate. This is likely because workstation loads can be partitioned into independent subtasks, while gaming has a lot of dependent threads. Ryzen may not be able to fully scale its 8 cores in gaming because of that. At least not with dependent threads.

            Windows may be treating it as a single NUMA domain like you said, but something needs to be done to avoid CCX swapping so you don’t get these worst case performance scenario situations in games. It’s not quite NUMA, but it’s close enough and should be treated as such.

            • jts888
            • 3 years ago

            The bandwidth between CCXs is actually 32B at half the nominal DDR4 interface clock rate, e.g., DDR4 2400 = 38.4 GB/s in each direction.

            Some early benchmarks have shown moderate performance gains up to around DDR4 4000 / 64GB/s territory, so it’s not like additional bandwidth hurts, but the chip isn’t exactly choking either.

        • Klimax
        • 3 years ago

        Windows has supported NUMA correctly for a very long time (https://msdn.microsoft.com/en-us/library/windows/desktop/aa363804.aspx).

        And this is not NUMA case anyway.

      • Pholostan
      • 3 years ago

      Would somebody please explain how you “optimize” the scheduler for a split L3 cache and the two CCXs that Ryzen has? How would the OS have a clue what kind of logic your app is running? Is it something akin to Blender or Cinebench that absolutely wants as many cores as possible and is not hurt by the limited inter-CCX communication? Or has it plenty of inter-thread communication, making the Data Fabric a bottleneck? How much is too much communication? Does plenty of memory access play a role? Plenty of IO (GPU) access? Etc.

      This is not something you easily fix in the scheduler; it is probably pretty much impossible. It is something that needs to be done on the application level. Every program needs to optimize for this, then we’re getting somewhere. And ofc not all apps need to; most things seem to run just fine.

      OS schedulers are very complicated animals. Changing them isn’t something you do just like that.

        • Klimax
        • 3 years ago

        Reporting it as two NUMA clusters could be one of mitigations.

          • Pholostan
          • 3 years ago

          I don’t think so. NUMA doesn’t work like Ryzen works. It looks like there is only one memory controller. Also all desktop apps that aren’t NUMA aware would never be able to run on more than four cores. I don’t think any of the devs doing typical desktop apps are interested in putting NUMA infrastructure into their apps.

          I think this can’t be solved on the OS level. It needs to be solved on the application level. In Linux this is exactly what is happening. The Linux scheduler won’t have any special code for Ryzen in it; the OS, however, will provide the tools to optimize your app for Ryzen. Next LLVM and Clang will have complete support, I hear.

    • f0d
    • 3 years ago

    the most disappointing thing for me was ryzens overclocking ability (i overclock everything)
    if i could have easily clocked it to 4.6ghz+ its standard performance wouldnt have mattered to me as its overclocked performance would have made it so much better
    most reviews are saying it barely gets over 4.0ghz…..

    i even sold my old x79 system so i could buy a ryzen 🙁
    for now im just going to use my htpc (5.0ghz 2500k) until i decide what to get

      • Firestarter
      • 3 years ago

      if AMD could have clocked it higher, they would have. There was never much of a chance of the top chip being an overclocking wonder somehow

      • ronch
      • 3 years ago

      Hope you aren’t one of those who pre-ordered. Why is everyone jumping on the pre-order bandwagon?

        • f0d
        • 3 years ago

        naw pre-ordering is for sheeple
        im probably going to wait till the end of the year, hopefully the GF process would improve by then

        if not then i guess ill be getting a 6900k

          • Anonymous Coward
          • 3 years ago

          In the past, it’s true that AMD would polish up their manufacturing and eventually produce a substantially better performing product, but I wonder if they have turned the corner on that era. I think it’s astonishing that they arrived at such good speed and efficiency on day one, also perhaps yields. This is a new AMD.

          I would not be surprised if the core was designed to reach a certain speed reliably and efficiently, but it was also well enough balanced for that objective that there isn’t a lot of low hanging fruit left to clean up. I’m thinking they can reduce the volts over time, but clocks are going to be more difficult.

      • Shouefref
      • 3 years ago

      Ryzen 7 1700 overclocks tremendously well from 3 Ghz to 4 Ghz. That’s + 33.33% !

      • freebird
      • 3 years ago

      That’s why I bought the R1700: base clock is 3.0GHz, but it seems to OC fine at 4.0GHz…

      To me, my R1700 seems pretty impressive at 4.0GHz. So what would 4.4-4.6GHz be? At most 10-15% more (and probably 200+ watts!!) if perfect scaling was involved, and in gaming, probably nothing much. Same goes for memory speeds over 2933MHz: not much at all, especially when you have to start bumping up the latency…

      On the CPU-Z test I was getting scores of 2330 single core and 20000 fully threaded.

      Same with Kaby Lake would be my guess: I doubt you’d notice or see much of a performance difference in Kaby Lake running at 4.4 or 5.0GHz, at least gaming. Maybe in video conversion, but then you’d be better off with 8 cores and 16 threads… which is what I intend to put it to work on when I’m not gaming, and might even try it while I am… with certain core assignments in place, which can be done in Windows 10.

    • Bumper
    • 3 years ago

    So Ryzen basically has Ivy Bridge-level gaming performance, and somewhere between Haswell and above Broadwell-E multithreading depending on the app?

    That’s a huge improvement. Other than some quirks in certain games at low resolutions, it basically seems like a much cheaper 5960X with better multithreaded performance and slightly worse gaming performance, or a much cheaper 6900K with equal multithreaded performance and 20% avg less gaming performance. I’m sure gaming performance will improve enough to make the 5960X and Ryzen equal and lessen the Broadwell-E gap to 10% avg.

    • jts888
    • 3 years ago

    PCPer’s analysis is lacking IMO.

    They were convincing in their dismissal of SMT scheduling concerns, but they went off the deep end regarding inter-CCX IPC latency, when there is a [i<]much[/i<] simpler potential explanation in thread migration. I’m confident that the bimodal frame time distributions found last week by TR will ultimately be found to be caused by this, since there’s no simple IPC/cache line size state coarse enough in a CPU to induce ~15% performance variations in largely consistent 5-15 ms computational tasks.

    On modern smaller Intel CPUs where every core sits on the same L3 ring, re-warming a core’s L2 cache is quick due to moderate L2 sizes (256 kB) and wide memory paths (64B). Before even considering multiple CCXs, Ryzen has larger L2s (512 kB) with narrower channels from L3 (32B), making cache rewarming take several times longer. Having to rewarm an L2 from a remote CCX’s L3 (as would happen after a trans-CCX thread migration) would obviously be even worse.

    I don’t believe Ryzen will magically be [i<]the[/i<] premiere gaming CPU under the control of a more considerate scheduler, but heavier de-bouncing weighting will almost certainly help every long-lived compute-bound task on the platform.

      • Redocbew
      • 3 years ago

      Wouldn’t that still be a scheduler problem? If non-uniform access is the culprit, and it’s this easy for threads to have that happen, then the scheduler should be aware of that, no?

        • NoOne ButMe
        • 3 years ago

        Vaguely I recall hearing someone, either at AMD or a game developer ‘in’ with them, saying something like developers have gotten lazy and used to doing things specific to a core.

        It’s my most-broken memory, so it may just be fantasy.

    • DavidC1
    • 3 years ago

    There’s no such thing as a silver bullet. I guess movies and video games condition people to think there is? You always have a hero capable of achieving immense feats not just once, but the whole time. Fixing problems is also extremely simple in games, while in the real world a fix is often hard to come by or doesn’t exist at all. The rapid development in computers has also given us distorted views. Those days are over.

    It tells us CPU development is as complex as the world around it. With Moore’s Law dead, the gains will come only with much effort. Moore’s Law wasn’t easy either, but at least it was something.

    • not@home
    • 3 years ago

    I found this to be very interesting.
    [url<]https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/[/url<]

    • torquer
    • 3 years ago

    Nothing would have lived up to the idiotic hype train and fanboyism pre-launch.

    The takeaway is that Ryzen is a “Cores4Less” CPU that is good at gaming but not as good as similarly priced Intel CPUs.

    It’s great for content creators who can’t or won’t use GPU acceleration though.

    • ibnarabi
    • 3 years ago

    This freeware is useful if you’d like to kill core parking:
    [url<]https://bitsum.com/parkcontrol/[/url<] "ParkControl Free – Tweak CPU Core Parking in Real-Time"

    Downvoted? For what??? You guys.

      • moshpit
      • 3 years ago

      People can be trolls. Who knows why they did it; your post seemed helpful. But being douchebags seems to get some people off nowadays.

        • chuckula
        • 3 years ago

        Yeah like the original poster.
        Who was the troll.
        Core parking homeopathy was [b<]Bulldozer's[/b<] snake oil. Get with the times.

          • Krogoth
          • 3 years ago

          Core parking is hardly homeopathy. It’s just a way to exert more control over thread scheduling. In theory it shouldn’t be necessary at all, but the OS and the software in question don’t always play nicely with multiple threads.

            • Redocbew
            • 3 years ago

            Core parking isn’t, but apps that promise amazing things due to the evils of core parking really aren’t necessary. Usually it’s best just to stay out of the way of the OS and let it handle things on its own, and if there’s an issue around parked cores it’s usually a symptom of some other problem.

            Edit: Cork parking? Wait, what?

            • derFunkenstein
            • 3 years ago

            Park a cork in it!

            I had thought that these “core parking” apps basically just kept the CPU barely busy on all cores. Seemed like a waste, and that it’d be competing for resources.

            • Pholostan
            • 3 years ago

            To my knowledge, core parking is not active in Windows 10 on the Balanced or High Performance power profiles. You need to change things manually or set the Power Saver profile.

          • crystall
          • 3 years ago

          Actually it seems that Windows 10’s core parking settings are currently suboptimal for Ryzen; see the very detailed analysis at hardware.fr:

          [url<]http://www.hardware.fr/articles/956-8/retour-smt-mode-high-performance.html[/url<] (sorry for the French, but the graphs tell the whole story)

          Naturally, it doesn’t [b<]dramatically[/b<] change Ryzen performance; it’s just a few percent in a few games, but it shows that the whole platform is still having some minor teething issues.

          • Meadows
          • 3 years ago

          It *should* be homeopathy, and it would be for any processor where one “core” is equal to any other, but in recent years we’ve seen more and more of this SMT/module/whatnot business.

          If it *were* homeopathy, we wouldn’t see a difference upwards of 10% in certain applications simply depending on what power profile you pick.

    • Krogoth
    • 3 years ago

    Ryzen is more than sufficient for the overwhelming majority of gamers out there. The real issue is whether AMD can deliver a mainstream-tier Zen chip to compete against Intel in the portable market in terms of cost and energy efficiency.

    I have no doubt that Naples will fare well in the enterprise and HPC markets.

      • ronch
      • 3 years ago

      Some folks here say Ryzen is great, especially for the price, that it plays games well enough, that it’s amazing how it finally puts AMD back into the game (pun intended), etc. No one’s questioning those things. However, the question being asked is why Ryzen’s non-gaming performance is so strong while its gaming performance is a different story.

        • K-L-Waster
        • 3 years ago

        [quote<]However, the question being asked is why is Ryzen's non-gaming performance so strong but gaming performance is a different story.[/quote<] How about because RyZen gets a great deal of its performance in productivity apps by leveraging multiple cores, but gaming by and large still is governed by single core performance?

          • Bumper
          • 3 years ago

          lol. Exactly. For the rational person, these Ryzens are not the chips to get if you’re just a gamer. It’s not that complicated: get a 7600K, overclock it, and call it a day. It wasn’t that long ago that the Haswell Pentium G came out for 75 dollars. That thing was a gaming beast and could be overclocked like mad.

          • ultima_trev
          • 3 years ago

          As computerbase.de demonstrated, enabling High Performance profile in the Windows power settings (which I always do anyway since core parking=meh) essentially puts the 1800X on par with 7700K in gaming performance. So saying Ryzen sucks at gaming performance is admitting the 7700K sucks as well.

          • ronch
          • 3 years ago

          We’re talking 1800X vs 6900K. Why does Ryzen beat the 6900K when you’re running all cores and fall behind when you’re using maybe just 4 cores? If each core in Ryzen is weaker, does their sum suddenly become greater than the sum of the 6900K, which has individually stronger cores? Remember, we’re talking the same number of cores.

            • Redocbew
            • 3 years ago

            The same question, and yet you seem to be expecting a different answer. Han shot first, no?

            Edit: I am nerd. Hear me… snort?

      • Bumper
      • 3 years ago

      Ryzen does seem to be fairly efficient: about 20% less power than the comparable Broadwell-E. It will be interesting to see what happens when the frequency is scaled down and the cores halved to fit in portables.
      edit: [url<]http://www.tomshardware.co.uk/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,review-33569-9.html[/url<]
      [url<]http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-11.html[/url<]
      Looks like 51.6% less power at idle and 20.5% less power under load, both measurements taken at stock clocks.

    • RedBearArmy
    • 3 years ago

    All this new data clarifies the picture. AMD is right: optimized software is needed to make the best use of a given CPU architecture.
    In the real world, though, most old code won’t be optimized, so Intel still has a huge lead with gamers. (Paradox still believes in dual cores only.)

    Looking at cross core latency alone we have:
    Intel SMT > AMD SMT
    Ring bus > cross-CCX link
    4C4T/4C8T: AMD >= Intel
    6C?T/8C?T: Intel > AMD

    Regardless of what they say, the Windows 10 scheduler is not optimized for Ryzen.
    They have a CPU to sell, PR rules, and the engineers are under NDA. It wouldn’t be the first time.
    We will get Game Mode in the 1703 update, which will help with games that use up to 4C/8T. Apps and games that use more than 4C will need to be optimized manually.

    • derFunkenstein
    • 3 years ago

    Gaming performance is beyond adequate for me, and if it’s really just an optimization issue then maybe Ryzen will age gracefully in the future.

    I’m more interested in a board that POSTs in under 30 seconds. Here’s lookin’ at you, MSI. For an uncomfortably long time.

      • LostCat
      • 3 years ago

      The DX12 results I was most disappointed by…I hope that’s something that can be improved on the OS side.

        • derFunkenstein
        • 3 years ago

        I’m interested in seeing what changes in the most popular engines (Unreal and Unity) now that Ryzen is out and about. They talked an awful lot about forward rendering at the Vega thing a couple weeks ago, but not much about what can/will happen on the CPU side of things.

      • MOSFET
      • 3 years ago

      I’m on my first Intel build in a long time, like since dual Pentium Pros turned into dual Celerons. My one disappointment is UEFI load time: the Asus Z270G takes at least 30 seconds longer than the Asus 990FX board before it. Very similar feature lists, except for more fluff that’s turned off, like WiFi, Bluetooth, and RGBs.

        • psuedonymous
        • 3 years ago

        There’s a big difference in boot time between having UEFI fast boot turned on (or ultra-fast, which skips the ‘enter UEFI’ keystroke prompt) with legacy/CSM turned off, and having legacy/CSM on with fast boot off.

          • derFunkenstein
          • 3 years ago

          Indeed. And in my case, I have legacy turned off, fast boot on, and it still takes an uncomfortably long time to get the board to light up my monitor and display anything, let alone load Windows.

          I can turn on my PC, immediately afterward turn on my Skylake box (which has fast boot enabled), and be at the Windows desktop on the Skylake system before the Ryzen system even shows the MSI logo screen. Same is true with my work-assigned Mac.

          • MOSFET
          • 3 years ago

          UEFI FastBoot enabled and CSM disabled in both cases.

      • juzz86
      • 3 years ago

      Same, I hope Ryzen is the Radeon of CPUs. Grow old gracefully.

        • LostCat
        • 3 years ago

        yeah this 290 has been amazing. Sure it uses power but I have never had any other complaints.

      • ColeLT1
      • 3 years ago

      Wow, 30 seconds, for real?
      My new build goes from power button to desktop in 9 seconds from a full shut down (not hybrid sleep).

        • derFunkenstein
        • 3 years ago

        Yeah, it’s brutal. All the diagnostic LEDs periodically moving has kept me from pulling out my hair. Now I’m used to it.

      • Shouefref
      • 3 years ago

      Bottom line is that computers are not made for gaming. Gaming is a bonus use. Computers are made for administration and science. Gaming is an add-on. Learn to live with it, bro.

        • derFunkenstein
        • 3 years ago

        That’s, like, just your opinion, man.

        It’s also being willfully ignorant of the market. [url<]https://techreport.com/news/31313/report-pc-gaming-hardware-market-expands-to-an-all-time-high[/url<]

          • Krogoth
          • 3 years ago

          Non-gaming usage still dominates the computing market though. It is where the big money is.

            • derFunkenstein
            • 3 years ago

            Seriously, are you reading TR for the latest in $600 Latitudes and Inspirons?

            • anotherengineer
            • 3 years ago

            lolz

            I think he was referring to things with higher margins such as
            [url<]http://www.cray.com/[/url<]

            • evilpaul
            • 3 years ago

            I thought he was being sarcastic. If he was serious that’s incredibly stupid.

        • Gasaraki
        • 3 years ago

        ROFL. This is a gem. I wonder why gaming consoles now all use PC CPUs?

      • Welch
      • 3 years ago

      30 seconds to be approximate………..

      • ultima_trev
      • 3 years ago

      As computerbase.de demonstrated, enabling High Performance profile in the Windows power settings (which I always do anyway since core parking=meh) essentially puts the 1800X on par with 7700K in gaming performance. So saying Ryzen sucks at gaming performance is admitting the 7700K sucks as well.

        • derFunkenstein
        • 3 years ago

        Well, I’m going by my own personal experience of turning the screws on a 1700, and my gaming performance has been more than fine on a 1440p display at 60Hz.

      • Litzner
      • 3 years ago

      I think the gaming performance anomalies will work themselves out as things move forward. If AMD gets a SATA controller with Intel-like performance, I will make the move to AMD from my current Devil’s Canyon i7. But currently, SATA performance is really holding their platform back.

    • EndlessWaves
    • 3 years ago

    [quote<]AMD's most concrete advice for Ryzen owners experiencing what they perceive as less-than-ideal performance is to click over to Windows' power settings and to enable the High Performance power plan, which the company says will reduce core-parking behavior and allow for faster frequency response when apps require it.[/quote<] Although if you reduce core parking, I suspect you also reduce the chance of the single-core turbo kicking in, resulting in lower performance for lightly threaded workloads.

    If it is an inter-CCX communication limitation, then I wonder how it’ll affect the hexacores. Will fewer cores mean more available bandwidth per core, or will only three cores per CCX mean this limitation shows up in more programs?

      • Redocbew
      • 3 years ago

      Good question. Right now it looks like it’s more an issue of latency than bandwidth, and with fewer cores per CCX I’d expect it to happen more often, but I could be wrong. Something to keep in mind if your use case is gaming.

    • chuckula
    • 3 years ago

    You know, when we launched this thing we weren’t expecting a kind of Fanboy Inquisition.

      • Redocbew
      • 3 years ago

      Hush heretic, or we’ll burn you.

      • DrDominodog51
      • 3 years ago

      Nobody ever expects the Spanish Inquisition!

        • KeillRandor
        • 3 years ago

        YES WE DID!

        • Neutronbeam
        • 3 years ago

        Bring out the comfy pillows!

    • JosiahBradley
    • 3 years ago

    I just don’t get how two apps that can both use 8 threads show such different deltas between Ryzen and the 6900K. In workstation-style apps they are very comparable, but in highly threaded games like BF1 or Doom the performance is very different. This is odd because it shows the IPC and core count are clearly capable, but for some reason games just function differently??

    I already canceled my incoming gaming Ryzen build because it is a side-grade for me. However I am happy recommending the 1700(x) over any Intel CPU for workstation builds. It’s just an odd discrepancy.

      • brucethemoose
      • 3 years ago

      Higher thread interdependency? Ryzen looks like a 2P server on a chip to me, making that kind of workload a weak point.

      But that’s just speculation. The only people on the internet who would really know are engineers under NDA, and most of them would rather not get fired or sued.

      • guardianl
      • 3 years ago

      Ryzen’s cache hierarchy is weaker than Broadwell’s. It has equal or better execution resources, though. When the core stalls waiting for cache misses, it can work on the second thread. Hence the great SMT boost compared to Intel’s.

        • DancinJack
        • 3 years ago

        That’s….not really true either.

          • NoOne ButMe
          • 3 years ago

          Unless perhaps it’s held back by L1 bandwidth.. which as an “off” idea seems unlikely.

            • Klimax
            • 3 years ago

            Only the L2 is better than Intel’s. The L1 is much weaker, with only 16-byte ports (Haswell has 32-byte), and the L3 and interconnect are not that good either. It has a few more execution resources, but those are 128-bit only, and the schedulers are separate, which will torpedo mixed workloads. (A local maximum is not necessarily a global maximum.) And games often have those. And if games use AVX for data processing, then Ryzen is toast. (L1 + narrow execution resources = bad performance)

            As always, such questions have very complex answers and often require heavy profiling for certainty. Unfortunately there are not many ways to do that, and I’m not sure AMD has the same performance counters as Intel.
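
            To put the port-width claim above in rough numbers: a tiny sketch of peak L1 load bandwidth per cycle, using the per-port widths quoted in the comment. The two-load-ports-per-core figure is an assumption for this sketch, not something stated there.

```python
# Peak L1 data-cache load bandwidth per cycle. The per-port widths come
# from the comment above (16 B on Zen, 32 B on Haswell); two load ports
# per core is an assumption for this sketch.
def l1_load_bandwidth(load_ports, bytes_per_port):
    return load_ports * bytes_per_port

zen = l1_load_bandwidth(2, 16)      # 32 B/cycle
haswell = l1_load_bandwidth(2, 32)  # 64 B/cycle, e.g. two 256-bit AVX loads
print(zen, haswell)                 # Haswell moves 2x the bytes per cycle
```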

    • Tristan
    • 3 years ago

    But AMD fanboys still want Ryzen to demolish Intel, and are still searching for a way to unlock its ‘true speed’.

      • xeridea
      • 3 years ago

      It does demolish Intel on price/performance in almost every multithreaded application. It is a bit behind in some games, but the performance is good enough that there shouldn’t be any major hitches like you could get with the Bulldozer line. The eight-core Intel CPUs aren’t as fast as the 7700K either, so it isn’t really a drawback of Ryzen; it’s just a compromise made for packing eight cores into a reasonable power envelope.

        • DancinJack
        • 3 years ago

        Don’t let Tristan goad you.

    • south side sammy
    • 3 years ago

    I guess we can put the fanboyism and the fanaticism and the craziness aside now and agree it is what it is.

      • LostCat
      • 3 years ago

      You do know you’re on the internet, right?

        • south side sammy
        • 3 years ago

        Yeah, cool huh?

    • thedosbox
    • 3 years ago

    Heh, the comments on PC perspective’s stories about this are a wonder to behold. I’m amazed that they still allow anonymous comments without a verified email address.
