AMD crests Summit Ridge with Ryzen CPUs

As you read this, AMD’s New Horizon event will be kicking off to reveal more details about the company’s next generation of high-end CPUs for servers and desktops. We’re way ahead of you, though. We got an early sneak peek at Ryzen silicon last week at the AMD Tech Summit in Sonoma, California. Yes, the Zen name that has shepherded this high-end chip through its nascency is no more. Instead, shipping Zen parts for the desktop—formerly code-named Summit Ridge—will carry the name Ryzen. As with the name of the recent ReLive software update, you can pronounce Ryzen a couple of different ways, though AMD favored “rye-zen.” Like a phoenix, or something.

Before we get into some of the nitty-gritty of Ryzen, we should first take a look at some of the new details AMD is sharing about its baby. The company confirmed that the highest-end Ryzen part will have eight cores and 16 threads running at a base clock of 3.4 GHz. Those cores will have 4MB of L2 and 16MB of L3 cache to play with, and the whole package will have an impressive 95W TDP. AMD wasn’t ready to disclose boost clocks for Ryzen just yet, but it seemed confident that there was plenty of headroom in reserve.

We also got to see a sort of check-up on the health of Ryzen silicon. As it did during its preview event at IDF, AMD showed an eight-core, 16-thread Ryzen running a typical desktop workload—in this case, the Handbrake video-transcoding tool. This time, the company set up its Ryzen engineering sample to run at 3.4 GHz with no boost against an unhobbled Core i7-6900K. Recall that the last time AMD ran a head-to-head test like this, it was against an i7-6900K limited to 3 GHz. It’s also fun to note that the i7-6900K is a 140W-TDP CPU, even if TDP isn’t a universal or cross-comparable figure.

Ryzen’s peak power consumption during Blender CPU rendering

Although we don’t know the precise details of either test system, the Ryzen PC finished AMD’s sample workload a couple seconds ahead of the i7-6900K. Perhaps more encouragingly, AMD showed some power-draw numbers for this Ryzen sample under a full Blender load, and they were about on par with those of the Broadwell-E chip. That performance suggests that Ryzen’s speed won’t come with a high power bill attached, and that’s heartening news.

AMD also revealed some interesting details about the under-the-hood features of Ryzen. Like other recent AMD chips, Ryzen CPUs will have a network of thermal and voltage sensors scattered across the die that provide a central processor with real-time information about the chip’s operating conditions. Both Bristol Ridge APUs and Polaris GPUs already came with these sensor networks on board, but for easy reference, AMD now calls this network of monitoring hardware “SenseMI.”

SenseMI will let a given Ryzen chip run at its optimal point on the dynamic-voltage-and-frequency-scaling curve instead of baking in a predetermined safety margin that doesn’t account for chip-to-chip variation. That adaptive tech could allow a chip to run at a given frequency with less voltage, improving efficiency—a feature that AMD will now call “Pure Power.” SenseMI could also let a given chip extract its full potential frequency headroom when it dials in boost clocks—something that AMD will refer to as “Precision Boost.”
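To make the idea concrete, here’s a toy sketch of adaptive voltage selection. Everything in it is invented for illustration (AMD hasn’t published Pure Power’s internals): the voltage-frequency table, the margin figures, and the function names are all hypothetical.

```python
# Hypothetical sketch of adaptive DVFS in the spirit of "Pure Power."
# A static design bakes one worst-case voltage margin into every chip;
# an adaptive one uses per-die telemetry to trim that margin.

NOMINAL_VF_CURVE = {3.0: 1.000, 3.2: 1.050, 3.4: 1.100}  # GHz -> volts (invented)
WORST_CASE_MARGIN_V = 0.050  # the guard band a one-size-fits-all design would add

def operating_voltage(freq_ghz, measured_margin_v):
    """measured_margin_v is this specific die's required guard band, derived
    from its sensor network; good silicon needs less than the worst case,
    so it runs the same frequency at a lower voltage."""
    return NOMINAL_VF_CURVE[freq_ghz] + min(measured_margin_v, WORST_CASE_MARGIN_V)

# A well-behaved die (20 mV margin) vs. the baked-in worst case (50 mV):
print(round(operating_voltage(3.4, 0.020), 3))  # 1.12 V instead of 1.15 V
```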

SenseMI also underpins an intriguing new feature called “Extended Frequency Range,” or XFR. SenseMI will monitor the effectiveness of the cooling solution that a builder installs on a Ryzen CPU using the Precision Boost feedback loop. Presumably, if one installs a Wraith cooler or similar heatsink, Ryzen chips will be able to hit their standard boost range. Put a monster tower cooler or a closed-loop liquid cooler atop a Ryzen CPU, though, and the chip can automatically exploit the extra thermal headroom to boost above its specified range. The more potent your cooling solution, the higher Precision Boost can push. Simple enough.
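That behavior maps onto a simple control loop: the more thermal headroom the sensors report, the higher the boost target is allowed to float. Here’s a toy model of the idea, with every number invented for illustration rather than taken from AMD:

```python
# Toy model of the XFR idea: the boost clock scales with measured thermal
# headroom instead of stopping at a fixed specified range. All figures are
# hypothetical; AMD hasn't published the actual policy.

SPEC_BOOST_MHZ = 3700       # hypothetical top of the standard boost range
TEMP_LIMIT_C = 95           # hypothetical temperature limit
XFR_MHZ_PER_DEGREE = 6      # hypothetical extra boost per degree of headroom

def boost_target(die_temp_c):
    headroom_c = max(0, TEMP_LIMIT_C - die_temp_c)
    # Better cooling -> lower die temperature -> higher boost target,
    # up to some assumed hard ceiling above the specified range.
    return min(SPEC_BOOST_MHZ + headroom_c * XFR_MHZ_PER_DEGREE,
               SPEC_BOOST_MHZ + 300)

print(boost_target(80))  # modest air cooler: 3790 MHz
print(boost_target(45))  # big liquid loop: 4000 MHz (at the assumed ceiling)
```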

In an unusual step for a modern high-performance CPU architecture, AMD also discussed some intimate details of the Ryzen branch predictor. The company says it’s using a neural-network-powered prediction algorithm in its latest CPUs. While that description may sound like marketing fluff—effective branch predictors are already learning systems, and neural networks are a hot topic right now—there may be more to it than latching onto a trendy term. At the Hot Chips conference earlier this year, AMD senior fellow Mike Clark told The Register that Ryzen uses a hashed-perceptron algorithm. While a perceptron may be a basic neural network, it’s still a neural network.
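For the curious, the basic perceptron predictor from the academic literature is simple enough to sketch in a few lines of Python. This is an illustration of that published scheme (Jiménez and Lin’s work is the usual reference), not AMD’s disclosed design; the table size, history length, index hash, and threshold below are arbitrary textbook choices.

```python
# Minimal perceptron branch predictor in the spirit of Jimenez & Lin.

HISTORY_LEN = 16    # bits of global branch history
TABLE_SIZE = 1024   # number of perceptrons
THRESHOLD = int(1.93 * HISTORY_LEN + 14)  # training threshold from the paper

# One perceptron per entry: a bias weight plus one weight per history bit.
weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
history = [1] * HISTORY_LEN  # +1 = taken, -1 = not taken, newest first

def predict(pc):
    """Dot the branch's weights with recent history; taken if the sum >= 0."""
    w = weights[pc % TABLE_SIZE]  # simple modulo stands in for a real hash
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y >= 0, y

def update(pc, taken, y):
    """Train on a misprediction or low confidence, then shift the history."""
    outcome = 1 if taken else -1
    w = weights[pc % TABLE_SIZE]
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] += outcome
        for i, hi in enumerate(history):
            w[i + 1] += outcome * hi
    history.insert(0, outcome)
    history.pop()

# Usage: predict first, then train with the actual outcome.
guess, confidence = predict(0x400123)
update(0x400123, taken=True, y=confidence)
```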

A quick Google suggests the idea of a perceptron-powered branch predictor is nothing new in chip design, but those types of predictors do appear to deliver extremely accurate predictions. That’s good news for any CPU. My conversations with AMD employees suggest we’ll learn more about this topic in future briefings, so we can probably stand down with the pitchforks for the moment. AMD also touts Zen’s “smart” data prefetcher, although the company didn’t give us any hints as to what it’s doing to improve this critical component of CPU performance. I suppose we’ll need to wait for further briefings on that, as well.

If Ryzen can deliver on these promises, AMD thinks it has the potential to surf on some favorable trends in the world of gaming PCs. The company projects that the market for gaming hardware is in the midst of a 25% growth spurt from 2015 to 2018, and it expects the market for VR PCs specifically to grow from fewer than a million systems this year to over 10 million in 2020. The growing popularity of eSports titles like Dota 2 and League of Legends, along with the exploding popularity of Twitch streaming, suggests that new and existing gamers could be looking to upgrade to some new hardware. If those PCs are built around Ryzen CPUs and Radeon graphics cards, AMD could enjoy a much-needed shot in the arm for revenue growth.

Even though we have a little bit of a wait left before we can get our hands on Ryzen hardware, AMD continues to give us reasons for optimism regarding this CPU family and its performance. The company’s demonstrations last week showed that Ryzen parts will most likely be competitive with Broadwell-E chips from both a performance and a performance-per-watt standpoint. The fact that AMD achieved that performance with engineering samples running at only their 3.4 GHz base clock suggests there may be even more performance yet to be tapped from these chips in less-multithreaded workloads, too. Features like XFR promise perhaps even greater performance rewards for enthusiasts who plan on using Ryzen with potent coolers—all without the headache of manual overclocking. We’re eager to see just how all these promises play out when Ryzen CPUs debut sometime in the first quarter of next year.

Comments closed
    • anotherengineer
    • 3 years ago

    Over double the posts of the Skaby Lake review!!!!!!!!!!!!!!!

    • Thbbft
    • 3 years ago

    While Ryzen + Vega @14nm gets AMD back in the game, Ryzen 2.0 + Navi @7nm is where the real action takes place.

    As a brand-new architecture, Ryzen has lots of performance upside that Intel doesn’t have, which AMD will exploit to exceed Intel’s performance over the next few years. Navi is a single chip designed to exploit the fattest part of the 7nm fabrication yield curve and then scale up using that chip.

    That will be a very potent cost/performance combination in future market share battles with Intel and Nvidia.

    • mcarson09
    • 3 years ago

    Real world independent tests please! AMD has used marketing to BS before. Beating an Intel 6600K would not be as good as beating a 5960X or 6950X. The AMD fanboys are strong again, but I still remember their past failures. By the way, where’s AMD’s 10-core CPU?

      • Mr Bill
      • 3 years ago

      Apparently, AMD did not bother with a 10-core. [url=https://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors#Opteron_8400-series_.22Istanbul.22_.2845_nm.29]But you can see the 4, 8, 12, and 16 core 8300 series if you scroll to the bottom.[/url]

      • Ninjitsu
      • 3 years ago

      [quote]By the way, where's AMD's 10-core CPU?[/quote] One thing at a time…

    • Shobai
    • 3 years ago

    Well, I’ve got a brand new CPU to show,
    D’you wanna come and see?
    Compares pretty well in Blender, don’t you know,
    Configured “properly”.

    • Mr Bill
    • 3 years ago

    How do they rise up?

    All the little angels rise up, RyZen up.
    All the little angels RyZen up high!
    How do they rise up, rise up, rise up?
    How do they rise up, RyZen up high?
    They rise heads up, heads up, heads up, they RyZen heads up, heads up high!

    This is repeated with hands, arms, knees, and finally arse up.

    Apologies to Terry P.

    • alphadogg
    • 3 years ago

    Ryzen’s up, soon on the street
    Did their time, took their chances
    Went the distance, now they’re back on their feet
    Just a company and its will to survive…

    • KaosMike
    • 3 years ago

    Redid the test with 100, 150, and 200 sample rates, with a newly downloaded file; not sure what my original download was set to at first.

    100 samples = just under 69s
    150 samples = 101s
    200 samples = 134s

    I guess the original test download must have been set at 200, as it was roughly 135 seconds.

    specs:

    i7 960 @ 3.2ghz
    16gb ram
    hd7950
    sabertooth x58

    so not as impressive then… unless I’m missing something 🙂

      • Waco
      • 3 years ago

      I get 55 seconds with my 6700K at stock. That’s nearly 100% faster than your 960; I wouldn’t say progress has been standing still. 😛

      EDIT: The Titan X (Maxwell) does it in 11 seconds though. Ha.

        • chuckula
        • 3 years ago

        Rendering is the type of task where GPUs excel.

        If Intel were actually interested in making all that IGP real estate on its consumer products useful, they’d figure out how to get both the CPU and IGP working in tandem for workloads like rendering. I guess OpenCL can use the IGP, but really getting both the CPU and the GPU working together would be a real step up.

          • the
          • 3 years ago

          From the looks of [url=https://software.intel.com/en-us/articles/opencl-drivers]this[/url], there are OpenCL drivers available for both the CPU and GPU. The OpenCL application does have to be coded to utilize more than one piece of hardware simultaneously, but it is not inherently forbidden.

      • mesyn191
      • 3 years ago

      You must also think the i7 7700K unimpressive too then.

      At the same clocks it won’t do much better than Zen or Broadwell.

      [url]https://www.pugetsystems.com/labs/articles/Haswell-vs-Skylake-S-i7-4790K-vs-i7-6700K-641/#Conclusion[/url]

      Skylake has the same IPC as Kaby Lake, so the scores won't budge much at all at the same clocks as the benches shown in the link above. Now, Kaby Lake will have higher default clocks than Skylake or Zen, which isn't nothing, but if you're overclocking (and if you visit an enthusiast site like TR, you almost certainly are) you won't care about that.

        • KaosMike
        • 3 years ago

        I suppose higher-wattage versions will come that will squeeze more out of the Zen ones.

        On the Intel side, maybe they are already reaching thermal limits in the latest incarnations, pushing the chips to the max to match the perf/efficiency of what’s coming down the pipe from AMD.

        Well, that’s just my take; it’s all down to price now anyway 🙂

    • evilpaul
    • 3 years ago

    So, assuming from what we’ve seen so far that Ryzen is a bit faster than a 6900K, does it have the platform to back it up? With half as many RAM channels, I’d think the memory bandwidth and maximum capacity are going to be a lot lower. Does DDR4-3200 run reliably with four modules on X99 boards? If the AMD chips can handle fast DDR4, they can reduce the bandwidth difference quite a bit. But people needing 16 threads probably can use >64GB of RAM pretty often.

    If the chip we’re seeing demoed ends up costing $500-600, which I don’t think is unreasonable, then it’s nice if the CPU performance can keep up with Intel for the first time in a decade, but how many people care? Is the reason enthusiasts aren’t snapping up 6900Ks really just the price?

      • mesyn191
      • 3 years ago

      We won’t know until the mobos have been independently reviewed if the platform holds up well to Intel’s.

      Based on leaked details so far they’re sounding pretty close if not identical to Intel’s 200 series platform in terms of features though.

      Half the RAM channels won’t matter much for any sort of non-HPC/enterprise workload, even on 8C/16T systems. So Zen will lose the memory-bandwidth synthetic tests, but those don’t matter at all anyway; real-world application tests are what matter. Memory capacity won’t be limited either: 2-4 slots is all anyone uses in the consumer space anyway, what with 8-16GB+ DIMMs being affordable or even cheap now.

      Realistically, thread and core counts have nothing to do with RAM usage; what matters is the workload you run. The desktop Zens will be able to support 64GB+ if need be, but if you really need that much RAM, the server versions are what you should be buying, and you’re probably running a server rather than gaming or running Handbrake occasionally. Right now, for common desktop workloads, 8GB is OK, 16GB is good (the sweet spot in terms of bang vs. buck), and 32GB is more than enough and verging on overkill. 64GB of RAM in a desktop is overkill to the point of nearly being a waste of money, IMO, unless you have some specific need that actually uses it.

      I’d say the average person would be better served by a silly-clocked 4000MHz+ 32GB kit than a 3200MHz 64GB kit, if it came down to it. But realistically, those dollars would be better spent on a 3200MHz 32GB kit, with the rest put toward a better GPU (a 1080 instead of a 1070), more SSD capacity (1-2TB instead of 512GB-1TB), or a better monitor (more Hz, bigger screen, etc.). Any one of those would get you a better computing experience than 64GB of RAM would at this point, or even a few years down the road.

      Tons of enthusiasts have held back from upgrading because they don’t think Intel has been offering much value at the prices it wants to charge. I’m one of them; I still have an i7 2600K. For the desktop market at large, which is shrinking, Zen will offer a decent upgrade in performance at a better price than Intel-based systems. The average person doesn’t care about the CPU, but they do care about the user experience and price.

        • Kougar
        • 3 years ago

        We won’t, but AMD will have to deliver both a powerful CPU and a strong, capable platform. Enthusiasts are not going to tolerate the USB and SATA performance issues that plagued AMD ten years ago. It is a good sign, though, that they are promising good M.2 and peripheral support.

        It will be interesting to see just how many PCIe lanes are included on RyZen, as those will factor into the size and cost of the chip.

      • Zizy
      • 3 years ago

      You need X GB of RAM to run a simulation of a given size. Having twice the CPU performance would let you run stuff in half the time, but it wouldn’t change the memory requirements at all.
      If I had access to, say, 1k cores, I would probably still need only about 32GB of RAM.

      And well, if you do need tons of RAM, AMD will have MCM solutions that cover those.

      • Krogoth
      • 3 years ago

      The quad-channel memory on Socket 2011 chips is really meant for multi-socket setups though.

        • mesyn191
        • 3 years ago

        Yup. Outside of boosting IGP performance, quad-channel memory hasn’t really made much sense and has mostly been a waste of money.

        Unfortunately, quad-channel memory was never used in cost-effective systems, low-end gaming laptops, or PCs where IGP performance really matters, so it’s really only been a waste of money so far.

        That will change eventually, of course, as the software changes, core counts increase further, main memory runs out of clock-speed increases, and L3/L2 caches start to get die-limited in size due to no more process shrinks, but I believe we’ve still got a ways to go before all that really hits home.

        10C/20T-12C/24T CPUs seem to be about where dual-channel memory starts to become a big issue holding back performance, from what I understand, but the context for that was server-style workloads, and that may not hold true for desktop workloads.

          • the
          • 3 years ago

          In the server space, quad-channel memory is as much about increasing memory capacity as it is about memory bandwidth.

          As for the limits of dual-channel memory: Xeon D has dual channels and goes up to 16 cores. At such high core counts, it does appear to be slightly bandwidth-constrained. However, the effects are skewed, as the larger core-count chips have larger L3 caches to go with them.

          I also wouldn’t worry about L3 cache sizes too much. Going forward techniques like interposers/EMIB can be used to scale past die size/yield issues.

            • mesyn191
            • 3 years ago

            Sure but we’re talking about for desktop or HEDT products since that is what Zen and Broadwell-E are. Not servers.

            Servers you can go hog wild and throw 8 channels of RAM and it still might not be enough bandwidth or RAM slots for capacity.

            Yeah, multi-die stuff will help when process development slows or stops, but I think it’s going to be too expensive for a while yet to talk about seriously outside of server chips.

    • ronch
    • 3 years ago

    Ok, here’s a crazy or not-so-crazy thought:

    Could Intel actually be working on a completely new core as we speak?

      • K-L-Waster
      • 3 years ago

      Are they working on something new? I expect so: I expect they are always working on something new for the next “tick” (even though they are now on a 3 stage cadence instead of the old Tick Tock, the Tick stage does still come around).

      Are they working on something completely new *that will be released in the very near future?* I wouldn’t hold your breath. We know about Kaby Lake and Cannon Lake coming after that, and presumably there will then be a “Tock” and a “Polish” following on after that.

      Are they working on something *radically crazily game changing different?* Answering that, my friend, would require going off into wild speculation land.

      • tipoo
      • 3 years ago

      Their roadmaps are laid out years in advance, so in general? Yes. As a ground-up turnaround to this announcement? No…

      And AMD’s new ground-up design brings it closer to what Intel’s already were, so there’s some evolutionary convergence going on. I’m not sure Intel feels the need to revamp its architecture rather than keep going with evolutionary iterations.

        • mesyn191
        • 3 years ago

        Cannonlake is coming in late ’17, but they’re targeting low-power/clockspeed/cost platforms like tablets, so it’s probably going to be more of the same we’ve seen from Intel for the last few years: very little to no per-clock CPU performance improvement in favor of better perf/watt and maybe a better iGPU.

        Coffeelake is coming in mid ’18, but it looks to be mostly about adding a couple more cores to the top desktop product line and maybe a bit better perf/watt and/or iGPU for other desktop models.

        The biggest thing about any new Intel chips over the next couple of years at least appears to be the platform updates they’ll be doing, particularly for HEDT. Purley looks pretty interesting, but real expensive too.

        All that info is based on ‘leaked’ roadmaps that have been publicly available for a while now. It’s possible things could change, but it’s not looking too likely.

        Realistically, if Intel didn’t already start working on a new clean-sheet CPU design targeting performance improvements over perf/watt a couple of years ago (so 2014), they probably won’t have anything interesting performance-wise in 2018 or 2019 either.

        edit: ewps, replied to the wrong person, sorry. I’ll leave it since I think it’s still relevant.

      • the
      • 3 years ago

      The short answer is yes.

      The long answer is that we won’t see it until it beats their current offerings in performance per watt. Intel has been sitting on several IPC boosts that decrease overall performance/watt (i.e., energy consumption increases more than the performance boost).

      In the meantime, Intel does have a different Skylake core planned for the next Xeon E5. It’ll have 512 KB of L2 cache and AVX-512 support. I see this divergence between consumer and server designs continuing.

        • chuckula
        • 3 years ago

        Every “consumer” grade part that Intel has sold since 2011 is basically a notebook part first, with some of the SKUs being overclocked at the factory to produce products like the 2600K, 3770K, 4770K, 6700K, next year’s 7700K etc. etc.

        The Xeons are starting to diverge more from being simply larger versions of the consumer-grade products, but there is still a high degree of overlap going on.

      • Kougar
      • 3 years ago

      Yes, it’s called Coffee Lake. Because it’s the wake up call for Intel to redesign its uArch.

      Okay, probably not really, but it’s nice to think so.

        • Ninjitsu
        • 3 years ago

        Honestly, don’t expect a uArch redesign until 2020, when they can’t shrink this stuff further on silicon.

    • atari030
    • 3 years ago

    I see the 310 comments (thus far, and growing) as pretty reflective of how thirsty this market is for some competition. Hopefully Ryzen will quench that and we can actually see some AMD CPUs on the TR System Guide again. It will be an interesting several months coming up…

      • ronch
      • 3 years ago

      I was actually gonna comment about this when there were still 307 comments here. IIRC the original Bulldozer review here got 307 comments. That was a review. This article is just a preview. When RYZEN comes out be prepared to see a thousand comments there.

        • chuckula
        • 3 years ago

        Hype is more fun than reality.

          • Waco
          • 3 years ago

          It’s nice to be optimistic in the face of reality. 🙂

    • gmskking
    • 3 years ago

    4C/8T for $150.00. I just don’t believe it. Hope I am wrong.

      • Ninjitsu
      • 3 years ago

      what?!

      • Jigar
      • 3 years ago

      You are wrong, they are going to give that for free, after all they are a cheapo company.

      • ronch
      • 3 years ago

      Business is business. They may price a bit lower but they won’t price the way they did with BD. And Lisa did say that AMD wants to slowly ditch their image as the cheaper alternative.

      • Magic Hate Ball
      • 3 years ago

      Remember, that’s 4C/8T without an iGPU. Waaay less die space = cheaper to make.

      I think all of Intel’s offerings under 6C/12T in the desktop space come with an iGPU that is a waste of space in many gamer rigs. That’s die space you paid to be manufactured!

    • ronch
    • 3 years ago

    I just remembered, today marks the 4th year since I bought my FX-8350. Four years, man. I still have the nice tin box, which is practically in the same good condition I got it in. Yeah we know how BD almost killed AMD but hey, my FX runs my old games really fast so I suppose it’s pretty amazing. I should start saving up for Zen but I imagine I’ll be sticking with my FX indefinitely.

      • gmskking
      • 3 years ago

      Didn’t realize it was that old. My buddy just bought one for a new build. I gotta give him some sh*t now that he bought a 4 year old processor.

    • AnotherReader
    • 3 years ago

    Real World Technologies has a [url=http://www.realworldtech.com/forum/?threadid=163466&curpostid=163466]thread discussing RyZen[/url].

    • rechicero
    • 3 years ago

    It seems promising, but I’d say it’s better to be cautious. I still think a competitive medium or medium-high-end chip (in perf and perf/watt) should be considered a huge win for AMD.

    We’ll see when it’s launched. Of course these numbers are good, but it’s better not to hype too much.

    • Ninjitsu
    • 3 years ago

    Still sorta curious why they didn’t go with something more standard like Cinebench, but I guess this was kinda cooler and got more people talking about it (or at least, the same people talking about it more).

    • Solean
    • 3 years ago

    I’m rollin’ with a X58 chipset.

    MB: Gigabyte UD5 X58
    CPU: i7 980X overclocked to 3.83 GHz with HT and Turbo Boost on
    GPU: nGreedia GTX 980Ti
    RAM: 24GB Corsair 1600 MHz DDR3
    SSD: Samsung 830 Pro and a PCIe Kingston HyperX
    Monitor: 2560x1440 @ 60Hz

    I get 60 FPS in Witcher 3 with all on ultra, except shadows on high and no skunkworks.
    I get between 47-60 FPS in Hitman 2016, DX12 mode with all on Ultra; on High in most scenes I’m at 60fps.

    Before yesterday, my fingers were twitching to click “Buy now” on Amazon for an i7 6900K with an Asus ROG MB and DDR4 RAM. Price: 2400 euros (EK monoblock for the CPU+MB included).

    Think I’ll wait.

    If in April 2017 I can buy a Zen that gives me 90% of the performance of an i7 6900K in games, and a Vega that gives me 90% of the performance of a GTX 1080 Ti, for 2000 euros maximum, I’ll buy instantly.

    • cpucrust
    • 3 years ago

    I’d like to order some red ryzens with my radeons please

    • gmskking
    • 3 years ago

    Waiting for benchmarks, but looks good so far.

    • travbrad
    • 3 years ago

    I will wait for more independent results running a wider variety of tests, but it looks like the best thing AMD has had in many years, by quite a wide margin. It helps that they are up against what is now a seemingly stationary competitor as well. Early results from Kaby Lake show it has exactly the same IPC as Skylake; Kaby just has better binning/optimization for power use and maybe can go a couple hundred MHz higher than Skylake.

    I think it’s still going to be hard to justify for a PC meant only for gaming compared to Kaby/Skylake, unless they have some nice budget offerings. At least their IPC isn’t an embarrassment anymore though and it could be a good option for people with some heavily multi-threaded workloads or a mix of gaming and other workloads.

    • chuckula
    • 3 years ago

    Regarding the Blender benchmark, the link below has some interesting information about the settings that were used.

    In brief, do [b]NOT[/b] simply download the render and run it with a default configuration of Blender if you want an apples-to-apples comparison with the settings you saw on stage at that event.

    [url]https://forums.anandtech.com/threads/post-your-ryzen-blender-demo-scores.2494600/[/url]

      • Ninjitsu
      • 3 years ago

      This explains a lot. If it scales linearly they were probably using 144 samples.

      EDIT: Can confirm it appears to scale linearly, 100 samples finished in exactly half the time of 200. So they were using ~140 samples most probably (on stage with ~34s, not the press thing where they used 100 samples).

        • Mr Bill
        • 3 years ago

        100 samples, according to the [url=https://forums.anandtech.com/threads/first-summit-ridge-zen-benchmarks.2482739/page-116#post-38629977]thread at Anandtech[/url].

          • Ninjitsu
          • 3 years ago

          Yeah, it’s ~24s with 100 samples, which was shown to the press here: [url]https://youtu.be/7yxSFmEOkrA[/url]

          ~34s on stage here with unknown* samples: [url]https://youtu.be/4DEfj2MRLtA?t=1933[/url]

          *136-144, depending on the exact number of seconds.
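For reference, the linear-scaling arithmetic behind that estimate is straightforward; a back-of-the-envelope check using only the timings quoted in this thread:

```python
# Back-of-the-envelope check of the sample-count estimate above, assuming
# Blender's render time scales linearly with the Cycles sample count
# (which the 100- vs. 200-sample timings earlier in the thread support).
press_samples, press_seconds = 100, 24  # press-demo figures quoted above
stage_seconds = 34                      # approximate on-stage run time

est_samples = press_samples * stage_seconds / press_seconds
print(round(est_samples))               # ~142, inside the 136-144 window
```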

    • chuckula
    • 3 years ago

    Hrmm… once upon a time in December (of 2015) [url=https://techreport.com/forums/viewtopic.php?f=2&t=116871&hilit=Zen+2017+confirms#p1284646]when Chuckula accurately predicted that Zen wouldn't be out until 2017[/url], he wrote this:

    [quote]4. If Zen really is the miracle chip that AMD is promising, then we will see official or quasi-official "leaks" of engineering samples pop up around this time next year to build hype. AMD doesn't really have anything to lose in its existing product portfolio anyway and it might put off a few Kaby Lake purchases. If it is December 2016 and there are either no real leaks or the only leaks are obviously not AMD-approved and are not showing any miracles? Consider that to be AMD admitting that Zen isn't setting the world on fire.[/quote]

    Given what we've seen, I'm going with Zen being in the upper half of that predicted performance spectrum, but not right at the top. It's definitely better than just some vague marketing promises, but at the same time AMD is keeping a very tight lid on letting third parties see what Zen can do before launch.

    Under ordinary circumstances that would be OK, but in this case: 1. AMD has nothing to lose; even they have thrown their old stuff under the bus. 2. Even if Kaby Lake isn't that exciting, it's still going to be on sale before Zen. Why not spoil the launch with independent reviews before Kaby Lake even hits the market?

      • HisDivineOrder
      • 3 years ago

      Reasons include:

      1) They laid off their Marketing department and that explains why they don’t understand the fundamentals of pitching their products ahead of time without big snafus or missed opportunities.

      2) They are hiding something (performance-wise) they don’t want to get out there, so they’ll build up hype ahead of launch and hope they sell enough product that the diehards who spent their money won’t own up to the fact that they’re just painting a fence white.

      3) Their chip is so mindblowing that they want it to emerge from nowhere and wreak havoc upon Intel all across its lines.

      Given AMD’s history, I do not believe 3 is going to happen. I think 1 or 2 are infinitely more likely. I hope it’s 1. That means the chip could at least be competitive.

      • flip-mode
      • 3 years ago

      Dude, you’re breaking your arm patting yourself on the back for first grade level thinking skills. Everybody – EVERYBODY – knows that when AMD first adds a new CPU product to the road map, you AUTOMATICALLY add AT LEAST 12 months to the schedule. That’s conservative. 18 to 24 months is probably more accurate. But yeah, great job, man; very impressive. You amaze me.

    • vargis14
    • 3 years ago

    BTW, I have to add that I am soooo glad they are fabbing it on 16nm instead of GF’s disappointing 14nm. I am still stunned by how low AMD’s 470/480 core clocks are compared to NV’s. I know design could be a big factor, but I don’t think so… Just to prove it, I would absolutely love to see GF make a GTX 1080, just to see what kind of clock speeds it would attain on GF’s 14nm process. At least then we would know with absolute certainty that GF’s 14nm process sux LOL 🙂 Or have TSMC make a 480, one or the other… yes, another edit.

      • AnotherReader
      • 3 years ago

      Are you sure about it being fabbed by TSMC? The TSMC Pascals seem to clock about 10% higher than the 1050 Ti, though I think we would need a bigger sample to be definitive. Pascal reaches high clocks because of its design; even on Samsung’s 14nm, the 1050 Ti can overclock to [url=http://www.hardocp.com/article/2016/10/25/msi_geforce_gtx_1050_ti_gaming_x_4g_review/4]1900 MHz[/url]. That 10% can be a big deal for CPUs.

    • vargis14
    • 3 years ago

    Just going to use the wait, wait more, and see approach… My 2600K can still game alongside an i7 6700K, and two GTX 1080s still will not be bottlenecked much at all, even in SLI; PCIe 2.0 at 8x is underrated. It can definitely run right with an i7 6700K with a 1080 or new Titan card at 3400MHz, let alone at 4848MHz. If it is competitively priced and performs on par with my 2600K, I might consider it.

    But AMD has let me down for so long since my last decent AMD CPU, a 4800 X2 at 3.1GHz… before that it was an FX-53, a 940-pin Opteron rebadge, at 2.8GHz, and before that, on the same K7S5A MB, I ran a 900 T-bird, then a 1400 T-bird, then an XP 2000 CPU. I also ran 9700 Pros, X800 XTs, X850 XTs, and my biggest mistake was the CrossFire X1800 XTs with the master and slave cards; a month later the X1900s were out, and like a month after that the X1950s. Back in those days tech was changing on a monthly basis.

    I am so glad now that I got my 2600K when I did… the only other CPUs that come close in lifespan-to-performance are the socket 1366 CPUs like the 970 and 980X 6-cores, but when it came to gaming, the 2600K was better, with better IPC and overclocking, IMHO… but I do hope it is very competitive. Maybe it will make Intel drop prices, since they have a much lower cost per tiny chip, especially with the i7 4C/8T CPUs.

      • chuckula
      • 3 years ago

      Look, if AMD’s DOTA2 demo taught us anything, it’s that you absolutely CANNOT GAME with a quad core Intel part. Absolutely not possible.

        • vargis14
        • 3 years ago

        No experience with Dota2, so I cannot reply to that statement… BTW, gave you a thumbs up since I did not know that… but I’ll have to see it to believe it 🙂

          • chuckula
          • 3 years ago

          It was kind of a joke based on a rather ridiculous setup AMD used yesterday. For perspective, you can typically play DOTA2 easily with practically any non-Atom or Kabini part made in the last 5 years or so.

            • RAGEPRO
            • 3 years ago

            Heh, Kabini plays DOTA2 just fine on its IGP even.

            • mesyn191
            • 3 years ago

            They were demoing DOTA2 + streaming, though, not just playing DOTA2.

            It’s still a bit silly, since anyone streaming while gaming would’ve used Intel’s built-in encoder instead of just the CPU, but it’s not as ridiculous as you and others are making it out to be.

            At a minimum, it shows that you can stream + game on a Zen with just the CPU alone and have no degradation in the gameplay experience.

        • vargis14
        • 3 years ago

        I really hope they are good enough that Intel will add at least 256MB of Crystal Well memory to most of their CPU line… it really does improve basic CPU performance, and I do not mean graphics. A GB or more would be very nice to have sitting on your CPU as a huge cache; they definitely have plenty of space on the substrate under the IHS. Also, I wish they would start soldering the CPUs to the IHS again… that would allow much better overclocking, since it is a direct connection to the IHS, and the tiny bit of extra solder down the side of the actual CPU silicon does increase its heat transfer dramatically. It’s just physics: the more area touched by a conductive metal, the better and more uniformly it will cool. Sorry, another edit… wish you could order a CPU with extra solder down the whole side of the CPU silicon.
        Like ordering a cheesesteak with extra onions and cheese, it is just better… that’s a fact 🙂 Bad analogy, but you’re smart enough to get what I am saying >:)

        • tipoo
        • 3 years ago

        I didn’t gather that’s what they were saying; it was that both could game, but gaming while also recording and uploading the stream in full HD choked one up. It still has a scent of bull about it, but they certainly weren’t saying a quad-core Intel can’t run DOTA.

        • Shobai
        • 3 years ago

        They explicitly made the point that the test was play + stream, and the presenter specifically called out the fact that the gameplay on the Intel system was at least as smooth as the AMD system’s, although the stream was choppy [for whatever, possibly contrived, reason].

        [url=https://www.youtube.com/watch?v=X9NNOqzTbKI]The livestream[/url]

        Relevant section starts at 30:50. 6900K system at 32:20, 6700K at 32:35. Relevant statement is around 32:47, presenter's callout at 32:50. I think you're barking up the wrong tree.

        [edit: originally posted by phone, have fixed URL]

          • freebird
          • 3 years ago

          Details, details… Chuckula never lets the details or fact get in the way of his “jokes” about AMD or how they ran a demo… 🙂

          • chuckula
          • 3 years ago

          Meh… Nvidia specifically stated that the “10X” improvement in the P100 was for training of deep convolutional neural networks.

          Didn’t prevent people from saying “OMG JEN-HSUN LIED MY GAMES AREN’T 10X FASTAR!!”

          B.S. from corporate spin doctors is B.S. and a whole bunch of “OMG READ THE FINE PRINT THEY TECHNICALLY DIDN’T LIE!!” while you know damn well they are being intentionally deceptive while not theoretically lying doesn’t change that.

            • Shobai
            • 3 years ago

            In this case, chuckula, this isn’t ‘people’: it’s /you/. Be better than that.

            [edit]

            I’m fine with the downvotes, chuckula, because I think that shows that I hit the nail on the head. Honestly, though, what do you think AMD should have done in their stream to explicitly state what they were showing?

            • cegras
            • 3 years ago

            Your triple downvote is a badge of honour – you poked chuckula and he leaked all over you. It’s not that hard to do, but you can join the rest of us on the other side where we simply take it as par for the course.

            • Waco
            • 3 years ago

            He’s poking at the fact that chuckula was intentionally being a bit obtuse for the sake of humor (since AMD was clearly being dishonest with the comparison) but acting like chuckula was being aggressive or disingenuous about it…

            It was easy to pick up, and I’m surprised anyone is giving him grief over it. The demo was utter BS; nobody would run the settings they were proposing [i]when even AMD just announced hardware encoding for streaming on Radeon GPUs[/i].

            • cegras
            • 3 years ago

            Nah, chuckula is running at windmills as usual. Because this:

            [quote<]Didn't prevent people from saying "OMG JEN-HSUN LIED MY GAMES AREN'T 10X FASTAR!!"[/quote<] Was totally all over TR right? (And if it was happening somewhere else, what's the point about complaining about it here?) [quote<]The demo was utter BS, nobody would run the settings they were proposing when even AMD just announced hardware encoding for streaming on Radeon GPUs.[/quote<] That's silly logic. Why run any demos when ASICs are available? It's a demo for showing that one chip is faster than the other at a certain task. Is it a great demo? That depends if Blender is a good showcase of what CPUs can do. It's certainly coded to take advantage of what they can do.

            • Waco
            • 3 years ago

            If that task is pointlessly wasteful, that’s where I draw the line. :shrug:

            • cegras
            • 3 years ago

            Aren’t there plenty of render farms out there that still use CPUs? I even found articles saying that Pixar has AMD processors.

            • Shobai
            • 3 years ago

            I’m going to have to agree with Cegras’ ‘windmills’ comment. If chuckula had wanted to point out, with humour or no, that AMD’s comparison was showing a workload that wouldn’t be seen in the real world then there are a myriad ways to do so.

            He didn’t though. He called them out as attempting to show that a user couldn’t game on a 6700k. Given the lengths that AMD went to to ensure that no viewer could make that mistake, what else could you possibly call his behaviour but disingenuous?

            Again, I don’t mind if he or others downvotes this comment, but let’s call a spade a spade.

            • Waco
            • 3 years ago

            I guess having a sense of humor is overrated.

            • Shobai
            • 3 years ago

            Not by me =) Where does that leave us?

            • Waco
            • 3 years ago

            Don’t take him seriously and it’s a sarcastic jab at AMD marketing, no?

            • Shobai
            • 3 years ago

            If it was sarcasm, his reply to me would have been along ‘bro, that’s what I was saying. Sup?’ lines. Instead, it’s ‘wah, they did it to me first!’, then calling out ‘marketing BS’ – which appears to specifically be what AMD attempted to avoid during their livestream.

            So, if it’s not sarcasm, is it still humour?

            • Waco
            • 3 years ago

            If you think AMD was avoiding BS then I’m not sure what to tell you. The MOBA demo was exactly that.

            • Shobai
            • 3 years ago

            OK, I think I see the problem here: you’re saying that AMD’s test is ‘BS’ because they used an encoding setup that doesn’t reflect how users would do the task in real life. I am not necessarily disagreeing with you; in fact I may even agree with you. I have no dog in that fight. That’s not what chuckula is saying, though.

            chuckula is saying that AMD is telling people they can’t play DOTA2, of all things, on a 6700k. That is wrong, and my reply to him shows this.

            The DOTA2 demo was not ‘exactly that’, because you appear to be arguing a different point than what chuckula argued. Does that make sense?

            • Waco
            • 3 years ago

            I don’t know what to tell you. To me, he’s clearly exaggerating to be humorous and poke fun at how stupid the demo was.

            • Shobai
            • 3 years ago

            I can see that, and it’s fine. I appreciate you trying.

            All I am trying to say is that if he wanted to raise an issue with the composition of the test, like you did below, then he could have done that. Instead he chose to put words in someone else’s mouth. When I gave him the opportunity to clarify his intent his reply showed that he wasn’t interested in being reasonable. That’s fine, it’s his prerogative, but I tried to give him the benefit of the doubt.

            • rechicero
            • 3 years ago

            In this same thread you can read how somebody (ultima_trev at #80) asks for running games at 640×480… with a high-end CF/SLI setup. And nobody said anything about that being bull.

            And although I can imagine ppl using software encoding instead of hardware encoding (more quality per MB could be interesting for ppl with data caps), I cannot imagine a single scenario where anybody would play at 640×480 with a high-end SLI or CF setup.

            So let’s be real: it’s not BS. It’s a good test to show how the CPU works in this scenario. That’s all.

        • Waco
        • 3 years ago

        It’s amazing how many people can’t figure out what sarcasm is.

        I’m also still amazed people fall for such contrived demos when AMD themselves just launched a GPU-accelerated streaming driver…

    • AnotherReader
    • 3 years ago

    We have known it for a while now, but it is good to see that RyZen has proper SMT. Intel and IBM were right and AMD was wrong. I don’t know why they didn’t add SMT to their processors earlier; it would have helped Barcelona stay more competitive with Nehalem and Westmere for server workloads.

      • mesyn191
      • 3 years ago

      SMT isn’t something you just cut n’ paste into an existing architecture. You have to redesign the whole chip to do it. And it’s pretty hard to do, apparently, which was part of the reason AMD was trying to avoid doing it.

        • AnotherReader
        • 3 years ago

        I am aware that it isn’t a cut-and-paste; it has to be part of the initial requirements. What I am wondering is why it wasn’t added to, say, K10, when there was a lot of evidence that SMT was a big win for quite a few workloads. Of course, adding it to K10 would have required increasing the associativity of the L1 caches from 2-way to at least 4-way.

        Edit: The first implementation of SMT was Northwood, the 130 nm Pentium 4, though Willamette had SMT disabled because validation hadn’t been completed. The [url=http://www.realworldtech.com/alpha-ev8-wider/]Alpha EV8[/url] would have been the first, but it was cancelled. It would also have been the first with 4-way SMT.

          • mesyn191
          • 3 years ago

          Like I said before: they’d have to redesign the CPU. It requires lots more work than just increasing L1 cache associativity. They’d have to redesign the whole “front end” to make it work.

          It takes 3-5yr and a heap of cash to do that. You might as well wonder why they didn’t add a trace cache or slap in a much better FPU.

        • vargis14
        • 3 years ago

        Just being a smart ass, but technically you can… copy the schematic from Intel, paste it onto an existing AMD CPU, add some magic golden fairy dust, and voilà: SMT.
        J/K

          • ronch
          • 3 years ago

          Yeah but that wouldn’t run Crysis.

      • blastdoor
      • 3 years ago

      Maybe….

      But keep in mind that there is a distinction between an idea and implementation. The general idea of bulldozer might have been good even if the implementation was poor. There really are situations where total multithreaded throughput is more important than single threaded performance. That is — situations where “more cores” really might be the best approach.

        • AnotherReader
        • 3 years ago

        I agree that the general idea has merit. It had potential as a server processor, but was hobbled by its cache hierarchy. It would have made sense in a world where Bulldozer was reserved for servers and client PCs were served by another processor with a focus on single threaded performance.

          • blastdoor
          • 3 years ago

          Or perhaps it would make sense to have a heterogeneous core setup, with a couple of big Core-style cores surrounded by a bunch of little Xeon Phi style cores.

      • ronch
      • 3 years ago

      I remember when AMD put out this video on BD and Bobcat back in Aug. 2010 touting the advantages of BD’s approach to multi-threading compared to SMT. Well, look who’s doing SMT now. 🙂

    • rudimentary_lathe
    • 3 years ago

    That Blender test vs. the out-of-the-box 6900K has me cautiously optimistic. It potentially says some very, very positive things about Ryzen single-thread performance. There’s also the possibility, however, of that particular workload being biased to the Ryzen architecture, so we’ll have to wait and see.

    I must say though that Lisa Su’s body language was great. She was very positive and confident. That could bode extremely well for Ryzen products.

    I just upgraded this year to a 4C/8T part, but I could be tempted to sell it for a 6C/12T or 8C/16T part if the price is right! Yay for competition!

      • vargis14
      • 3 years ago

      Curious about the clock speeds of the 6-core and 8-core models at first, and about their overclocking ability… heck, the 6-core model could be much more overclockable without 2 extra cores you have to hope are stable enough for any given overclock. Fewer cores means less chance of a bum core that will hold you back… wonder if you will be able to totally disable a crappy core on the 6- or 8-core models to get a much better/faster overclock.
      I am kinda sure we can do that now, but I’ve never attempted it, since my golden goose is fine and still laying eggs… knock on wood. EDIT: I would happily sacrifice a whole core to get an extra 500MHz or better from the remaining cores for better single-core performance.

      Am I wrong to think this is possible? Thumb me down if you do not think so, up if you do, please 🙂

    • chuckula
    • 3 years ago

    Not directly on-topic but an interesting thing to consider: Optimizing software is important.

    This article from Phoronix shows one year’s worth of changes in Intel’s Clear Linux* in performance for various benchmarks on a single system: [url]http://www.phoronix.com/scan.php?page=article&item=clear-linux-2016&num=1[/url]

    The interesting thing is how much variation in performance you can have for the same basic software benchmarks running on the exact same piece of hardware. Obviously for the graphics benchmarks the drivers and Mesa make a big difference, but even many of the CPU benchmarks can show big changes.

    * Despite being from Intel, it uses the open-source GCC compiler to compile all of its packages instead of ICC.

      • AnotherReader
      • 3 years ago

      That is a very important point that you make. I wish I had more than one upvote to give you.

    • flip-mode
    • 3 years ago

    Wow. I really, really don’t want to go through another round of exaggerated claims and promises from AMD. So far, I’ve remained steadfastly pessimistic. But I’m starting to waver. It can’t be, though. There’s got to be a catch. I just don’t see how AMD could come from so far behind to pull up essentially even.

    As for cost concerns, I really don’t care. I’m not buying a CPU any time in the next few years. AMD needs to make some money on this chip, which means pricing it as high as they can without being too high. If it performs like an Intel chip then it seems completely reasonable to price it the same.

      • bhtooefr
      • 3 years ago

      I could see it, to be honest – Intel’s been stagnating seriously hard, and there have even been indications here and there that they’re struggling on [i]process[/i] technology, I’d argue. Apple’s got a good match for a Core M… with a [i]smartphone[/i] SoC, fabbed on what’s allegedly a worse process.

      But then, I have to remember that this is AMD benchmarketing…

        • mcarson09
        • 3 years ago

        Are you sure they are stagnating? I see them milking the market while AMD flaps its arms trying to stay afloat in the CPU market. Look at how they don’t even bother to increase CPU clock rates.

        Now, if ARM could make x86 and x86-64 CPUs (Intel wouldn’t give them a license), they would give Intel a run for their money. I think AMD is trying to get the most out of not paying their CPU engineers any real money.

          • bhtooefr
          • 3 years ago

          Willful stagnation (“milking the market”) is still stagnation, and it’s still given an opening for AMD to catch up.

          If it’s willful, Intel will be able to crush AMD in about 1-3 years time, as they move back into a competition mode, as they did with Conroe.

          However, I’m not actually sure that it is willful – Intel is getting hammered from all sides. In the server market, POWER8 was competitive with Haswell-E, and it looks like POWER9 will be competitive with Broadwell-E. Zen may be bringing the fight to Broadwell-E in HEDT, if AMD’s benchmarketing is to be believed (and if it’s bringing the fight to Intel HEDT while fitting into mainstream TDPs… what does that say for mainstream parts?). Reports are that Kaby Lake runs hotter than Skylake [i]overclocked to the same clocks[/i], for mainstream enthusiast desktops. Apple’s got an ARM part that, in [i]phone[/i] TDPs (so probably 2 watts or so), is fighting 4.5W-TDP Skylake parts on performance, and I’m going to guess that the tablet version (probably 4-ish-watt TDP?) will be fighting 15W-TDP Intel parts on performance. Cannonlake TDPs on the low-TDP parts are actually climbing in the roadmaps. And Intel had a very, very public failure to scale down to mobile; the Apollo Lake parts resulting from their last attempt at it are at 6-10W TDP (and Cherry Trail, while claiming a 2W SDP, was probably running near the 6 watts that Braswell was running anyway). Oh, and where’s Quark nowadays?

          Oh, and every one of those parts is fabbed on a process that’s allegedly worse than Intel’s. POWER9 is on GloFo 14HP, Zen is on GloFo 14LPP, and the A10 Fusion is on TSMC 16FF+. And, amusingly, Kaby Lake is on a process that’s supposed to work better at higher clocks than Skylake’s process…

          So, I’m not so sure the willful stagnation narrative is actually correct. Intel’s actually got competition now, and has had failures that make NetBurst look good for its time.

            • RAGEPRO
            • 3 years ago

            I’ve been trying to say this for a while, I just couldn’t put it into words. Thanks for that.

            Although I will say that Kaby Lake’s heat issues are mostly just due to regular old airgap-lottery on Intel chips since Ivy Bridge. You pop the top and it hits 5 GHz.

            • bhtooefr
            • 3 years ago

            Ah, fair enough, wasn’t aware it was just airgap lottery striking again. (I don’t overclock, so it doesn’t affect me much, and my i5-6600K at stock clocks has stayed pretty cool, at least…)

            Although, if they had good competition, would they be more likely to use soldered heatspreaders or a better thermal interface material, at least on the enthusiast parts ala Devil’s Canyon? I’d suspect yes…

      • vargis14
      • 3 years ago

      In my opinion, AMD should have just shrunk the 6-core Thuban and made 6-core and 8-core Thubans with a third memory controller and some minor improvements, instead of wasting so much time and R&D on the crap Bulldozer CPU. They would probably have used less silicon per chip, adding to profits, instead of the huge Bulldozer chips. As for the APUs, they really dropped the ball on those as well; they should have just put a 1GB 6670, or better yet a 7750 or something like it, on their motherboards instead of integrating it into the CPU itself… laptops, sure, but on desktops it was a huge mistake, IMHO.
      They could then still have been competitive on price and performance, as well as selling tons of CPUs to properly fund their CPU R&D… in my opinion… also, they should have copied Intel’s direct connection from the CPU to the PCIe slots in some way that did not infringe on patents. They did so much wrong… man, I wish I was their CEO. I might be talking out of my azz, but I think I could have done a much better job, or at least lowered their losses year after year.

        • flip-mode
        • 3 years ago

        Well, I agree and I think a lot of people agree with that. Probably everyone involved with Bulldozer agrees with that at this point. Hindsight and such.

        • Anonymous Coward
        • 3 years ago

        We can be pretty sure that AMD made Bulldozer because they hoped to do better than just copying Intel.

          • BaronMatrix
          • 3 years ago

          The point of Bulldozer was a full FMAC for the CPU… Intel realized they couldn’t beat them and changed the whole direction to go brute force…

            • chuckula
            • 3 years ago

            Yes, Baron, to this day Intel has never really been able to keep up with Bulldozer.

        • ronch
        • 3 years ago

        Back when the BD project was started, multi-core computing was thought to be the next big thing. Also, given how AMD trailed Intel’s process tech at that time by about 18 months, making the die as compact as possible seemed like a great idea to help with chips per wafer. And so the concept for BD was agreed upon, and perhaps they were hoping to get the clock speeds they needed to pull alongside Intel despite the modest IPC target. Was it risky? Sure. But they had to take some big risks, especially in light of K8 being merely tweaked to take the role of their next-generation architecture. They knew they HAD to come up with something new. They knew K10 or K8L wasn’t gonna cut it forever.

        Take note that at that time, Asset Light was probably not on their minds yet, or was just an idea that wasn’t taken seriously. So when the fabs were sold, they no longer had full control of the nodes, and the Arabs knew nothing but sucking oil. You know the rest.

      • Anonymous Coward
      • 3 years ago

      Well I’ve mentioned it before… but IBM’s success with Power8 suggests that Intel is not invincible any longer. It seems like the age of “one core to rule them all” is coming to an end.

        • ronch
        • 3 years ago

        Microprocessor design is much like washing machines and TVs. Sooner or later everyone catches up as more and more tricks in the book are applied. And when everyone has done everything there is to do, we don’t see much difference between them unless the implementation was poor.

          • Anonymous Coward
          • 3 years ago

          With an increasing part of the purchasing controlled by huge cloud players, who have the resources to devote to achieving optimal performance for their budget, I’m thinking that Intel will struggle to maintain large margins. Even a very small advantage can be enough for a big customer to switch processors with each new generation of hardware they purchase.

          • Mr Bill
          • 3 years ago

          [quote<]Sooner or later everyone caches up[/quote<] fixed that for you 😉

      • alphadogg
      • 3 years ago

      “I just don’t see how AMD could come from so far behind to pull up essentially even.”

      They did it before, with the Athlon. Still, it pays to be a skeptic until it’s independently proven.

    • AnotherReader
    • 3 years ago

    Ars Technica has [url=http://arstechnica.com/gadgets/2016/12/amd-zen-performance-details-release-date/<]slides with some microarchitectural goodies[/url<]. These seem to be a rehash of the [url=http://hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.23-Tuesday-Epub/HC28.23.90-High-Perform-Epub/HC28.23.930-X86-core-MikeClark-AMD-final_v2-28.pdf<]Hot Chips slides from August[/url<]. A few tidbits of interest from the slides:

    1. An 8-entry L0 ITLB; another slide indicates that the L0 and L1 TLB are looked up in parallel, but I could be wrong. The small TLB is probably dedicated to the micro-op cache.
    2. A 512-entry L2 ITLB and a 1536-entry L2 DTLB; this is for server workloads.
    3. A distributed scheduler, which is the norm for AMD and was only changed in Bulldozer; Intel, on the other hand, has a unified scheduler for all architectures except the various Atoms. This looks like a performance/watt tradeoff: unified schedulers are better for instruction throughput but consume more power than split schedulers of a comparable size.
    4. Differential checkpoints: these help reduce the size of the state that needs to be saved from one branch to the next. That will, in turn, reduce power, and may reduce the mispredict penalty.
    5. The L3 is mostly exclusive of the L2; this sounds like Barcelona and Bulldozer, which were exclusive except for the storage of lines shared by different cores/modules.
    6. Each core can access all of the L3 slices with the same average latency; this indicates that the L3 interconnect isn’t a ring.

    [Edit]: added more white space.

      • chuckula
      • 3 years ago

      Nothing earth-shatteringly new, but it is useful information.

      Edit: Although Ars is, once again, showing why it is not the tech leader it used to be.

      1. The article claims that the original Athlon (from 1999/2000) was the last time AMD challenged Intel for the performance crown… not really.

      2. The article gleefully claims that Intel “gouges” by charging about $1,000 for its 8-core chips, conveniently forgetting that the last time AMD *actually* had the performance crown, in 2005 to early 2006, it had no problem charging the same amount of money (or more, accounting for inflation) for its high-end 2-core chips.

        • AnotherReader
        • 3 years ago

        Ars hasn’t been the same since Jon “Hannibal” Stokes left. Scott also worked there for a while. However, their summaries of scientific papers of interest to the educated layman are excellent and their video game reviews and technology policy editorials are above par too.

        Gouging is the right term for what Intel is doing now and what AMD was doing then.

          • blastdoor
          • 3 years ago

          Agreed — gouging absolutely is the right term for both points in time.

          • Ninjitsu
          • 3 years ago

          Their video game reviews were “meh” last I checked.

            • AnotherReader
            • 3 years ago

            Which site would you recommend?

            • Ninjitsu
            • 3 years ago

            Well depends on what games you’re looking for, but I generally read Rock, Paper, Shotgun. Sometimes PC Gamer and Eurogamer, and The Scientific Gamer.

            Games are subjective so it depends on who your tastes align with, but with Ars my feeling was that they were out of their depth.

          • derFunkenstein
          • 3 years ago

          Unless you want to scream at people in the comments of vaguely tech/science-related political posts, there’s really not much to do or read on Ars these days.

            • AnotherReader
            • 3 years ago

            The science articles are, for the most part, pretty good, and the community has some very informed posters. However, one should stay away from the comments thread of any post on global warming or evolution.

            • derFunkenstein
            • 3 years ago

            But none of that is why I ever found or went to Ars Technica in the first place.

            • AnotherReader
            • 3 years ago

            That isn’t why I first started going to Ars Technica either, but sites, like people, can change and grow with time. I wish they had made more of an effort to stay relevant in their original forte.

            • jihadjoe
            • 3 years ago

            Game.ars was totally my thing back in the day.

      • Chrispy_
      • 3 years ago

      I am not well-versed on CPU code execution, but please correct my TL;DR since some of it is guesswork.

      [list=1<][*<]Potentially good for Zen's performance.[/*<][*<]Not really relevant for desktop/workstation IPC[/*<][*<]AMD chose efficiency over performance here[/*<][*<]Good for efficiency[/*<][*<]This is bad, right? Or cheaper, or not as good as inclusive at any rate...[/*<][*<]The ring was a solution to improve parallelism efficiency dropoff in large (10+ core) Intel chips right? Since Zen's cores are paired, the 8C parts neither need nor should suffer from lack of a ring interconnect and this has minimal influence on single threaded IPC anyway.[/*<][/list<] And yes, I'm still calling it Zen, just like I still call everything else by the far more sensible codename; Pascal, Polaris, Skylake, Faildozer, Zen.

        • AnotherReader
        • 3 years ago

        You are correct for the most part. Point 1 could have power benefits if an L0 hit leads to clock gating the L1. Point 4 has some performance benefits too. Point 5 makes sense for single-socket workstations, but could hurt for multi-socket servers; again, there are ways to mitigate that. Keeping shared lines in the L3 is one way to reduce the impact of exclusive caches on server workloads. Then there is the snoop filter, which AMD implemented in Magny-Cours by using 2MB of the 12MB L3 cache.
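        For anyone who wants to picture what “mostly exclusive” means in practice, here is a toy Python model of an L3 acting as a victim cache for the L2. It is purely illustrative of the policy described above, not AMD’s actual implementation; the `shared` set is a stand-in for whatever coherence state the real hardware uses to track lines touched by multiple cores.

        [code<]
        # Toy model of a mostly-exclusive L3 acting as a victim cache for the L2.
        # Illustrative only: real hardware tracks sharers via coherence state.
        class MostlyExclusiveL3:
            def __init__(self):
                self.l2 = set()       # lines currently in (some core's) L2
                self.l3 = set()       # victim lines spilled out of L2
                self.shared = set()   # lines known to be used by multiple cores

            def access(self, line):
                if line in self.l2:
                    return "L2 hit"
                if line in self.l3:
                    # Promote to L2. A shared line also stays in L3 so other
                    # cores can hit it there without a cross-core probe.
                    self.l2.add(line)
                    if line not in self.shared:
                        self.l3.discard(line)
                    return "L3 hit"
                self.l2.add(line)     # fill from memory straight into L2
                return "miss"

            def evict_from_l2(self, line):
                # Exclusive-style handoff: the L2 victim is installed in the
                # L3 instead of having lived there all along (inclusive).
                self.l2.discard(line)
                self.l3.add(line)

        cache = MostlyExclusiveL3()
        cache.access("A")             # miss: fills L2 only
        cache.evict_from_l2("A")      # victim drops into L3
        print(cache.access("A"))      # "L3 hit": promoted back, removed from L3
        [/code<]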

        • Magic Hate Ball
        • 3 years ago

        Higher thermal efficiency via the split scheduler: could that help keep the frequency up, since the chip won’t be cooking itself as much?

          • AnotherReader
          • 3 years ago

          It won’t help with other hot spots so I think it would primarily help with keeping power down for laptops and servers. For raising frequency, you need at least a core-wide scheme like [url=http://www.realworldtech.com/steamroller-clocking/<]adaptive clocking in Steamroller[/url<].

      • tipoo
      • 3 years ago

      To point one, you think the uop cache is virtually tagged? Seems unlikely imo. It’s more likely that the L0 iTLB is a general-use iTLB with full associativity (8-entry, fully associative == very fast, very hit-efficient, very tiny cache). Such a cache would improve the latency of all other physically-tagged lookups.

        • AnotherReader
        • 3 years ago

        It would be very unlikely for the micro-op cache to be virtually indexed. The size of the L0 TLB is smaller than would be required to cover the L1 I-cache; this is why I speculated that it might be dedicated to the micro-op cache. However, looking up the L0 and L1 ITLBs in parallel and clock gating the L1 in the case of an L0 hit would be effective too.
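        To make that speculation concrete, here is a minimal sketch of the two-level lookup. Only the 8-entry L0 size comes from the slides; the L1 size, the replacement policy, and the fake translation are invented placeholders. In hardware both arrays would be probed in the same cycle, with the L1’s result discarded (and the array clock-gated) on an L0 hit; checking the L0 first in software has the same visible behavior.

        [code<]
        # Sketch of a parallel L0/L1 ITLB lookup with clock gating on an L0 hit.
        # Only the 8-entry L0 size comes from the slides; the rest is made up.
        L0_ENTRIES = 8        # fully associative, per the slides
        L1_ENTRIES = 64       # placeholder size

        class TwoLevelITLB:
            def __init__(self):
                self.l0 = {}  # virtual page -> physical page
                self.l1 = {}

            def translate(self, vpage):
                if vpage in self.l0:
                    # Hardware would discard the L1's parallel result here and
                    # could clock-gate the L1 array to save power.
                    return self.l0[vpage], "L0 hit (L1 result gated off)"
                if vpage in self.l1:
                    # Refill the tiny L0, evicting arbitrarily when full
                    # (real hardware would use LRU or similar).
                    if len(self.l0) >= L0_ENTRIES:
                        self.l0.pop(next(iter(self.l0)))
                    self.l0[vpage] = self.l1[vpage]
                    return self.l1[vpage], "L1 hit"
                ppage = vpage ^ 0xFFFF   # stand-in for a page-table walk
                self.l1[vpage] = ppage
                return ppage, "miss -> page walk"

        tlb = TwoLevelITLB()
        print(tlb.translate(0x1234))  # miss -> page walk, fills L1
        print(tlb.translate(0x1234))  # L1 hit, refills the L0
        print(tlb.translate(0x1234))  # L0 hit: L1 lookup gated
        [/code<]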

      • jts888
      • 3 years ago

      Isn’t flat L3 latency only guaranteed within a 4-core cluster? (point 6)
      That’s pretty much near the point where you’d just want to use a crossbar switch anyway.

      I’m still waiting on a more detailed presentation about the multi-cluster and MCM interconnect details (hopefully at CES?), since it seems like tons of smallish NUMA islands was one of the fundamental design points for Zen from day one.

        • AnotherReader
        • 3 years ago

        I agree with you. I am also waiting for details about the MCM interconnect; let’s hope they fixed their coherency protocol issues too.

      • Zizy
      • 3 years ago

      Ars updated the old post with new info on top. I missed “This is our original article from August” at first too 😀

    • Krogoth
    • 3 years ago

    Methinks this will end up being a repeat of Phenom II: a capable chip at its price points for the mainstream, but Intel dominating the HEDT segment like its Bloomfield chips did back in their heyday.

      • chuckula
      • 3 years ago

      THE KROGOTH HAS SPOKEN!

        • ronch
        • 3 years ago

        Well, he’s not Krogoth for no reason.

    • ronch
    • 3 years ago

    Amidst all the fanfare surrounding ‘Zen these days I just wonder how the folks who worked on Bulldozer must feel. Personally I think they deserve more recognition even if BD almost finished AMD.

      • ptsant
      • 3 years ago

      I have been saying this for quite a long time but even if the 8150 Bulldozer was a complete failure, the Kaveri and Godavari variants are quite good within their given process limits. If it made financial sense, an FX with 8 Kaveri cores would probably be a decent chip, but AMD rightly chose to invest in Zen development and not waste resources.

      I really was impressed with my 860K Athlon, for example. The progress from the 8150 to the 860K was probably much more spectacular than the progress from the 2600K to the 6700K. Again, within the limits of the process (the 8150 was 32nm, the 860K 28nm, so very little to be gained on that front).

        • ermo
        • 3 years ago

        What exactly impressed you about the 860K Athlon?

        As an owner of a FX-8350 I’m genuinely curious — and I agree that it’d have been cool to see one final hurrah in the guise of an FX-9xxx using 4 Excavator CMT modules, but with double the L2 cache in place of the GPU block and a nice chunk of L3 cache.

          • ptsant
          • 3 years ago

          I also own an FX-8350. The 860K is dirt cheap, has practically equivalent single-threaded performance without need for a huge cache and much, much lower power consumption.

          Don’t get me wrong, the FX-8350 is faster overall but the 860K is such a good chip for the price that I felt pleasantly surprised. As you say, an FX with 4 Excavator modules would be quite decent but the die area would probably make it expensive and the development time was certainly better spent on Zen.

      • mcarson09
      • 3 years ago

      AMD should pay their CPU engineers some money and give them a bigger R&D budget. I bet the GPU side is not as cash strapped.

    • ptsant
    • 3 years ago

    So, the IPC is clearly stellar, because the AMD chip is running at a lower frequency than the Intel one and doing at least as much work per second. However, the low clock will need to be improved in order to be competitive with the 4-core Intel chips. Kaby 7700K will clock at almost 5GHz and Zen will need to go much higher than 3.4GHz to be competitive in single-threaded tasks.

    Anyway, Zen v2 is where stuff gets interesting, just like the 8350 Vishera was a much better chip than the 8150 Bulldozer.

    • Rza79
    • 3 years ago

    AMD’s stock price is atm the highest in almost a decade.

      • ronch
      • 3 years ago

      Well, Core 2 hit AMD smack in the face in 2006 and we all know how much of a beating AMD has taken since then. And in 2011 AMD thought they had a killer, but it blew up in their face.

      So yeah.

        • derFunkenstein
        • 3 years ago

        Phenom II [url=https://techreport.com/review/16147/amd-phenom-ii-processors<]reviewed pretty well[/url<], at least. If Bulldozer had been even half the improvement over Phenom that Sandy was over Nehalem, they'd be in a much better position right now.

          • ronch
          • 3 years ago

          Phenom came out limping, and while Phenom II gave Intel some good competition it was only up against the Core 2 Quad. Against Lynnfield and later on, Sandy, AMD had to resort to the ‘more cores’ strategy but we knew they couldn’t keep that up. That was why so much depended on Bulldozer.

      • Ninjitsu
      • 3 years ago

      Currently even Intel’s is… Nvidia’s is on another level entirely, though XD

      Quite interesting how they’ve all shot up after so many years of gloom and doom.

    • ronch
    • 3 years ago

    Too many posts here are parodies of songs.

    Just goes to show how silly ‘RyZen’ sounds.

      • Shobai
      • 3 years ago

      “My honey, my baby”, said little ronch to hi’self,
      “I shouldn’t have started these lines, and kept my rhymes to myself”.

      • drfish
      • 3 years ago

      There’s a lot of low hanging fruit people haven’t used yet though. Just saying…

        • Shobai
        • 3 years ago

        Ooh, has anyone tried:

        There is a CPU in AMD’s livestream,
        You know? That ‘Ryzen’ one?
        It’s been the butt of many a poor joke,
        and chuckula’s strawmen

        =P

          • drfish
          • 3 years ago

          How about:

          All my friends know the new Ryzen
          The new Ryzen is a little higher

          Low Ryzen costs a little lower
          Low Ryzen is a real goer

          Low Ryzen pwns every feat, yeah
          Low Ryzen is the one to beat, yeah

          Or something…

            • Shobai
            • 3 years ago

            Spectacular! Nice work

    • NTMBK
    • 3 years ago

    AMD Raisin.

      • douglar
      • 3 years ago

      If you had a LAN party at your server farm, would it be a “Barn RyZen”?

      • chuckula
      • 3 years ago

      THE ROOF!
      (or maybe prices)

    • bfar
    • 3 years ago

    This looks very promising. Intel has set prices for years, so I imagine these will come in at or slightly below Intel’s current pricing. A couple of years of tit for tat and we could be back in a great place.

    • Klimax
    • 3 years ago

    So how many twists, strange settings, and surprises did their benchmarks need to get their “victory”?

    We are back at Bulldozer-scale hype. That will end well…

    ETA: Looks like people hate being grounded back in reality. -18 and counting; so much success! Looking forward to independent tests of this CPU. That will be so much fun…

      • AnotherReader
      • 3 years ago

      I am curious: were you this skeptical at the time of the Original Athlon or the Opteron or Core 2 Duo (Merom)?

        • Pwnstar
        • 3 years ago

        No. He’s gotten bitter.

          • Shobai
          • 3 years ago

          Did a Kiwi witch turn him into a newt?

          • Klimax
          • 3 years ago

          That would be Bulldozer and Fiji PR. (Plus AMD fanboys)

          ETA: And I forgot the whole fun with Mantle and AMD’s lies about DirectX 12.

        • Klimax
        • 3 years ago

        More or less before my time on the net (only dial-up). I might have been reading, and I might have had my own thoughts, but back then I was closer to ignorant than knowledgeable.

        And back then I might have been sort of a fan of AMD…

        ETA: And frankly, not really a relevant question.

          • AnotherReader
          • 3 years ago

          Urging the gerbils to be cautious and skeptical is the correct thing to do, but I called you out because you only do it with one company when we know that other companies engage in similar behaviour. Moreover, this time, AMD has provided the [url=http://download.amd.com/demo/RyzenGraphic_27.blend<]render scene[/url<] and the settings they used.

            • Klimax
            • 3 years ago

            Similar? That’s a vague, unspecific statement. So, any recent Intel lies? Unusual settings used by Nvidia? Anything of similar badness to the Bulldozer PR, the lies about DX12, or the Fiji PR benchmarks?

            And if you noticed, I didn’t attack them for not providing settings. I stated that those settings might be a special best case. So kindly do not invent things I didn’t say.

      • ronch
      • 3 years ago

      Killjoy.

        • Klimax
        • 3 years ago

        My second name. 😀

    • ronch
    • 3 years ago

    RyZEN just sounds lame to me. I suggest we still call it Zen.

    Or ‘Zen.

    • ronch
    • 3 years ago

    The more I think about the latest bits of info on Zen, the more I realize just how advanced it is. I remember when Bulldozer came out and someone at Intel said they just didn’t see AMD as a competitor anymore. And then there was the eerie silence that followed Piledriver, when everyone wondered just how AMD planned to stay in the x86 business if they merely put out ‘good enough’ CPUs at $80 a pop. I said it before and I’m saying it again: AMD NEEDS to be very competitive, not merely ‘OK’, if they want to keep making x86 CPUs and make money in the process, and it seems they knew this all along. I’m sure many folks are satisfied with what AMD has been selling these past several years (just look at user reviews of their chips over at Newegg), but we all know that’s partly because they work OK and more so because they’re cheap. People love to think they’re getting more for their money by buying a $160 FX that they think is on the same level as a $340 Intel. In a sense they may be right, but if it were absolutely true, do you think AMD would sell them for less than half the price? Business is business.

    AMD just can’t keep themselves alive selling chips for $70, with the huge-die FX-8350 going for about $160. They need an Athlon 64, not another Bulldozer. They need to sell them at higher prices and they cannot do that if they bench significantly behind Intel.

    As it is, I am still cautiously optimistic, but I’ll be watching how this story goes for about a year or so, because I still don’t feel the need to upgrade. But when I do, I’d very much rather go with AMD again.

    Kudos to the Zen team for working hard to produce a chip as sophisticated as Zen. Heck, it’s probably even more sophisticated than Skylake. And when you consider AMD is a much smaller company with far smaller pockets than Intel, well, that’s just about as incredible as it gets.

      • Mr Bill
      • 3 years ago

      The wags back in the mid-’90s used to say that despite selling every CPU at a loss, they would make it up on volume.
      [quote<]AMD just can't keep themselves alive selling chips for $70, with the huge-die FX-8350 going for about $160. They need an Athlon 64, not another Bulldozer. They need to sell them at higher prices and they cannot do that if they bench significantly behind Intel.[/quote<]

    • Delta9
    • 3 years ago

    Mr. Mojo Ryzen, Mr. Mojo Ryzen
    Mr. Mojo Ryzen, Mr. Mojo Ryzen
    Got to keep on Ryzen
    Mr. Mojo Ryzen, Mr. Mojo Ryzen
    Mojo Ryzen, gotta Mojo Ryzen
    Mr. Mojo Ryzen, gotta keep on Ryzen
    Ryzen, Ryzen
    Gone Ryzen, Ryzen
    I’m gone Ryzen, Ryzen
    I gotta Ryzen, Ryzen
    Well, Ryzen, Ryzen
    I gotta, wooo, yeah, Ryzen
    Whoa, oh yeah

    They should have left the name at Zen. And the logo looks like a bloody sphincter. AMD has finally tricked me into believing Summit Ridge is going to be a beast. The disappointment could be one too many.

      • Captain Ned
      • 3 years ago

      Didn’t I see you pumping gas in Paris recently?

    • ultima_trev
    • 3 years ago

    I’m not surprised about it being better in multi-threaded encoding apps.

    But I need to see some game tests at low detail, with a resolution no higher than 640×480, pairing the CPUs with a couple of high-end GPUs in SLI/CFX, to know true single-threaded performance. Given that I didn’t see any mention of that, I’m expecting Zen to have at best 75% of the performance of Broadwell in single-threaded workloads.

    Even if they price it at the rumored $500 mark, most high end gamers won’t mind paying double for the higher IPC of the i7 6900K since that’s what matters for games.

    The lower TDP should equate to good efficiency gains in server workloads, however. But the lack of single-threaded tests in their demos seems to imply they’ve conceded the PC gaming market to Intel/Nvidia.

      • muxr
      • 3 years ago

      Well in the demos they had boost disabled. It sounds like they are still finalizing the Turbo, and it would be kind of unfair to show it until it’s completed, considering single thread turbo is the most aggressive kind of turbo core.

    • KaosMike
    • 3 years ago

    I just did my own render test with Blender and their provided render scene on my ancient Intel i7-960, clocked around 3.3GHz.

    Total render time was 2 min 15 sec, or 135 sec, with a bunch of stuff running in the background.

    The Zen CPU was able to crunch this render in roughly 37 seconds, just going by the video.

    This is actually a very good test. I happen to be a 3D artist, and render tests of this kind do use full power and all cores/threads.

    So roughly speaking, this Zen CPU is about 3.65x faster than my ~7-year-old Intel CPU.

    I think it’s this one: [url<]http://cpuboss.com/cpu/Intel-Core-i7-960[/url<]

    Given the above, I’m actually a bit underwhelmed: with all the fancy new tech, and 8 threads vs. 16 in Zen, only a 3.6x perf gain? It’s still great, but somehow I expected more. Wish I could post screenshots here :\

    edit: I suppose with boost plus a bit of OC, it should easily go over 4x. Still totally getting one if it’s below $600.
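    For what it’s worth, the back-of-the-envelope arithmetic above checks out (a trivial sketch using the timings reported in this post):

    [code<]
    # Rough speedup math from the render times reported above.
    i7_960_seconds = 135   # i7-960 @ ~3.3GHz, stock scene, per this post
    ryzen_seconds = 37     # Ryzen ES time eyeballed from AMD's video

    speedup = i7_960_seconds / ryzen_seconds
    print(f"overall speedup: {speedup:.2f}x")   # ~3.65x
    [/code<]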

      • xeridea
      • 3 years ago

      Well, in the tests it matches/beats the $1,100 Intel CPU, so by the same logic the 6900K is underwhelming. I should do a comparison with my 6350 to see how enormous of an upgrade it would be.

      Edit: my FX-6350 took 3:34, or 214 seconds, so I estimate an FX-8370 would take ~150 seconds. With these numbers, it looks like a massive boost. I wonder if some newer instructions are being used, or if perhaps a lot of it is due to the BD line being weak at FP work. If we pretend 16T is ~10 effective cores, and the 8370 has 4 FPUs, then 150 * 4/10 = 60, which would work out to ~62% more work per core, even at a lower clock speed. A pretty big boost, especially for tasks heavily dependent on FP math.

      Maybe I should also do the Handbrake test. Hmmm, is that not available?

      Edit 2: My comparison is vs. a Piledriver chip, and the 40% IPC boost is vs. Excavator, so a 62% boost in an FP-heavy task did seem a bit high. It seems they are still exceeding the 40% goal, though, since it is running at a lower clock speed.

        • KaosMike
        • 3 years ago

        Yeah, good point there, xeridea. Man, I had no idea that chip was such weaksauce, and they’re from 2013 too. This chip with a 1080 card, or whatever the equivalent from AMD is next year, is gonna be huge for content creators and gamers too!

        • Klimax
        • 3 years ago

        I’d bet that they spent a lot of time looking for a particular set of settings and workloads to get that. The probability of this being seen anywhere outside of AMD is close to zero.

      • Meadows
      • 3 years ago

      Expected more?

      It has a similar frequency and 2x the threads. Assuming perfect scaling, the fact that Ryzen is 3.6x faster means its per-thread performance is still 80% faster than that of your old CPU.

      For CPUs, even on the scale of 7 years, an 80% improvement is not bad at all. It’s not far from what Intel themselves could achieve.
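      For the record, the normalization works out like this (a sketch using the numbers from this thread; the “perfect scaling” assumption is doing a lot of work):

      [code<]
      # Normalize the overall speedup by the thread-count difference,
      # assuming (optimistically) perfect scaling across threads.
      overall = 135 / 37       # i7-960 time / Ryzen time, from the posts above
      thread_ratio = 16 / 8    # Ryzen 8C/16T vs. i7-960 4C/8T

      per_thread = overall / thread_ratio
      print(f"per-thread speedup: {per_thread:.2f}x")  # ~1.82x, i.e. ~80% faster
      [/code<]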

        • Shobai
        • 3 years ago

        Well, I guess if you can accept AMD’s findings, it /is/ what Intel achieved [what with the 6900k putting up a similar number]

        • KaosMike
        • 3 years ago

        I suppose it’s human nature to always want more 😉 Even if it’s 10% off from my estimated difference, it’s more than enough to push me over the edge to purchase one.

      • spartacus
      • 3 years ago

      Did the same test with i5 4690k. It ran for 74s on a quad-core 3.5GHz CPU with 16GB of RAM.

        • Ninjitsu
        • 3 years ago

        Interesting; one of the guys I play Arma with reported 2 min 9 sec on an “overclocked” (I don’t know by how much) 4690K. I don’t have access to my own 4690K to test with, but I’m beginning to suspect a memory bottleneck of some sort…

          • NTMBK
          • 3 years ago

          That wouldn’t make sense, though: Zen has half as many memory channels as the Broadwell chip it was up against, so roughly half the bandwidth. Unless you meant memory capacity?

            • Ninjitsu
            • 3 years ago

            I initially thought either the capacity or the speed of the memory, but I ran it myself and it’s only a 260MB render, so capacity wouldn’t make a difference.

            Our results so far:

            i5 2400U – 5m 18s
            i5 4690K – 2m 09s
            i7 3770K – 1m 51s

            All seem in line with each other. Can’t explain the 74s.

            p.s. i5 2400U + 8GB DDR3-1600

            EDIT: Number of samples appears to make a difference. With 100 samples the i5 2400U finishes in 2m 38s i.e. 158s.

            So the other two in my list above are probably going to be ~65s (confirmed) and ~56s. Zen with 100 samples apparently comes out to 25s.
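            Since the sample count appears to scale render time roughly linearly, a trivial estimator matches these numbers well (assuming time is proportional to samples, ignoring any fixed overhead):

            [code<]
            # Estimate render time at a different Cycles sample count,
            # assuming time scales linearly with samples (no fixed overhead).
            def estimate_time(measured_s, measured_samples, target_samples):
                return measured_s * target_samples / measured_samples

            # 5m18s = 318 s at 200 samples, from the post above:
            print(estimate_time(318, 200, 100))   # ~159 s; 158 s was measured
            [/code<]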

        • chuckula
        • 3 years ago

        Did you reduce the sample size to 100?

        • derFunkenstein
        • 3 years ago

        Wow. My Skylake i5-6600K system doesn’t go that fast. 1:45.06 for me.

        edit: after seeing chuckula’s comment about changing the number of samples, I ended up with just under 53 seconds for the same system. Guess that makes more sense.

          • spartacus
          • 3 years ago

          I was using the default settings (never had Blender installed before) and the provided sample file. After your comment, I checked the sample count; it was 200.

          But here’s the main difference: these results are from a Mac (Hackintosh 10.11.6). The same machine on Win 8.1 needs 2m08s to render the sample completely.

          Why there is such a massive difference, frankly, I have no idea. In both cases I’ve been using the F12 render option (Render Image) on the latest x64 Blender.

          So my times are:
          Mac OS X default: 1m14s
          Win 8.1 default: 2m08s
          Mac OS X sample size 100: 0m37s
          This is the same machine (i5-4690K stock, dual boot), using the IGP only.

            • derFunkenstein
            • 3 years ago

            Very interesting.

            My Mac is the four-core Ivy Bridge-E Mac Pro (3.7GHz). A quick look at the settings shows that it’s using CPU-only rendering by default, same as Windows, but it tore through the 200-sample benchmark in 58 seconds. 100 samples in 29 seconds. So my Mac is faster than Ryzen! 😀

            • chuckula
            • 3 years ago

            While I’m sure an 8-core RyZen is going to be faster than 4-core Intel parts in a highly parallelized benchmark like Blender, it is good to remember that complex software like this can have a whole lot of variables that affect performance.

            • derFunkenstein
            • 3 years ago

            Yeah it really proves your earlier point about Clear Linux and the like

      • Kretschmer
      • 3 years ago

      Never trust an AMD benchmark.

      Wait for the TR review.

        • mesyn191
        • 3 years ago

        If they had given SPEC bench numbers I’d trust those, but yeah, it’s fair to still be skeptical here.

        Same goes for similar “benchmarketing” stuff from Intel or NV.

    • deruberhanyok
    • 3 years ago

    Hmm. There’s plenty of precedent for doubt, though with an 8C/16T part being compared to an 8C/16T part, I think there may be significantly more to the performance claims here than there was to some of the marketing-enhanced stuff that came along with Bulldozer.

    Optimistic here. Cautiously optimistic. The top end for most of us DIYers has been a 4C/8T Core i7 for, what, 7 years now? And Intel hasn’t done much to convince people to upgrade – how many of you were sitting on a Sandy/Ivy i5/i7 when Skylake launched, looked at the test data and said, “oh, well, I guess I can skip yet another generation”?

    I briefly considered a Broadwell-E setup when I built a gaming rig over the summer (I wanted to build something that would “last me for like five years” and I figured a higher core count would help in the future), but price/performance over an i5-6600k just wasn’t there. This launch could really change things up.

    Not so sure about the name though. I’m sure it will stick and we’ll all just forgive the use of a “y” where it doesn’t belong, and the pronunciation won’t sound so weird after a while, and if the performance is there then, eh, it’s just a name. But for a potentially game-changing, company-saving product, I feel like it falls a little short.

      • Klimax
      • 3 years ago

      Reminder: there are a number of ways of hobbling a competitor in your own tests to paint yourself in a good light.

      These include fun stuff like disabling advanced instructions, strange compilations, or very specific and unusual settings or workloads. And I don’t trust AMD one bit, so I can’t dismiss any of these things.

        • AnotherReader
        • 3 years ago

        Like the [url=http://www.anandtech.com/show/3839/intel-settles-with-the-ftc<]compiler shenanigans[/url<] from [url=https://techreport.com/news/8547/does-intel-compiler-cripple-amd-performance<]Intel[/url<]; oh sorry, that doesn't mesh with your baseless accusations. Agner Fog [url=http://www.agner.org/optimize/blog/read.php?i=49#49<]weighed in on it back in the day[/url<].

          • chuckula
          • 3 years ago

          Intel has as much of a responsibility to write compilers that optimize for AMD hardware as AMD does to write graphics drivers that optimize for Intel IGPs.

          Hell, Intel compilers produce code that will run correctly on AMD hardware.
          That’s a far cry from AMD drivers at least functioning correctly on Intel graphics.

            • AnotherReader
            • 3 years ago

            You are right when you say that Intel has no responsibility to optimize for AMD; however, using [url=http://www.anandtech.com/show/3839/intel-settles-with-the-ftc<]benchmarks compiled with ICC[/url<] to compare AMD and Intel is disingenuous. That is why the best benchmarks use open-source code and are compiled with vendor-neutral compilers. I am partial to the GCC subtest of SPECint for general-purpose workloads.

            The second analogy is false. AMD's and Intel's CPUs have the same ISA; Intel's IGP and AMD's GPUs are very different.

            On a tangent, Intel is far more open about the internals of its IGPs than Nvidia and AMD are about their GPUs.

            • Klimax
            • 3 years ago

            Note: AMD still benefited from ICC in at least one set of tests. In fact, at times it got a bigger boost than Intel’s chips did…

          • Klimax
          • 3 years ago

          There is a reason why I was not specific: I included ALL the fun there. But ICC is quite an old case, and things have changed. (The more recent ICC/GCC/VC comparisons are a much more interesting thing.)

          Anyway, how about the Bulldozer PR tests? Or the fun stuff with the PR benches for Fiji?

            • AnotherReader
            • 3 years ago

            AMD is no paragon of virtue, but I don’t see you making such insinuations when other companies known for similar shenanigans are involved.

            • Klimax
            • 3 years ago

            You already tried that once. Once again you are vague and unspecific. “Similar…” So, anything concrete? Or will it remain in the realm of vague assertions of zero worth?

            Reminder: AMD BSed about Bulldozer, lied about DirectX 12, and massively played with unusual settings in its benchmarks for Fiji.

            • AnotherReader
            • 3 years ago

            I gave you the example of Intel’s compiler shenanigans. Another instance is when [url=http://arstechnica.com/gaming/2010/07/did-nvidia-cripple-its-cpu-gaming-physics-library-to-spite-intel/<]Nvidia lied about the performance improvement of PhysX on the GPU[/url<] by [url=http://www.realworldtech.com/physx87/5/<]sandbagging the x87 portion of it[/url<].

      • cldmstrsn
      • 3 years ago

      Hit the nail on the head for me. I have been rocking my 3770k since it launched and see no reason whatsoever to upgrade yet as it still is an amazing chip. I would love for my next CPU to be an AMD.

      • TheMonkeyKing
      • 3 years ago

      I WANNA BELIEVE!

      (That said, I’m not giving them any more money until I see actual gains in performance, heat and price.)

      • odizzido
      • 3 years ago

      I am actually still sitting on Lynnfield (2009). Pretty much everything I use my computer for runs just fine, though I am sometimes running out of cores, which kinda sucks. And RAM as well now.

      If AMD can supply 6+ core CPUs at a good price and they perform reasonably well, I might finally upgrade. Intel has nothing in that part of the market that’s even slightly interesting to me.

    • Unknown-Error
    • 3 years ago

    I watched the live-stream. What was that other demo (I think the third one) they had between the 6900K, Ryzen, and the 6700K? The 6700K was overclocked to 4.5GHz, but it was really struggling compared to the 6900K and Ryzen.

    In the Handbrake video-transcoding demo, it took the 6900K (140W, 3.2-3.7GHz) 59s and Ryzen (95W, 3.4GHz) 54s to complete. So at least in some very well-threaded applications, Zen can compete with Broadwell-E at similar clocks and core counts. This is in contrast to the Bulldozer family, which even in heavily-threaded benchmarks struggled to surpass a USD 300 mainstream Intel CPU with a much lower TDP. So I guess that is progress.

    The last demo was Ryzen + Vega.

      • ZGradt
      • 3 years ago

      I was kind of stumped by the 6700K bit. They were really light on the details. I suspect they were encoding the stream in software rather than using the encoder on the CPU or the graphics card. It would be really dumb and unfair to the 4-core CPU not to let it use Quick Sync or Shadowplay, especially since the 8-core processors don’t even have built-in video encoders like Quick Sync.

      The gaming demo seemed rigged against the 6700K; that should have been all on the graphics chip. The Handbrake result really impressed me, though. Handbrake never really did get the hang of taking advantage of GPUs, but its later builds have gotten pretty good at taking advantage of more threads.

        • muxr
        • 3 years ago

        They should have explained more, but yes they were using software encoding. They were showcasing that Ryzen (and 6900k) with 8 cores can handle high bitrate software encoding while streaming Dota 2 at the same time. Basically the benefit of having 8 cores.

        Software encoding has its advantages; you generally get better quality for a given bitrate. So if, say, your upstream bandwidth is limited, software encoding would be the way to go.

    • albundy
    • 3 years ago

    I wonder how it compares to the $260 6700K that I got for BF. When is TR getting a sample? Please don’t say late 2017, cuz I will lol.

      • Prestige Worldwide
      • 3 years ago

      I’m sure they will be able to get a sample without having to do too much ……
      ………Damage.

      (•_•)
      ( •_•)>⌐■-■
      (⌐■_■)

        • chuckula
        • 3 years ago

        YEEEEAAAAHHHHH!!!!!!

        • DrDominodog51
        • 3 years ago

        GG

    • blastdoor
    • 3 years ago

    I am going to allow myself to take a few moments to be excited and hopeful.

    Nobody else seems to be targeting my particular niche, but this sounds like it might be exactly what I need.

    Uh oh… I feel the cynicism creeping back…

    • ronch
    • 3 years ago

    I have a Rye..

    I have a Zen.

    UUHHH!!!

    Ryzen!!!

      • Waco
      • 3 years ago

      I hate you for sticking this in my head. +3

        • ronch
        • 3 years ago

        ^_^

      • drfish
      • 3 years ago

      Thanks to my 12 year old nephew, I get that reference.

      • DrDominodog51
      • 3 years ago

      [quote=”Michael Jordan”<]Stop it. Get some help.[/quote<] [url<]https://www.youtube.com/watch?v=9Deg7VrpHbM[/url<]

      • Mr Bill
      • 3 years ago

      [url=https://www.youtube.com/watch?v=zUQiUFZ5RDw<]A bad moon on the rise?[/url<]

        • Captain Ned
        • 3 years ago

        No, a bathroom on the right.

    • ronch
    • 3 years ago

    “RYZEN? What kind of stupid name is that?!?”

    -Mad Dog Tannen

      • rds
      • 3 years ago

      It’s clearly a reference to Rocky II

      Ryzen up, read the spec sheet
      Replaces bulldozer, no more chances
      Went the distance, now it’s nearing release
      8 cores with 16 threads to survive

      So many times, it benchmarked so fast
      Blender and Handbrake bring glory
      Don’t lose your grip on the cores of the past
      You must fight not to give up and cry

      It’s the new zen core, it could be alright
      Ryzen up to the challenge of its rival
      And Intel’s latest i7 was kind of alright
      And AMD fanbois are waiting for the new zen core

      • Coran Fixx
      • 3 years ago

      Runner up is “Puppy-Monkey-Baby”

    • firewired
    • 3 years ago

    Zen + Bringing on Rye = Ryzen.

    Did they consult Captain Jack Sparrow for their marketing campaign? Savvy?

    Humor aside, AMD, like everyone else, will set up the demonstration environment (hardware and software) to show its technology off with the best possible results. No harm, no foul, but it has to be stated.

    Do not misunderstand me: I am pleased with their progress. But I will withhold judgment until retail hardware reaches the public, and more importantly the supporting platforms and core-logic chipsets. Historically, AMD has trailed Intel in core-logic chipset stability and performance, regardless of CPU competitiveness.

    Time will tell, but I am pulling for them. Competition is good for all of us.

      • Mr Bill
      • 3 years ago

      No no, they went to a lot of marketing trouble to draw attention to the echo of ‘Horizon’ in RyZEN. Thus the whole ‘New Horizon’ event and the visual references to nested horizons: a Milky Way wrapped like a horizon around a star against a planet, and the final shot of a planet transiting or occulting a red sun, which slightly resembles the Japanese flag.

    • HisDivineOrder
    • 3 years ago

    I think people are going to be disappointed by the pricing. Otherwise, I think AMD would be screaming about it already.

      • morphine
      • 3 years ago

      During the live event, Lisa Su made a side remark about the competing Intel CPU they picked being a $1,100 model. Take that as you will.

        • HisDivineOrder
        • 3 years ago

        “Soooo… we’ve made a chip that performs as good for only $999! PROGRESS!”

          • rechicero
          • 3 years ago

          If they make Intel-like chips for 10% less money… that means prices will go down 10%. Yes, that’s good for us, isn’t it?

            • derFunkenstein
            • 3 years ago

            If these are performance-competitive with Intel CPUs they have to launch at far more than a 10% discount to really grab a lot of attention. Maybe high-end AM4 boards will be much cheaper than X99 to make up the difference. After all, the CPU is only a fraction of the total system cost.

            • Mr Bill
            • 3 years ago

            Has a price range for RyZEN been mentioned somewhere? The Core i7-6900K goes for $1100 IIRC. I hope the slowest RyZEN is going to be <$300.

            • Srsly_Bro
            • 3 years ago

            Rumors have stated there will be SR7 8/16 CPUs priced at $500, and $350. It’s a rumor, so who knows, but I like the price.

            • TheMonkeyKing
            • 3 years ago

            And what does that translate to once you add the cost of a new AM4 mobo to go with this new architecture? From what I read elsewhere, it should have all the newest chipsets and functions, like USB 3.1, etc.

            So are we looking at a $300-$400 board as well?

            • mesyn191
            • 3 years ago

            Why would you think the board would cost $300-400?

            All the non-server mobos will be AM4, with dual-channel memory (or less for ultra-cheapo boards) and most of the chipset integrated into the CPU itself.

            There is no reason for them to be any more expensive than a decent Z170 mobo for an enthusiast-targeted product, so think $120-180 depending on features for most of them. If anything, they should be cheaper, since AMD is doing one socket for all non-server desktop products.

            • freebird
            • 3 years ago

            Definitely cheaper, since it’s an SoC. I’m not sure, but I hope single or dual Gigabit Ethernet is included in the SoC. Several SATA, NVMe, and USB 3.1 ports come in addition to the 16 PCIe 3.0 lanes. I think I read somewhere that Naples will definitely have 10GbE controllers integrated, but I didn’t read about Summit Ridge having them.

            Anyhow, a sound chip plus a few additional USB 2.0 and maybe USB 3.1 Gen 2 ports via add-on chips means it should be a very cheap motherboard, unless the increased trace count from the CPU is a problem, which is probably the reason AM4 will only be dual-channel memory (4 memory slots).

            • derFunkenstein
            • 3 years ago

            There will eventually be APUs with four cores and eight threads that will slot into that price range, but if you think you’re going to get eight or even six cores for less than $300, then you must also think that Ryzen will be a flop performance-wise. AMD owes it to itself to actually turn a profit on this thing upon which it has staked all its hopes.

            • Ninjitsu
            • 3 years ago

            $600-800 is what they should be aiming at, imo. (for the top end)

            • blastdoor
            • 3 years ago

            Sounds about right.

            • derFunkenstein
            • 3 years ago

            For the same performance as a Core i7-6900K that’s fine. But they need to extend the range down into the same areas as the 6800K and 6850K, too.

            BTW I think this will probably be harvested dies, not full eight-core, 16-thread processors.

            • bfar
            • 3 years ago

            There is so much cynicism working against the AMD branded CPUs, I’m not sure if they’ll get away with that. They need market share.

            • blastdoor
            • 3 years ago

            One tricky thing here, though, is that if they price too low it could actually work against them — it could create the impression that they lack confidence in their product.

            I think they should definitely sell it at much higher prices than their current lineup, but also meaningfully lower prices than Intel. Since the gap between Intel’s prices and their current prices is so massive, they can definitely do this. Intel has left a huge price umbrella.

            • Ninjitsu
            • 3 years ago

            What blastdoor said. If the reviews are good and price is right, word will spread on its own. They’ll have the whole “AMD IS BACK TO SAVE US” thing going for them.

            • freebird
            • 3 years ago

            $600 tops. I always figured that if Zen had a top-line part comparable to the 6900K, AMD would probably sell it in the $500-$600 range. Pricing it any higher than that would suggest they can’t physically produce many…

            1) They want to undercut Intel by a significant margin. Anything $350+ is a big plus to AMD’s CPU revenue/bottom line, but they don’t want to price so low that they undercut the 6- and 4-core Zens (with cores disabled/non-functional). If they price too close to Intel, then sales won’t IGNITE like they are probably hoping (assuming production isn’t an issue); that, and I doubt Intel will cut prices that far… but that will be a wait-and-see with Intel’s response, along with how many AMD can ACTUALLY make.
            2) As you see with their GPUs, they also want to EXPAND the performance bands upward and drop prices for the middle tier, which helps grow the market for things like VR, HDR, etc., which in turn can really fuel GROWTH in the market…

            I look at it as AMD trying to get everyone to move up to the next tier from what they would usually buy. So if you were someone who was satisfied with two cores, now you’ll look at their 4-core Zen; people with quad cores will look at the 6-core Zen; and more people will buy into (and be able to afford) the high-end 8-cores. This, along with Vega, will let that many more people have top-of-the-line VR, 21:9, HDR, and more, and hopefully enable those markets to grow faster.

      • jts888
      • 3 years ago

      This wasn’t the actual launch, so giving Intel competitive pricing info so far in advance of customer delivery probably wouldn’t make the (longer term) shareholders happy.

      • ptsant
      • 3 years ago

      Everyone has been saying $350, with possibly $500-600 for a “special” high-end model, possibly with a watercooler and/or cherry-picked silicon.

      If, as I predicted, the performance is roughly that of a 6600K in single-threaded work and clearly better than a 6700K in multithreaded work, then $300-350 is reasonable. Over $500 means it has to win all benchmarks against the 7700K. Not very likely.

        • AnotherReader
        • 3 years ago

        Broadwell-EP is slower than the 6700K in all single threaded workloads, but it isn’t priced any lower. The fastest variant of an eight core, 16 thread, relatively high-clocked CPU with presumably competitive single threaded performance shouldn’t be sold for less than $400.

          • Krogoth
          • 3 years ago

          FTFY: Broadwell-EP is slower than the 6700K in any workload that uses four threads or fewer. 😉

            • AnotherReader
            • 3 years ago

            That is true and it doesn’t make Broadwell-EP bad.

            • Krogoth
            • 3 years ago

            It isn’t a bad processor. It is just not a good value for mainstream workloads. The lower-end models are excellent for streaming/gaming usage patterns though.

          • ptsant
          • 3 years ago

          I would agree if the name on the chip were Intel. This is AMD. They suck at marketing and have far less brand recognition. Plus, Intel manages to price the -E series higher by providing a better platform (8 RAM slots, gobs of PCIe lanes, etc.) and artificial product segmentation.

          $300-350 is a fair price, I would say…

      • ronch
      • 3 years ago

      The only people who will be disappointed are those who want to get a great product but don’t want to pay a fair price for it.

      As someone once said, “I’m not expensive, you just can’t afford me.”

        • K-L-Waster
        • 3 years ago

        I suspect you’re right. There seems to be a subset of posters who are convinced that even if RyZen is a top performer, it will still be priced sub-$300 “because AMD doesn’t gouge us like Intel does.”

        If it ends up being competitive with Intel, it will likely be priced similarly too. (Which is fair play, IMO: AMD shouldn’t be expected to give up revenue and profitability if they have a product that delivers the performance.)

    • WaltC
    • 3 years ago

    I will be buying Ryzen next year without a doubt, along with a new mobo, DDR4, and more…! This is what I’d hoped for with the FX series, but AMD management dropped the ball on that one. Critically for AMD, Dr. Lisa Su keeps demonstrating that she knows the market very well, much better than any previous AMD exec (with the exception of Sanders, of course). I am certain that AMD’s best years lie ahead of it. I think that if it comes out at 90%-110% of the performance of the i7-6900K, while consuming less than 2/3 the power and costing 33%-50% of what the i7 retails for, I’ll be way ahead of the game Intel would like to play with me…;) It’s difficult to say, but it looks like there’s no contest between Ryzen and the i7-6700K, which Ryzen seems to blow away… we shall see…;)

    IPC: If you watched the presentation today, you will have seen Dr. Su state flatly that the production-level chips exceed their 40% IPC-improvement goal. She seemed very proud of that.

    Benchmarks: I have yet to read an Intel/AMD CPU review/comparison which failed to roll out all kinds of “canned” synthetics to “demonstrate” the CPU…;)

    Boost clock: Will not be finalized until shipping, which indicates to me they are still in the process of finding out what the architecture/chip can do in that regard.

    Overall, not too terribly much that we didn’t already know. I would like to hear more, but I know that most of my questions will have to wait for production chips to hit the market.

    • Tristan
    • 3 years ago

    They benchmarked Handbrake with another custom workload.
    It is still AMD propaganda, though.

    • Tristan
    • 3 years ago

    New HoRyzens

      • Coran Fixx
      • 3 years ago

      Reminds me of the commercial on the MadTV show “Lowered Expectations”. Hopefully single core performance is ok.

    • synthtel2
    • 3 years ago

    About time CPUs started doing boost Nvidia-style. Maybe I’ll use some gallium TIM if I get Zen.

      • willmore
      • 3 years ago

      Don’t use it with an aluminum heatsink.

        • synthtel2
        • 3 years ago

        The base and heatpipes are nickel-plated copper. It’s no cheap stuff. If my HSF were lacking, I’d have suggested an upgrade to that before the gallium TIM. 😉

          • willmore
          • 3 years ago

          Just to be clear, you know that gallium and aluminum don’t play well together, right?

            • synthtel2
            • 3 years ago

            Yes. Why so much concern about that? I thought gallium’s dangers were pretty well known.

            Edit: I meant “why are you so concerned”, not “why is gallium a concern”. See my next post below.

            • Mr Bill
            • 3 years ago

            e.g. Gallium’s danger to aluminum? Same as that for mercury or even mercury salts. Amalgamation to the aluminum beneath the oxide layer exposes it to very fast corrosion.

            • synthtel2
            • 3 years ago

            Bad phrasing on my part, sorry. I’m well aware of the precise problem, but I thought everyone else was too, to the extent that willmore and you are sounding like captains obvious (no offense meant). My confusion is because this conversation’s path implies that people regularly wreck heatsinks with this stuff. Are people really so bad at reading that they think they can use it like any other TIM?

    • just brew it!
    • 3 years ago

    XFR is gonna make benchmarking (and interpreting benchmarking results) really “interesting”. Real-world performance even for a non-overclocked system will be dependent on how lucky you are in the silicon lottery, HSF and case cooling effectiveness, and ambient temperature.

      • DPete27
      • 3 years ago

      I’m sure there’s a limit to how far the chip will push itself. Nvidia’s GPU Boost has this feature, and their GTX 10xx-series cards will exceed their advertised boost clocks, but there is a finite limit regardless of how well the cooling is holding up.
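      As a thought experiment, an XFR-style control loop might look something like the toy sketch below. AMD hasn’t published the algorithm, so the threshold, step size, and hard cap are made-up placeholders; the point is only the shape of the loop: push frequency while thermal headroom exists, back off when it doesn’t, and never exceed a hard silicon limit.

      [code<]
      # Hypothetical XFR-style boost loop. All numbers are placeholders;
      # AMD hasn't disclosed the real algorithm or limits.
      BASE_MHZ = 3400
      HARD_CAP_MHZ = 4100      # assumed fused-in ceiling (hypothetical)
      STEP_MHZ = 25
      TEMP_TARGET_C = 75.0     # placeholder throttle point

      def boost_step(freq_mhz, die_temp_c):
          """Return the next frequency given the current die temperature."""
          if die_temp_c < TEMP_TARGET_C and freq_mhz < HARD_CAP_MHZ:
              return freq_mhz + STEP_MHZ   # headroom available: push higher
          if die_temp_c >= TEMP_TARGET_C and freq_mhz > BASE_MHZ:
              return freq_mhz - STEP_MHZ   # too hot: back off toward base
          return freq_mhz                  # at the cap or at base: hold

      # A better cooler keeps die_temp_c lower at any given frequency, so the
      # loop settles at a higher steady-state clock -- which is exactly why
      # results would depend on the cooler, the case, and the ambient temp.
      freq = BASE_MHZ
      for temp in (60.0, 62.0, 66.0, 77.0, 74.0):   # fake telemetry samples
          freq = boost_step(freq, temp)
      print(freq)   # ends above base if the thermals allowed it
      [/code<]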

        • PixelArmy
        • 3 years ago

        I feel like, as described, XFR assumes that temps being OK means the CPU is running stable. I’m not sure I agree with that assumption. Hopefully, though, there will be more settings to fine-tune this.

        It’s kinda like how Chill on the GPU side assumes that no input means the user doesn’t care about a lower framerate; I’m not sure I agree there either. But at least there, there are software profiles…

      • ZGradt
      • 3 years ago

      Sounds like the Radeon 290. From what I remember, the first benchmarks varied quite a bit depending on what the reviewer had the thermostat set at. It seems like most silicon hits voltage limits before thermal though.

    • Misel
    • 3 years ago

    SQUEEEEEEEEEEEE!!!!!

      • ronch
      • 3 years ago

      GUINEA PIGS!!!

    • Chrispy_
    • 3 years ago

    [quote=”Jeff”<]showed that Ryzen parts will most likely be competitive with Broadwell-E chips from both a performance and a performance-per-watt standpoint.[/quote<] At last. Hoo-rah!

      • DancinJack
      • 3 years ago

      Except they didn’t, really. They ran the same canned Blender “benchmark” and another HEAVILY multi-threaded workload on Handbrake. I wouldn’t take anything they said today as gospel. Review samples or bust.

        • derFunkenstein
        • 3 years ago

        The good news is they compared like for like: 8C/16T CPUs on both sides, and apparently they took the handcuffs off the 6900K. Maybe I’m not cool enough, but that seems to say to me that the per-core performance is fine.

          • DancinJack
          • 3 years ago

          It very well may be fine, but unless it’s within a few ticks of BDW-E at roughly the same clockspeed, then they’re doing everyone a disservice with that crap.

            • derFunkenstein
            • 3 years ago

            I have more issues with the 4K game demo, which isn’t CPU bound at all. If this is all about how great the CPU is, then why use such a high resolution? Show me how it does in 3DMark physics tests, or the FPS at 1080p, which is the most popular resolution on Steam. And don’t get me started about the Dota 2 test and how awful that setup was.

            • ptsant
            • 3 years ago

            Although I do understand your point, I think that 1080p testing (just like Dota 2 or CS:GO tests) is a bit old and probably not representative of people who buy new hardware, especially CPUs over $200. It is a delicate balance between revealing differences and also looking at situations that are likely to occur in real life. Yes, differences at 1440p are likely to be smaller, but if in practice CPUs are equivalent in the resolution you are likely to use, it’s probably nice to know.

            • Ninjitsu
            • 3 years ago

            1080p is very relevant, unless you have proof otherwise. My CPU cost $260; I have a 1080p monitor.

            (What is the relationship between CPU cost and monitor resolution anyway? They seem very unrelated.)

            • ptsant
            • 3 years ago

            Nobody has proof of what is relevant for current buyers, because relevance is dictated not by what people have but by what they intend to get. My opinion is not proof, but people who game and buy new CPUs are likely to gravitate towards 1440p quite soon. This is even more likely for people who buy higher-end CPUs like the 6900K and the high-end Zen shown in this particular case.

            • Ninjitsu
            • 3 years ago

            I’m still not seeing the connection between resolution and the CPU. For gaming purposes, IPC and clock speed are still key, and past a point it’s mostly GPU-bound. So no, I don’t see how 1440p is more relevant from a CPU point of view. Lower resolutions shift more load onto the CPU, and that’s more relevant from a benchmarking point of view. The market will transition to 1440p when it’s cheap enough to do so (from both a GPU and a monitor perspective). The CPU will probably be less relevant until mainstream GPUs are pushing 100 fps at 1440p. That point is not today, and it’ll probably be two more years until it comes.

            I had a Q8400 and a GTX 560 when I went from 1024×768 to 1080p. Then I switched to a 4690K and kept the same monitor. The GTX 560 died, so I got a 660 Ti Boost for cheap. Same monitor. If I had to change anything about the rig, I’d change the GPU.

            What I’m saying is, unless [i<]work[/i<] dictates getting a bigger screen [i<]and[/i<] a better CPU, there’s unlikely to be a correlation. There may be a correlation between a higher-res monitor and a better [b<]GPU[/b<]; that I would accept.

            Finally, (most) people don’t buy the 6900K to game on, and neither will people buy an 8-core Zen for that purpose. If they buy a higher-res monitor it’ll be for work purposes, not for gaming.

            • Chrispy_
            • 3 years ago

            The thing about Broadwell-E is that it had so much cache, and such an effective internal ring bus, that it actually scales up really well.

            What I’m implying is that Broadwell-E’s 8C16T performance isn’t that far removed from its 1C2T performance, so [s<]Zen's[/s<] Ryzen's IPC can't possibly be that bad, even if I'm as doubtful and cynical as you are.

            PS. Don't take doubtful and cynical as an insult; those are actually common traits of the wise and experienced among us.

            • DancinJack
            • 3 years ago

            I didn’t take them as an insult, no worries.

            I just don’t think that, with the information we have, we can really draw a conclusion about Zen IPC at this point. Lisa did say “exceeding” the 40 percent they had previously discussed, but I just don’t care to take two cherry-picked, heavily multi-threaded examples chosen by AMD as truth. And like I said, IPC may very well be that good for everything else too, but I’m just not going to say OK until individual reviewers have chips on hand and show me independently of AMD.

            You may very well be right about BDW-E IPC, too. Luckily, there is empirical evidence from which to draw that conclusion. With Zen? Negative.

            Edit: grammar/spelling

          • Kretschmer
          • 3 years ago

          Remember all the past AMD benchmarks that showed them in a favorable light?

          Remember the TR reviews for those same products?

          Take these demos with a grain of salt.

            • derFunkenstein
            • 3 years ago

            The difference between Bulldozer HYPE and Ryzen HYPE is that Bulldozer was comparing itself to 4C/8T CPUs which basically gave away the fact that per-thread performance sucked. These are going toe-to-toe with Intel’s very best desktop CPUs. I mean, sure, there’s a chance that we’re all going to laugh at launch, but it seems like a bigger deal. And I am very happy with my Intel systems. I’m not going to build a Zen machine and I have no money in this race.

        • Zizy
        • 3 years ago

        Eh, why would you buy BDW-E if not to run heavily multithreaded stuff? The not-that-multithreaded crowd will pick a consumer i5/i7.
        IF these results are a good indication of general multithreaded performance, Zen does what it is supposed to do pretty well: be a decent workstation/server chip (in both perf and perf/W). It doesn’t match top Intel parts, but it serves the needs of the majority of the market.

        Yes, it might not be competitive in other workloads and therefore be a poor competitor on the consumer platform. We shall see in a few months.

        Compare that with the BD launch. Performance in multithreaded workloads was there only against the consumer platform. Perf against the workstation platform was bad, and perf/W universally sucked. The only thing AMD could compete on was price: offer the chip cheaper than the consumer i7.

        • bfar
        • 3 years ago

        If they’re confident enough to run a live test against a competitor’s product, it’s usually a very good sign. That’s the main conclusion I’ll draw from this right now.

        Yes, of course we’ll all wait for a good review or two before we hand over a red cent.

    • CheetoPet
    • 3 years ago

    very cautiously optimistic (very)

    • Srsly_Bro
    • 3 years ago

    Ryzen looks to be a great CPU. The demonstrations were great and showed the strengths of the CPU vs the competition.

    Side note:

    Did anyone notice the guy’s shirt was way too small for his big gut? I feel bad for the guy as he probs wasn’t aware it was that bad on camera.

    Zen is awesome, but I’m more interested in hearing about Naples.

    • astrotech66
    • 3 years ago

    A sneak peek at Vega, too, with some Star Wars thrown in … kinda cool.

    • chuckula
    • 3 years ago

    Ryzen is obviously the winner.
    I mean, Perceptron is my favorite Autobot and AMD managed to cram him into every freakin’ chip!

    Game over Intel! Game over!

      • DancinJack
      • 3 years ago

      AMD > INTEL. I predict Intel will have to stop making x86 CPUs in two years with this kind of performance coming from AMD.

      • derFunkenstein
      • 3 years ago

      If you’re talking about the microscope Autobot from the G1 cartoon, his name was Perceptor. #REKT

        • sweatshopking
        • 3 years ago

        lol of course you’d know that.

          • derFunkenstein
          • 3 years ago

          I basically grew up watching that cartoon every day and we even recorded it on VHS tapes. I loved that show.

    • Waco
    • 3 years ago

    Ha, they did use a Titan X for the comparison.

      • DrDominodog51
      • 3 years ago

      Since RTG is bundling their cards with Intel CPUs, why would AMD not demo their CPUs with Nvidia cards?

      • jts888
      • 3 years ago

      They actually tried to do a Vega “and one more thing” reveal at the end but botched it pretty badly.
      It was some unreleased Star Wars Battlefront DLC that needed like a minute to join a server for some reason, so everyone was just standing around at what should have been the hype climax of the show.

        • Waco
        • 3 years ago

        Yeah, I cringed. 🙁

        • Mr Bill
        • 3 years ago

        I thought that went well enough. All the nerds know that networks or servers can be sluggish sometimes.

          • derFunkenstein
          • 3 years ago

          I don’t think we’re placing quite enough of the blame on DICE and EA here, for this very reason.

      • Tristan
      • 3 years ago

      AMD said that Fiji is able to handle 4K, but didn’t say whether that’s at 30 fps.

        • RAGEPRO
        • 3 years ago

        Fiji has DisplayPort 1.2a, which supports 4K @ 60 Hz.
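
        The bandwidth arithmetic backs that up (these are the standard DisplayPort 1.2 figures, not numbers from the article): four HBR2 lanes at 5.4 Gbit/s each, minus the 8b/10b encoding overhead, comfortably exceed what 4K60 at 24-bit color needs.

        $$ 4 \times 5.4\ \text{Gbit/s} \times \tfrac{8}{10} = 17.28\ \text{Gbit/s} \;>\; 3840 \times 2160 \times 60\ \text{Hz} \times 24\ \text{bit} \approx 11.9\ \text{Gbit/s} $$

        The right-hand side counts active pixels only; blanking pushes the real requirement closer to 12.8 Gbit/s, still well under the link’s capacity.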

          • Krogoth
          • 3 years ago

          I think he mixed up DisplayPort 1.2 with HDMI 2.0, which Fiji does not possess. It doesn’t really matter, though, since HDMI 2.0 is meant for A/V devices, and native HDMI 2.0 devices do not support FreeSync/G-Sync yet.

            • freebird
            • 3 years ago

            Just a matter of time… there are HDMI monitors that support FreeSync:
            http://edgeup.asus.com/2016/10/14/vg245h/

            I’d personally be VERY SURPRISED if the Xbox Scorpio doesn’t support FreeSync on TVs (4K or 1080p) when it is released. You won’t hear them talk about it until a month before release, though; you don’t want that cat out of the bag until then… it could be a real game changer for consoles.

            • jts888
            • 3 years ago

            Variable sync is a nice band-aid for lower refresh rates, but I wish we’d see a push for whole-chain 120 Hz in the consumer TV and media market.

            Certainly not every game on every console could manage 1080p120 much less UHD@120Hz, but those with simplified/stylized graphics could get a huge boost in smoothness and responsiveness.

            As a side benefit, most of the judder of 30-60 Hz v-sync video goes away on a 120 Hz display.
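
            The judder point is simple refresh arithmetic: 120 Hz divides evenly by every common video frame rate, while 60 Hz does not.

            $$ \tfrac{120}{24} = 5, \quad \tfrac{120}{30} = 4, \quad \tfrac{120}{60} = 2, \qquad \text{but} \quad \tfrac{60}{24} = 2.5 $$

            That non-integer 2.5 is why 24 fps film on a 60 Hz panel needs 3:2 pulldown, alternating frames held for three refreshes and two, which is the judder; on a 120 Hz panel every frame is simply shown five times.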

            • Magic Hate Ball
            • 3 years ago

            It would be nice if a TV maker or two would put the effort in for a console gamer TV.

            With variable refresh and low latency.

            With the number of people who play games on TVs, you’d think someone would have thought to work with Microsoft/Sony/Nintendo and create a niche market with a slight price premium.

            • jts888
            • 3 years ago

            It would only work if MS and Sony collaborated very early in their design cycles and agreed to put forth the functionality simultaneously with new generation releases.

            Otherwise you’d have instant Balkanization as the 2nd company to release would try to form a competing incompatible standard with other TV manufacturers just to hamstring the 1st.

            • ptsant
            • 3 years ago

            I don’t think there is, or will be, a push to 120 Hz until the content comes. Right now we don’t even have 4K content for most of the stuff. Also, I would argue that, with the exception of twitchy games, almost any other situation (TV, films, non-twitchy games) is probably better served by HDR. OLED is truly spectacular in that regard.

            As for games, you said it yourself: 1080p120 is already a very high target for the consoles. It is, in fact, equivalent in fill rate to 4K30. Guess which one marketing prefers.
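
            That fill-rate equivalence is exact, as a quick check shows:

            $$ 1920 \times 1080 \times 120\ \text{Hz} = 3840 \times 2160 \times 30\ \text{Hz} = 248{,}832{,}000\ \text{px/s} $$

            Doubling each linear dimension quadruples the pixel count, so quartering the frame rate leaves pixel throughput unchanged.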

      • Mr Bill
      • 3 years ago

      Wonder if there is any synergy between RyZEN and Polaris/Vega.

    • Yan
    • 3 years ago

    Do we know how well these CPUs will support prior versions of Windows (https://techreport.com/news/29611/win10-will-be-the-only-windows-supported-on-next-gen-hardware)?

      • Forge
      • 3 years ago

      I don’t know about previous Windows, but they run Linux pretty well, even older kernels.

        • just brew it!
        • 3 years ago

        I’m sure AMD has been working with the Linux community, and the CPU side is covered. I’m a little more concerned about driver support for stuff like the USB controller on the new chipset. If the USB controller is (as rumored) an ASMedia design, we may be in for a rough ride.

          • MOSFET
          • 3 years ago

          I dunno, for me these days, ASMedia is pretty equivalent to Realtek. I would prefer my NIC or USB to be an Intel (or some other) chip, but even when they end up Realtek or ASMedia, they work fine. I’m convinced that ASM1042 USB3 is better than AMD 990FX USB3, and you, jbi, have just the mobo to check it out (IIRC).

          For some reason I just don’t want my CPU to be an Intel chip. Keep it to NICs, Thunderbolt, and maybe, one day again, dGPUs.

            • just brew it!
            • 3 years ago

            990FX does not have native USB 3.0 support. My M5A99FX Pro R2.0 uses an ASMedia chip for the USB 3.0 support (and it sucks massively under Linux, to the point of being basically useless).

            ASMedia SATA chips have been OK in my experience.

            In-tree Linux drivers for Realtek NICs and audio (which were provided to the Linux community by Realtek, BTW) were hit-or-miss until around the 2012 timeframe; you frequently had to download the latest driver source tarball from Realtek’s site and build it for your system to make things not suck. Since then they’ve generally been OK.

            • freebird
            • 3 years ago

            Yeah, and neither does the VESA Local Bus motherboard gathering dust in my basement… 😀

      • Krogoth
      • 3 years ago

      Microsoft will keep withholding direct support for any post-Skylake CPUs from either vendor on its older, pre-Windows 10 OSes. You are completely dependent on third-party support.

      They did this with Windows XP back in the day with regard to anything newer than the K8/Core 2 platforms. That’s why there’s no “generic” Microsoft driver for USB3 and PCIe controllers under XP.

      I doubt AMD will outright drop “basic” support for anything older than Windows 10 on these platforms, but don’t expect anything from Microsoft. I suspect AMD will make Windows XP the demarcation point, since it has reached EOL status and the industry has been moving away from it over the past five years.

      • Pwnstar
      • 3 years ago

      Get with the future, grandpa.

    • DrDominodog51
    • 3 years ago

    So XFR is similar to whatever Nvidia has on their Pascal GPUs?

      • RAGEPRO
      • 3 years ago

      I think it’s more similar to Turbo Boost MAX from Broadwell-E.

    • LSDX
    • 3 years ago

    Crossing fingers for AMD. Not a fanboy, but we badly need competition back in the CPU market. How many years have we been stuck with more of the same micro-improvements from Intel now?

    And we’re still stuck somewhere between 3.2 and 4.2 GHz… c’mon, you can do better; bring back the GHz race 😉

      • DragonDaddyBear
      • 3 years ago

      Amen! Crank up them clocks, AMD. Winter is here, it’s making up for lost ground (er, cold), and my feet need a heater.

    • drfish
    • 3 years ago

    They see bulldozen’
    They hatin’, and trollin’, and tryin’ a catch me Ryzen dirty

    I see myself out…

      • DoomGuy64
      • 3 years ago

      lol, Make AMD great again.

        • 1sh
        • 3 years ago

        Yes we can.

          • freebird
          • 3 years ago

          If you like your Bulldozer, you can keep your Bulldozer…

      • Jigar
      • 3 years ago

      I read that in “They see me rollin” rhythm… +1 to you.

      • ludi
      • 3 years ago

      No, no out. You’re committed now. Finish the song!

        • drfish
        • 3 years ago

        Heh, brevity is the soul of wit and all that… 😉

      • RickyTick
      • 3 years ago

      Obviously white and nerdy.

    • DragonDaddyBear
    • 3 years ago

    I’m not seeing anything about IPC here. Everything that’s been discussed is heavily threaded. Here’s hoping single-threaded performance at least delivers the 40% improvement they’ve been touting. But for people who need more cores, this might be pretty awesome.

      • xeridea
      • 3 years ago

      They ran it against the 8C/16T 6900K, so IPC should be similar… unless SMT is somehow giving more of a boost than HT is. We don’t know boost clocks, so it’s hard to tell for sure, but it seems very competitive.
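
      To put a rough number on that inference (a sketch, with one labeled assumption): for a fixed workload, IPC scales as work divided by cores × clock × time, so with core counts and completion times matched, the IPC ratio reduces to the clock ratio. The article gives Ryzen’s fixed 3.4 GHz; the 6900K running an all-core turbo near 3.5 GHz is an assumption based on commonly reported figures, not something AMD disclosed.

      $$ \frac{\text{IPC}_{\text{Zen}}}{\text{IPC}_{\text{BDW-E}}} = \frac{f_{\text{BDW-E}}\, t_{\text{BDW-E}}}{f_{\text{Zen}}\, t_{\text{Zen}}} \approx \frac{3.5}{3.4} \approx 1.03 $$

      On those assumptions, Zen’s per-clock throughput in this one workload lands within a few percent of Broadwell-E’s; it says nothing about lightly threaded IPC.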

        • kmm
        • 3 years ago

        Well, they picked this specific workload for a reason, out of multiple possible non-controversial/obscure options. I wouldn’t say that it’s a difference between Intel and AMD’s SMT implementations but perhaps how this specific workload maps out onto the hardware.

        Still, seems unlikely that IPC is not overall competitive across more workloads, so this is a very good sign.

          • xeridea
          • 3 years ago

          For SMT I was thinking perhaps one CPU has different amounts of resources per core, and so may get more or less of a boost from multiple threads. Of course we don’t really know, so I was just throwing it out there as a possibility.

          Improved branch prediction is awesome, this is a major factor in many workloads.

          Can’t wait for launch!

            • synthtel2
            • 3 years ago

            I for one wouldn’t be surprised if Zen’s SMT is more effective than we’re used to, since it spreads out functionality across more execution units.

          • Pwnstar
          • 3 years ago

          Your double negative in that sentence hurts my brain.

    • AnotherReader
    • 3 years ago

    Given the rumours, the base clock is higher than expected. Let’s see what today’s revelations are; so far, Zen, nay RyZen, looks promising.

    • chuckula
    • 3 years ago

    “My conversations with AMD employees suggest we’ll learn more about this topic in future briefings, so we can probably stand down with the pitchforks for the moment.”

    Next time, don’t bother feeding Wasson beer. Just go right to the Scotch.
