Zen 2-based Ryzen and Epyc processors are coming this summer

After the Radeon VII demo at its CES 2019 keynote, AMD CEO Lisa Su talked a bit more about gaming and then moved straight into the Zen 2 reveal. The new CPU core will naturally be finding its way into both datacenter and desktop processors, and AMD talked a bit about both. The company also demoed some early Socket AM4 silicon for us.

The first thing AMD said about its next-generation Epyc processors is that they'll offer double the performance per socket compared to the previous-generation Epyc chips, and specifically quadruple the floating-point performance. The new chips are, naturally, fabricated on TSMC's 7-nm process, and as we heard before, they'll offer up to 64 cores in a single socket.

AMD showed a brief demo of a single 64-core Epyc processor running a NAMD molecular dynamics simulation in direct comparison against a machine with two 28-core Xeon Platinum 8180 chips. Given the highly parallel nature of the NAMD software, it's not too surprising that the 64-core Epyc walked away with a decisive victory, but the circa-20% performance advantage for the Epyc is a good bit greater than the simple difference in core counts (64 vs. 56, or about 14%) would imply.

Lisa Su hyped up the crowd a bit before announcing a preview of the third-generation Ryzen desktop processors. Unsurprisingly, they'll be designated the Ryzen 3000 series, and like the Epyc chips above, will be based on 7-nm Zen 2 cores.

The company started off with a brief gaming demo focused on Forza Horizon 4, where a Ryzen 3000-series machine with a Radeon VII graphics card was able to maintain over 110 FPS in the game running at 1920×1080 resolution with graphics settings at maximum. While that may not sound all that impressive on the face of it, high-refresh gaming requires strong single-threaded performance, and keeping the resolution relatively low helps emphasize the CPU over the GPU.

Afterward, AMD demonstrated a bit of early third-generation Ryzen silicon running Cinebench R15 alongside an Intel Core i9-9900K doing the same. Su noted that the eight-core Ryzen processor was pre-release silicon running non-final frequencies. Despite that, AMD's chip posted a score of 2057, trumping the Core i9's 2040. That's in the ballpark of the 2072 score we measured for the Core i9-9900K ourselves, so this test paints the upcoming Ryzen chips in a fair light—at least as far as Cinebench is concerned.

The more impressive part of this presentation is that AMD showed power consumption numbers for these systems on screen. The Intel machine purportedly drew around 180 W during the demonstration, while AMD's pre-release chip apparently pulled only around 135 W. That's a remarkable claimed advantage in power efficiency, although it's more or less what we would expect from chips on a next-generation fabrication process.

Dr. Su concluded the presentation by holding up a delidded Ryzen 3000 processor for the crowd to ogle. Seeing the chip laid bare—sorry, that is, the chips—all but confirmed what many in the enthusiast community had suspected: Ryzen processors on Socket AM4 are getting their I/O duties shunted off to a separate die, just like their server-bound cousins above. The smaller of the two dice is the eight-core compute chiplet, while the larger die handles the memory interface, PCI Express 4.0, and other I/O duties.

A render of one of the Ryzen 3000-series processors. Source: AMD

Rumors had indicated that AMD would be launching 16-core processors for Socket AM4, but there was no mention of any such thing today; the chip pictured above is an eight-core model. However, there appears to be room in the package for a second compute chiplet. It's possible that AMD could release a "mini-Ripper" using two 8-core dice inside the same package. The presence of the I/O die could make slotting such a product into existing AM4 boards a whole lot simpler.

Source: AMD

Lisa Su remarked that the upcoming Ryzen 3000 chips will be a drop-in upgrade for folks on existing Socket AM4 machines, although doing so will likely mean missing out on PCI Express 4.0—not that that's likely to make any real difference for most users. AMD says that new boards with PCIe 4.0 support will be available around the same time as the new CPUs, in the summer of this year. Given that we're a ways out from the new chips, the company hasn't breathed a word about prices yet.

Comments closed
    • kuttan
    • 9 months ago

    If this were Intel, a new socket and motherboard would be a guaranteed requirement, with silly excuses cited to justify it.

    • Wirko
    • 9 months ago

    Are the NAMD benchmark results in nanoseconds per day? That seems impressively little, but I'm not even a noob in this field. What kind of chemical reaction is being simulated in this particular benchmark?

    • Klimax
    • 9 months ago

    It doesn't look like they matched Intel's IPC at Intel's frequencies; otherwise they'd be shouting it front and center. Looks like there'd once again be quite a few catches to AMD's offer. ETA: Thinking about it more, this might not be a NUMA case, just a potentially overstretched memory controller. FSB-style memory access is apparently back. It'll definitely be an interesting attempt from the POV of memory-latency- and memory-bandwidth-sensitive applications.

    Note on the absence of IPC benchmarks: on the other hand, it is quite smart, because that way they don't get stuck in the same problem as Intel for a while.

    And note 2: pity about the absence of any information on AVX, especially AVX-512.

      • Goty
      • 9 months ago

      Well, one of three things is likely true:

      AMD has matched Intel in BOTH IPC and clockspeeds
      OR
      AMD still lags behind in IPC but already has a clockspeed advantage on ES parts
      OR
      AMD still lags behind in clockspeeds (still on an ES part) but has an IPC advantage

      There’s some wiggle room on each of those given that we have only one data point, but one of those scenarios is likely true. Personally, I don’t think it matters which is true because the whole IPC argument is entirely academic. If AMD or Intel came out with an architecture with half the IPC of the previous generation but three times the clockspeed (keeping power in check, of course), I don’t think any of us would really be complaining about the 50% performance boost.
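      (Back-of-the-envelope: performance is roughly IPC × clock, so half the IPC at three times the clock is 0.5 × 3 = 1.5×, i.e. exactly that 50% boost.)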

      Lastly, we do have some information on AVX, courtesy of what we know about Rome. Each core now has full 256-bit-wide registers, meaning AVX2 operations should now run at full rate, similar to Skylake on the desktop, but there is no support for AVX-512. I will now let chuckula step in to explain how that means all AMD CPUs are worthless, and how any benchmark that doesn't make use of AVX-512 is completely useless.

      • chuckula
      • 9 months ago

      AMD has stated on the record that there's zero support for any of AVX-512.
      When they finally get around to it (Papermaster's comments on Zen 3 tend to indicate it will also lack support), we'll know well in advance, since a large amount of software prep work is needed to enable a major upgrade like AVX-512 prior to silicon launch. That's how we know exactly what Ice Lake's support levels are even though it clearly hasn't launched yet.

      • Waco
      • 9 months ago

      There's a huge gap in latency and bandwidth between Infinity Fabric and the archaic frontside bus. I've heard very good things about "remote" access latency for Zen 2.

      EDIT: chuckula, please stop giving 3 thumbs down to anyone posting anything remotely praising AMD.

        • freebird
        • 9 months ago

        I've become accustomed to my posts on here being immediately down-voted by the same 3 thumbs… 😀

        and then challenged to buy him stuff if AMD doesn’t beat Intel at AVX-512 in the next CPU…

        My opinion is that AMD views AVX-512-style work as better left to GPU compute units, and that for the vast majority of server and desktop CPUs, that silicon space is better used for other purposes, or not used at all to make the die smaller.

        I’d be much more interested to see if they sneaked something like bfloat16 into the FPU.

          • dragontamer5788
          • 9 months ago

          [quote]My opinion is that AMD views AVX-512-style work as better left to GPU compute units, and that for the vast majority of server and desktop CPUs, that silicon space is better used for other purposes, or not used at all to make the die smaller.[/quote]

          There are many cases where AVX or AVX-512 is better. If a task is actually contained in L3, L2, or L1 cache, it makes no sense to push the data to DDR4 RAM... and then push it over PCIe, then to GPU RAM, then to GPU registers, and back again.

          Even tasks that are DDR4-constrained are often better done on the CPU: dual-channel DDR4 is ~40 GB/s of bandwidth, but PCIe 3.0 x16 is only ~15 GB/s. So it doesn't make sense to move data to the GPU if you are RAM-constrained. Right there, anything with ~1 million data elements (8 bytes per element × 1 million == 8 MB, which typically fits in L3 cache) is far faster staying on the CPU. You need a huge data set, tolerance for low communication speeds (only ~15 GB/s over PCIe), AND latency-insensitive code before you can benefit from GPUs.

          On the other hand, AVX2 doubles the flops any CPU can do. Sure, it is difficult to program, but the SIMD model employed by GPUs is well studied at this point. Programmers know how to handle that kind of parallelism.

          [quote]I'd be much more interested to see if they sneaked something like bfloat16 into the FPU.[/quote]

          Isn't bfloat16 conversion stupidly simple to do?

          [code]
          uint16_t blah = foo();                // raw bfloat16 bits
          uint32_t bits = uint32_t(blah) << 16; // bfloat16 is the top half of a float
          float x;
          std::memcpy(&x, &bits, sizeof(x));
          [/code]

          In effect, FPUs already support bfloat16, since all known GPUs support a simple bitshift left by 16.

          EDIT: It should be noted that AMD GPUs store shorts/chars packed into 32-bit registers, so one vGPR can hold 4 chars or 2 shorts. There are even assembly-level instructions to handle the extraction and bitshift without penalty. So on AMD GPUs at the very least, bfloat16 is fully supported. I know Nvidia GPUs were slow with bitshifts for a long time, but I'm pretty sure Volta/Turing have full-speed bitshifts now.
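          For concreteness, a fuller round-trip sketch (my own illustration; foo() above is a hypothetical source of raw bfloat16 bits, and this truncates rather than doing hardware-style round-to-nearest-even):

          [code]
          #include <cstdint>
          #include <cstring>
          #include <cstdio>

          // float -> bfloat16 by truncation: keep sign, exponent, top 7 mantissa bits.
          static uint16_t float_to_bf16(float f) {
              uint32_t bits;
              std::memcpy(&bits, &f, sizeof(bits));
              return static_cast<uint16_t>(bits >> 16);
          }

          // bfloat16 -> float: shift the 16 bits back into the top half.
          static float bf16_to_float(uint16_t h) {
              uint32_t bits = static_cast<uint32_t>(h) << 16;
              float f;
              std::memcpy(&f, &bits, sizeof(f));
              return f;
          }

          int main() {
              float x = 3.14159f;
              float y = bf16_to_float(float_to_bf16(x));
              std::printf("%f -> %f\n", x, y); // 3.141590 -> 3.140625
          }
          [/code]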

            • freebird
            • 9 months ago

            [quote]Isn't bfloat16 conversion stupidly simple to do?

            uint16_t blah = foo();
            uint32_t bits = uint32_t(blah) << 16;
            float x;
            std::memcpy(&x, &bits, sizeof(x));

            In effect, FPUs already support bfloat16, since all known GPUs support a simple bitshift left by 16.[/quote]

            If it is so stupidly simple to do, then why is Intel planning to add it to their server processors???
            https://www.phoronix.com/scan.php?page=news_item&px=Intel-BFloat16-Deep-Learning

            Damn, Intel must only hire the stupidest engineers out there, ones who couldn't find a job with any other tech firm, right?

            • dragontamer5788
            • 9 months ago

            [quote]If it is so stupidly simple to do, then why is Intel planning to add it to their server processors?[/quote]

            Because it allows them to recycle the FMA instruction without expending any additional die area, check off a bullet point on the "deep learning" hype train, and generally please their investors.

            But seriously: the whitepaper makes it obvious how bloody simple bfloat16 is. Did you read the whitepaper? It's like 7 pages. It's the simplest proposal I've seen, practically ever. CPU documentation is normally far more complicated than this:
            https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf

            I mean, it's a great idea. It doesn't use much die area, it's bloody simple to understand, and it's clearly beneficial to the workload in question. That doesn't change the fact that GPUs can practically do this already.

            BTW: Good engineers come up with simple solutions to complicated problems. BFloat16 is one of them. It's very simple, it's clearly a good design, and it's something a good engineer has thought about.

          • chuckula
          • 9 months ago

          Well, tell ya what: I like a huge range of string operations, base64 encode/decode, bit manipulations, ternary logic, and a whole host of other practical day-to-day tasks to run faster.

          And AVX-512 lets you do all of those things massively faster.

          Here’s just one random blog that shows you exactly how:
          [url<]http://0x80.pl/[/url<] Tell me again how we need to have GPUs to do the basic functions of a trivial javascript and how much smarter you are than everybody else because in your ignorant "opinion" we should just use the GPU for everything. Ironic too since the GPU on an 8-core Icelake will curbstomp a 16 coar RyZen in OpenCL (remember "the future is fusion?") Incidentally, a 16-core 14nm Skylake part would be quite small compared to our Chiplet-monstrosity from AMD. So what the hell is AMD wasting its die space on since it can't even be arsed to implement Intel's 5 year old instructions properly?

            • Goty
            • 9 months ago

            You can tell how worried chucky is by how often he feels the need to insult and personally attack everyone whose opinion differs slightly from his own, so he must be absolutely terrified that his precious Intel is going to get that shiny performance crown knocked from its head.

            • chuckula
            • 9 months ago

            I’m sorry… that you couldn’t understand anything in that post and had to respond with a personal attack to cover your own ignorance.

            Tell ya what, since AVX-512 is so stupid and all, just get a degree and code your new Radeon VII to do all of that stuff a million times faster since obviously GPUs are the ultimate general purpose processors…. all while falling over yourself about Zen for whatever reason.

            • Goty
            • 9 months ago

            Nothing personal about my post, just pointing out how personally [i]you[/i] seem to take all of this, and I've never said a thing about AVX-512 except to note that you think any CPU not supporting it is worthless.

            As a serious request to the siterunners: why are all of chuckula's personal attacks allowed to continue? Do any of his rants actually contribute anything to the site or discussion? Does he get free rein because he pays?

            • freebird
            • 9 months ago

            He gets three accounts to spam you with for that…

            • Waco
            • 9 months ago

            I don’t vote for silencing chuckula. He does go off the deep end sometimes (well, more than most) but he certainly can’t be faulted for being passionate and fairly knowledgeable about computing hardware.

            He certainly is waaaay too invested but he does do a pretty good job at spurring discussion. His biggest fault is insane hyperbole.

            • Goty
            • 9 months ago

            I don’t advocate for silencing him either, at least not permanently. I just think he needs a timeout so he can learn the value of participating in mature discussion instead of lashing out and name calling or insulting anyone who dares to have a different opinion.

            • cegras
            • 9 months ago

            I wouldn’t call spurring negative discussion a good thing. Chuckula’s contributions tend to be negative spins on publicly available information.

            • Waco
            • 9 months ago

            I see your point. I do appreciate the hyperbole sometimes when it’s not too extreme because it tends to get people going into the actual details. When it gets bad, though, it’s just a waste of time.

            • freebird
            • 9 months ago

            I notice any time you challenge his points (or mention his beloved AVX-512) the down-votes usually come with an immediate -3, which indicates to me he has at least 2 other “alter-egos” besides his Chucky Chuckula persona…

            • Goty
            • 9 months ago

            He gets three votes as part of his “Gold subscriber” status, but nobody cares about the votes.

            • freebird
            • 9 months ago

            They are probably devoting all the free silicon they can to building a neural network that models your indescribable Chuckula mind, since you are so smart and know everything that all the engineers at AMD and Intel can't seem to fathom how to build…

            What IS unfathomable is that you'd need a XEON processor or XEON Phi to run your basic functions of a trivial JavaScript, since they are the only processors Intel CURRENTLY sells that SUPPORT AVX-512.

        • anotherengineer
        • 9 months ago

        He can’t, for some reasons that are probably illogical.

        Sad really.

        • chuckula
        • 9 months ago

        Nice personal whine.

        Giving AMD all these accolades because their signaling rates for pushing data across a few millimeters of copper in a soldered-down package in 2019 are higher than an 11-year old Core 2 Quad that had to push data over close to 20cm of copper through a socket isn’t worthy of upthumbs, it’s frankly a troll.

        Don't believe me? How about I compliment Ice Lake's GPU technology this year compared to AMD's best integrated graphics from… 2017. See how many upthumbs I get.

        As for the magical world of Infinity fabric and the holy chiplets, you’ve made some frankly questionable statements yourself considering you’ve been going on about all the “flexibility” of chiplets while AMD’s own executives have said that AMD “chiplet” parts *aren’t getting graphics chiplets*.

        If "flexibility" means "Oh, we can throw more cores in" then I'm bored. Even right now Intel has commercially produced Xeon-FPGA combos that show a lot more flexibility than a "MIRACLE" I/O hub solution that lets MIRACLE 7nm chips get to 16 cores using 3 pieces of silicon when last year's model could get to 16 cores with 2 pieces of silicon.

        Funny how those idiots at Intel somehow manage to get “flexibility” by just making chips that have the stuff they want them to have.

          • Waco
          • 9 months ago

          [quote]Giving AMD all these accolades because their signaling rates for pushing data across a few millimeters of copper in a soldered-down package in 2019 are higher than an 11-year old Core 2 Quad that had to push data over close to 20cm of copper through a socket isn't worthy of upthumbs, it's frankly a troll.[/quote]

          You're stretching awfully far there...

            • chuckula
            • 9 months ago

            Where's your explanation for this article:
            https://www.anandtech.com/show/13852/amd-no-chiplet-apu-variant-on-matisse-cpu-tdp-range-same-as-ryzen2000

            This supposedly "ultra flexible" chiplet magic can't even get AMD to an 8-core desktop chip that can show Notepad without a discrete GPU in 2019, and probably not in 2020:

            [quote]AMD stated that, at this time, there will be no version of the current Matisse chiplet layout where one of those chiplets will be graphics. We were told that there will be Zen 2 processors with integrated graphics, presumably coming out much later after the desktop processors, but built in a different design. Ultimately APUs are both mobile first as well as lower cost parts (usually), so different design decisions will have to be made in order to support that market.[/quote]

            You've been selling this as a hell of a miraculous product, but needing a 14-nm GloFo special to get 3 hunks of silicon to 16 cores in 2019, when it only took 2 hunks of silicon to reach the same core count in 2018, isn't going to put Intel out of business.

            • Waco
            • 9 months ago

            I don’t know what you’re arguing for here. I’m excited about new tech and somehow you seem to think there are sides here worth fighting for.

            Your goalposts aren’t moving, they’re redshifted.

      • dragontamer5788
      • 9 months ago

      [quote]And note 2: pity about the absence of any information on AVX, especially AVX-512.[/quote]

      AVX-512 is likely Intel-only. But Zen 2 is known to have 4x 256-bit FP pipelines, as well as 256-bit-wide L1 cache datapaths. So AMD is entering the true 256-bit AVX/AVX2 world with Ryzen 3000 and Epyc 2.

      That's probably what the NAMD benchmark proves, btw. (I'm not an expert on the NAMD benchmark, though, but I'd expect it is at least 256-bit AVX/AVX2-enabled.)
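      For illustration, this is the kind of 256-bit work those pipelines chew through; a minimal AVX2/FMA sketch (my own example, assuming AVX2/FMA support and compiler flags like -mavx2 -mfma):

      [code]
      #include <immintrin.h>
      #include <cstdio>

      int main() {
          // y = 2*x + y over eight floats at a time with one fused multiply-add.
          alignas(32) float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
          alignas(32) float y[8] = {8, 7, 6, 5, 4, 3, 2, 1};

          __m256 va = _mm256_set1_ps(2.0f);
          __m256 vx = _mm256_load_ps(x);
          __m256 vy = _mm256_load_ps(y);
          vy = _mm256_fmadd_ps(va, vx, vy); // 8 lanes per instruction
          _mm256_store_ps(y, vy);

          std::printf("%g %g\n", y[0], y[7]); // 10 17
      }
      [/code]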

      • Gadoran
      • 9 months ago

      If AMD is happy, we are happy.
      It's pretty delusional to see a brand-new semi-7-nm chip able to match, but not clearly beat, a 14-nm Intel core developed years ago.
      Anyway… in autumn, an 8-core Ice Lake chip will do its job.

      But the worst thing is: why didn't they talk about 7-nm mobile CPUs for laptops?? The real revenue and profit is there.

    • sweatshopking
    • 9 months ago

    GUIZE, IT REALLY DOESN’T MATTER CAUSE MY WIFE WILL LITERALLY NEVER LET ME UPGRADE FROM MY 4790K. SERIOUSLY THOUGH, AMD AND INTEL SHOULD JUST STOP MAKING CHIPS CAUSE IM NOT ALLOWED TO BUY ANY.

      • Redocbew
      • 9 months ago

      This man speaks the truth.

      • K-L-Waster
      • 9 months ago

      Awww, but SSK, we wantz da chipz!

      If Dr. Su buys your wife that thing she really really really wants (psst what does your wife really really really want?) can we haz chipz again?

      • ronch
      • 9 months ago

      I can totally relate to you right now. I just got myself a new Ryzen laptop and I don't know how to let her know I bought it. What a mess I've dug myself into. Heh.

        • Goty
        • 9 months ago

        Just start using it and, when questioned about it, hit her with the, “Oh, this old thing?”

        I mean, it doesn't work with [i]my[/i] wife, but you could try...

          • ronch
          • 9 months ago

          I remember doing that with a watch I bought last year. She saw it and said something like, “Nice watch!!”, as though she thinks it’s nice but at the same time is saying “Why did you buy it?”. I quickly answered back, “Oh this? This isn’t new..” LoLoLoL

      • chuckula
      • 9 months ago

      YOU HEAR THAT INTEL: GIVE UP!

      • KeillRandor
      • 8 months ago

      I've still only got an i7 920 🙁 (w/ 11GB RAM and a 1GB Radeon 7850…!)

    • hiki
    • 9 months ago

    Other than in notebooks, I bought my last CPU 10 years ago. Before that, I was forced to upgrade every 2 or 3 years. New CPUs can't give much in terms of single-threaded performance, and due to Amdahl's law, 8 cores isn't much of an improvement over 4 cores.

    So why should I buy it, AMD? Only 8 cores in 2019?
    At least will you be positioning it as entry-level?

      • Waco
      • 9 months ago

      I think you’d be pretty surprised if you compared the single threaded performance of CPUs from 10 years ago to today…

        • Anonymous Coward
        • 9 months ago

        Yeah, no kidding. I actually use old CPUs for gaming boxes for small humans, and it didn't take any 10 years to make the old stuff look silly. I have a sweet 3.16 GHz, 6MB-L2, 45-nm C2D that gets handily beaten in the ever-popular Fortnite by similarly clocked newer products: a quad i5 4xxx (3.2-3.6 GHz), a quad i7 2xxx [i]mobile[/i] chip (2.6-3.4 GHz), or a dual i5 4xxx [i]mobile[/i] (2.6-3.2 GHz).

        For even more shocking contrast, I have a Phenom (with the silicon TLB fix) on hand. Talk about IPC improvement. (Been looking for a good used C2Q, but it would actually give up some single-thread performance, so I'm not quite happy about it.)

        That C2D has a solid design, a big L2 cache, and a high clock, and it's clearly inferior even at one thread.

          • Krogoth
          • 9 months ago

          I got an old Q6600 B3 w/ HSF that is collecting dust. The motherboard it used is dead. It was overclocked at 3.0 GHz for most of its life. I'm willing to part with it at no cost if you are interested. Just PM me.

            • Anonymous Coward
            • 9 months ago

            A fine offer! But this goes into an old office PC I got for zero at work; the mobo won't overclock, and besides that, it would catch on fire if the CPU asked for 100+ watts. 😉 I got the C2D E8500 used for almost nothing, added a new GPU and SSD and 8GB of (used) DDR3. I have an eye on a Q9450 (45 nm, 2.66 GHz, 2x6MB L2, 1333 bus) I could get used locally for a semi-reasonable price with cash… but is that just throwing money into a hole? Hmm. Decisions, decisions.

            It's fun to go find the perfect new hardware, but it might be even more fun to find interesting old hardware that I never would have bought at full price.

      • freebird
      • 9 months ago

      Not sure why you think they'll only have 8 cores in 2019.

      Did you see the CPU she held up, and in the pics above? The CPU die is offset from the I/O die, with enough room for a 2nd 7-nm CPU chip to be placed…

      They wouldn't have built and put it together that way if they didn't plan to put a 2nd chip there, so you can also assume there are enough connections in the I/O chip to hook up a 2nd CPU die in this package and run PCIe 4.0. Guess we'll have to wait to find out what other goodies are in the I/O package (L4 cache, or maybe a little ARM core that could lead to the low-power-messaging, instant-boot Windoze system of the future).

    • ronch
    • 9 months ago

    Wait, ECS is still alive?? :-O

      • Krogoth
      • 9 months ago

      Yeah, they are still kicking. They are mostly in the EU and Eastern Asian markets. Their presence in the NA market has been declining though. MSI and ASRock took their spot in NA.

        • MOSFET
        • 9 months ago

        They have the Intel NUC motherboard contract, and their name is all over the components inside (from a software perspective, not a flashlight perspective).

    • Gadoran
    • 9 months ago

    122 mm² + 81 mm²… 203 mm² of silicon without even a crap IGP, to assemble a PC for compute only. Absurd.

    About footprint: this is the first time a node shift does not allow any silicon space savings. AMD's profitability will get worse.

    About performance: my bet is that the frequency is Zen+-like (4.3 GHz turbo), but some 10% boost in IPC and a massive L4 hidden in the 14-nm chip do the job in CB. Who knows in other workloads.

    Power consumption is the most unknown thing. Obviously 7 nm does not cut power draw by 50% on a high-power CPU, so my idea is that they used a very light motherboard for OEM clients, while the Intel CPU sat in a power-hungry motherboard built for overclocking.
    Again, a smart AMD attempt to look better. IDIOTIC. There are actually a good number of frugal motherboards available that can run a 9900K at a far lower system power draw; they just don't allow overclocking.

    Not impressed; better to wait for Ice Lake SKUs.

      • Pwnstar
      • 9 months ago

      Power isn't unknown. The consumption for this chip was right there in the demo.

        • Klimax
        • 9 months ago

        Unknown conditions, and under AMD's control.

        • Gadoran
        • 9 months ago

        Sorry, but you forgot the huge variation within these processes.
        AMD did a simple thing: it showed the best (and pretty rare) silicon available, from the lower part of the Gaussian, and Lisa still forgot to tell journalists that this CPU will be rated at 105 W like current Ryzens.
        It was a good but unfair show; even the worst geek around knows that reality is different and 7 nm does not halve the power of high-power silicon. Only low-power, low-clocked, low-temperature SoCs can achieve this luxury. Leakage will exist forever, unfortunately.

      • Chrispy_
      • 9 months ago

      7-nm production capacity is constrained, and it's more expensive than the established 14-nm process.

      Profitability should be good, because AMD is only paying for a small core die on the expensive new process, while the less power-sensitive and less performance-critical jobs are handled by older silicon at very low cost with excellent yields.

        • Srsly_Bro
        • 9 months ago

        The news has reported the opposite with cell phone demand decreasing.

        • Gadoran
        • 9 months ago

        Are you sure that memory traffic and control is the least demanding job on a chip?? Remember how hot southbridges were years ago?

          • Chrispy_
          • 9 months ago

          Yes, I’m sure.

          The northbridge was integrated into the CPU package last decade and it’s not a complex or demanding task.

          Remember that there are dual-channel, high-frequency DDR4 controllers built into both AMD and Intel processors which operate in a 15 W envelope. That's just 15 W for four high-performance CPU cores, an integrated graphics core that dwarfs the CPU cores, and the dual-channel memory controller.

          So the IMC is obviously only a tiny fraction of that 15 W, but that's not even the best part. The best part is that Goldmont Plus Pentium Silver (like the Pentium N5000) does all that in just 4.8 W: four cores, 18 graphics units, oh, and full dual-channel DDR4-2400 support.

          So we know, for an absolute, exists-in-the-real-world fact, that a full-scale, dual-channel, 2400 MHz DDR4 memory controller can operate at full speed as a lower-priority part of a 4.8 W processor. The CPU takes priority when it comes to power usage, the iGP then gets what's left over, and the memory controller (IMC) gets whatever remains.

          Maybe it uses half a watt or something like that, but it's definitely not a problem in terms of heat or power draw, and not something that cannot possibly run at its best on an older process. Bay Trail-T Atom processors had a dual-channel DDR3 IMC as just a small part of a whole-package limit of 2.2 W, and those were fabricated on the ancient 22-nm process that Ivy Bridge used to be made on.

          For a memory controller (IMC) and I/O hub (PCH):

          The process node isn't important; 22 nm is obviously plenty good enough for Intel.
          The power draw isn't important; it's negligible by all accounts, regardless of process node.
          The speed of the memory controller doesn't seem limited by either process node or power.
          The AMD uncore doesn't even run at the same speed as the cores in a current single-die design anyway.

            • Gadoran
            • 9 months ago

            Ummm, I believe this depends on the workload and on the amount of memory traffic; on a desktop CPU we can assume the DRAM cells get hit harder than in a laptop.
            Anyway, 120 mm² of silicon means the 14-nm chip contains a lot of other things, not only two tiny IMCs. We have to wait for AMD to tell us something more. My bet, among other hardware like the PCIe controller, is a large cache to mitigate the FSB-like approach shown in prime time on Rome.
            Surely they wasted too much silicon to have something to show a little before Intel. On Rome the impact is highly mitigated by the core count; on Ryzen it's a craziness that makes little sense.

            The new Ryzen should be between 120 mm² and 130 mm², not over 200 mm². These things have an impact in the long run.

    • chuckula
    • 9 months ago

    Looking more carefully at that I/O chiplet, I know there’s the whole screed of “IO DOESN’T SCALE” but I’m not really buying that line.

    Anand has some die shots with measurements here:
    https://www.anandtech.com/show/13829/amd-ryzen-3rd-generation-zen-2-pcie-4-eight-core

    The I/O chiplet is estimated to be 122 mm². That's almost exactly the same size as a 14-nm quad-core Kaby Lake die that includes an IGP *and* pretty much the same amount of I/O as this chip, including dual-channel RAM and effectively the same number of PCI Express lanes, even if RyZen 2 upgrades those to PCIe 4.0. So by the whole "OMG I/O CAN'T SCALE" line, the 4 cores and the integrated graphics in Kaby Lake should literally take up zero die space, since the rest must be devoted to I/O that can't scale. I'm not buying it.

    OK, since it's AMD koolaid day, let's take this a step further: AMD is always right and I/O can't scale. Obviously that primitive failure 9900K that Lisa Su EPYCALLY destroyed yesterday must devote 122 mm² of die space to I/O, since *I/O CAN'T SCALE*. And we know that the 9900K die is about 175 mm². So that means Intel's primitive and failed joke of a 14-nm process fits 8 cores and an IGP into about 53 mm² of die space... right? Meaning that the 8 cores of MIRACLE RYZEN in that 81 mm² chiplet on TSMC's magic 7-nm process are somehow massively bloated compared to Intel's failed Skylake architecture? But I guess logic is the first victim of koolaid.

    [To the usual suspects: instead of reflexively downthumbing because I dared to use facts and my brain instead of spewing your AMD IS ALWAYS PERFECT propaganda line, I'd appreciate a rational response. I seem to recall that GloFo's 14-nm process was so superior that a RyZen core + its cache was only half the size of a 14-nm Kaby Lake core, so how is it that Intel's 14-nm process is now so insanely superior that its cores must literally take up zero die space, because "I/O CAN'T SCALE" is the new line we are being fed?]

      • Waco
      • 9 months ago

      Would it surprise you if the IO die is the same one from Epyc? Or that laying it out without regard for space usage is far more efficient?

      EDIT: To be clear, I know it's not the IO die from Epyc. I was just getting chuckula all spun up. 🙂

        • chuckula
        • 9 months ago

        It would not only surprise me, it would be physically impossible, since I already know the I/O die in Epyc is in the ballpark of 450 mm². You can look up the photos from when Lisa Su held it up; people did the math. This I/O chip is not even remotely similar to the one used in Epyc.

          • Waco
          • 9 months ago

          I am just playing devil's advocate. I know the answer but have NDAs restricting me. 😛

            • chuckula
            • 9 months ago

            If the answer is that the L3 cache is in the I/O chip, then that's just another sign that the 7nm MIRACLE process isn't all it's cracked up to be, since nobody ever said that SRAM cells don't scale.

            I doubt the answer is that there's integrated graphics in there, either, since it would likely be inferior to existing Raven Ridge in practice. Not to mention that the NDAs you are under are related to Epyc, and Epyc sure doesn't have integrated graphics.

            • Waco
            • 9 months ago

            Let's just say I think you're being overly pessimistic. 🙂

            • chuckula
            • 9 months ago

            What did they put in there? Some magical FPGA that’s not connected to anything? [Lisa Su conveniently said “FPGA” and mentioned wireless several times… all things that are not part of AMD’s product portfolio at all.]

            I think it’s the L3 since the early rumors are that Epyc doubles the L3 cache size and it’s likely not sitting in the chiplets.

            Even her hyped-up Epyc slide wasn’t blowing me away. I’d like to see Epyc 2 destroy Cascade Lake-AP in a properly configured Gromacs run with AVX-512 kernels.

            • Waco
            • 9 months ago

            I think you’ll find out closer to the end of Q2.

            • chuckula
            • 9 months ago

            It better be PONIES dammit!

            Oh, and 5G because… .OMG 5G!

            #EpycInMyIphoneHobags

            • Mr Bill
            • 9 months ago

            [url=https://www.youtube.com/watch?v=olNJ5YQ0vZ0]Clydesdale[/url]

            • Redocbew
            • 9 months ago

            Next codename: [url=https://www.google.com/maps/place/Clydesdale+Lake,+North+Kawartha,+ON+K0L+1A0,+Canada/data=!4m2!3m1!1s0x4cd48dfaee6671d9:0x8038b82260127849?ved=2ahUKEwjslJz41-bfAhUUJTQIHVeaBdcQ8gEwCnoECAYQBA]Clydesdale Lake[/url]

            • freebird
            • 9 months ago

            Not everyone's world revolves around AVX-512 like yours does, Chuckula…

        • Goty
        • 9 months ago

        The Epyc I/O chiplet is something over 300 mm^2 I think, so definitely not the same one.

        • freebird
        • 9 months ago

        The AMD I/O die is kinda like Obamacare, AKA the Affordable Care Act, when Nancy Pelosi said "We have to pass the bill before you can see what is in it…"

        So AMD has to make and release a Ryzen 3 before they can show us what's in it.

        p.s. Did anyone else wonder WHY the 7-nm processor is offset from the I/O die?
        I'm sure it's not because it leaves room for a 2nd processor die to be placed there and hooked up to the I/O die, right?

          • chuckula
          • 9 months ago

          So the 14nm die will just make something you already were paying for more expensive? Thanks AMD!

          As for the blank spot, that's old news. Nobody is going to be shocked when "miraculously" another chiplet shows up there, although it's very telling that AMD's big dog-n-pony show couldn't produce even an engineering sample of a couple of chiplets for a 16-core part or – far more importantly – a graphics chiplet.

            • thx1138r
            • 9 months ago

            AMD and Intel have both shown that producing reasonably large 7/10-nm dies is rather difficult, so a smaller die that you can make is going to be cheaper than a larger die that you can't. And how do you know that AMD has not produced a dual-chiplet design already; do you have some inside information?

            And the dedicated graphics chiplet is an idea that doesn't hold water, at least in the AM4 form factor. Even mid-range GPUs need a lot more bandwidth than two channels of DDR4 can provide, so a bundled 7-nm GPU would be quite small and probably not worth designing on an advanced process. What would make more sense is a combined CPU/GPU chiplet, say 4 Zen 2 cores and a moderate GPU; a single chiplet could then be used in low-power/cheaper/laptop chips, and a dual-chiplet design could be used in more powerful desktop chips.

        • freebird
        • 9 months ago

        I’m sure AMD isn’t building it to be a “one & done” IO chip either… it is planning for the multi-chip future… this is just the beginning.

          • Waco
          • 9 months ago

          I would assume so. It’d be pretty clever to have the same IO chip in the next Threadripper (same IO die, just more CPU chiplets). It means dark silicon on the desktop, but perhaps only building 3 dies for your ENTIRE product line would be worth it.

      • Goty
      • 9 months ago

      Doesn't Zen have 24 PCIe lanes from the CPU while Kaby Lake only offers 16? I'm not sure that's close enough to call them "pretty much the same." Zen also has stuff like the dual on-die 10Gb NIC controllers that could be shuffled off into the I/O die (or left off entirely, since this doesn't appear to be the same I/O die as the Epyc one). If we choose to entertain some of the wilder rumors/predictions out there, it's also possible that the chip includes memory controllers for other types of memory than just DDR4.

      *EDIT* Ooh, chucky quick with the downvotes today for someone who doesn't agree entirely with him and offers reasonable conversation!

        • chuckula
        • 9 months ago

        Kaby Lake has 20 effective PCIe lanes with 4 going to the southbridge (and RyZen does the same thing).

        4 PCIe lanes is a rounding error.

        For all intents and purposes, this chip has the same I/O as any quad-core Kaby Lake.
        And if I/O *can’t scale* according to the new koolaid we are all supposed to drink, then how exactly can those incompetent morons at Intel fit all that I/O into a 14nm part that inconveniently also has CPU cores that are massively larger than the superior RyZen?

          • Goty
          • 9 months ago

          Yep, you’re right, so obviously the PCI-E isn’t the whole answer then. That only addresses one of the components of the die, though. There are the aforementioned NICs, rumored other memory controllers, a potential last level cache, etc. Since we don’t know exactly what it contains, casting aspersions based solely on its die size is pretty irresponsible.

          Additionally, you are misrepresenting one of the stated reasons for shuttling all of the I/O off into its own die. You keep saying "CAN'T SCALE" when the reason given has always been that I/O "doesn't scale [i]as well[/i]." I know nuance isn't exactly your thing, but it's an important distinction.

          There's also the idea that it could be a process-specific thing. Maybe there are issues at 7 nm in general, or just at TSMC, that required this approach. Again, we don't know. The only source we have on it is AMD (the only actual authority on the issue), and you don't see a lot of people who are knowledgeable about semiconductor design coming out of the woodwork saying, "That doesn't make sense."

          And last but not least, there's always the wafer supply agreement with GloFo to consider!

          • jts888
          • 9 months ago

          Zeppelin has 32 SERDES/PHY lanes (which can bifurcate down to 2*(4+4+4+2+2) IIRC), which are allocated as follows for Ryzen: 16 for a GPU slot, 4 for another peripheral slot, 4 for an NVMe m.2 slot, 4 for the SB, and 4 more unexposed to customers but probably reserved for rear-panel-side controllers. Kaby Lake may similarly have a couple spare lanes for all I know though. However, you should expect SERDES to be rather larger for PCIe 4.0, since that needs to handle ~16.5 Gbps vs. the ~12.5 Gbps that Zeppelin lanes were rated for.

          The other big difference muddying an apples-to-apples comparison is that the Ryzen 3 IO die has a full crossbar (between IOMMU, MCs, and now chiplets instead of direct CCXs) for IF compared to pre-Skylake-X Intel rings, albeit now with substantially fewer ports. OTOH, it could be fatter and/or higher clocked to help compensate for DRAM read latency penalties from hopping off-die.

          • kuraegomon
          • 9 months ago

          Good god, man, you're flailing around with your straw-man game even more than usual. Let's just simmer down and wait for the technical briefing on why the I/O chip has the dimensions it does. Based on AMD's Ryzen-and-newer delivery quality, the reasons are probably mostly, er, reasonable.

          Most of the regulars around here are entirely capable of recognizing that AMD is still catching up to Intel in some ways. We'll see if that statement still holds once the Zen 2 benchmarks are in. Actually, please don't let that last comment drive you into a frenzy – I'm getting a little worried about you 😛

            • freebird
            • 9 months ago

            The lack of AMD newz lately has got Count Chuckula in a tizzy; he had to whine about something…

        • thx1138r
        • 9 months ago

        (replied to wrong post, fixed)

      • thx1138r
      • 9 months ago

      I kinda agree. Two things first, though: an 8-core Zen chip is going to need a lot more interconnect I/O than a quad-core Kaby Lake chip, and at this point who knows exactly how many extra transistors will be required to support PCIe 4.0 over and above PCIe 3.0.

      Aside from those two issues and the other ones mentioned, that I/O chip still seems a bit on the large side. For me that leaves a few possibilities, e.g.:
      1. The I/O chip includes some cache memory (cache sizes/levels have not yet been disclosed) or,
      2. The I/O chip was actually designed to interface two 8-core chiplets

        • Hattig
        • 9 months ago

        Indeed, the layout of the Ryzen 3000 package leaves room for another chiplet, and the package also has visible bumps where that die would sit.

        So that’s two IFOPs on the I/O die for the two potential chiplets (CPU and CPU|GPU).

        It’s still a large I/O die however. I suspect some form of cache – be it normal L4 or eSRAM/eDRAM.

        Alternatively it could include a small GPU (Polaris 12 is 100mm^2 for 8CUs + 128bit GDDR5, so a 3CU shared memory GPU should be even smaller). However I don’t think this is likely.

        Otherwise, it’s some other logic. Memory-wise, a DDR5 controller is possible for future AM5 SKUs. Hopefully it supports LPDDR4 and LPDDR5 as well, for mobile systems. PCIe4 will be bigger. Maybe there are more channels too, again for AM5.

        • blastdoor
        • 9 months ago

        It’s funny…. I think many folks look at that I/O chip and think “boy, that’s big!”

        The divergence happens when we go to figure out why. In Chuckula’s mind, it proves that GloFo’s 14nm process sucks. In the minds of others (including me) it raises the question of whether there’s a lot of cache in there.

        Time will tell!

          • chuckula
          • 9 months ago

          [quote]In the minds of others (including me) it raises the question of whether there's a lot of cache in there.[/quote]

          Sorry, I had that thought long before you did.

          Additionally, for all the wonders of having a huge hunk of 14-nm silicon in what is supposed to be a "new" 7-nm miracle chip, it makes me question the entire supposition that AMD is so far ahead of Intel because Intel "can't make 10nm". I mean, apparently AMD can only kinda make 7nm too, when their miracle chips don't use it very much.

            • blastdoor
            • 9 months ago

            AMD doesn’t make 7nm chips, TSMC does.

            I don’t think AMD’s chiplet strategy implies a problem with TSMC yields. It implies AMD can’t afford to design a bunch of 7nm chips and so chose a design where they only need one 7nm chip that gets combined with different IO chips for different products.

            • Waco
            • 9 months ago

            Just so chuckula can’t complain too much – Intel is also going down the route of chiplet designs for fab efficiency.

            • Goty
            • 9 months ago

            Yeah, but according to him, AMD is only doing it because they CAN’T make the larger chips and Intel is just doing it because it’s SMART.

            • Waco
            • 9 months ago

            Yep. AMD knew years ago about a process that hadn’t been invented yet, wasn’t owned by them, and is clearly already yielding well enough to completely change their chip design strategy.

            Ha. 🙂

            • freebird
            • 9 months ago

            So you don't think this design is more process-efficient, more cost-effective, and therefore more PROFIT-EFFICIENT?

        • jihadjoe
        • 9 months ago

        I’m guessing cache. With all IO going through IF to the IO chip there’s going to be an across-the-board latency increase for Zen 2, so they must be doing something to mitigate it.

      • NTMBK
      • 9 months ago

      You forgot to count the IO required to connect two chiplets via infinity fabric.

      EDIT: And the logic for routing between the two of them.

      • Krogoth
      • 9 months ago

      AMD couldn't tape out a 7-nm CPU design with on-die I/O logic without sub-par yields, and they needed to move quickly to take advantage of Intel's 10-nm woes. They took the sensible approach, given their limited time and resources, to get to what we see today.

      Zen 3 will likely see the return of on-die I/O logic, assuming TSMC's 7-nm process matures and yields improve.

      • tay
      • 9 months ago

      Can you, instead of mocking the AMD fans, tell us regular folk what is going on here (in your estimation, obv)? I'm not taking anything you say as gospel 😀

        • jarder
        • 9 months ago

        Indeed, when Chuck makes so many repeatedly unfunny anti-AMD posts, it's hard to take his request for discussion with AMD fans seriously.

        • hiki
        • 9 months ago

        It is already known that the I/O is 100% older technology and sits in a separate chiplet, so the CPU chiplet gets cheaper, yields better, and is the only part on the new technology.

      • astrotech66
      • 9 months ago

      I didn’t downvote you for using facts, I downvoted you because of your incredibly condescending attitude toward Lisa Su and anyone else who may like AMD. Once I see multiple uses of all caps and repeated phrases that are supposed to be funny or clever (but aren’t) then I quit reading. I don’t read every post in the comments, but I haven’t seen anyone else pushing the “AMD koolaid” in any manner like the one you’re using to defend Intel.

        • chuckula
        • 9 months ago

        Tell ya what: when Lisa Su starts saying that a new AMD product is better because it can beat an old [b]AMD[/b] product, instead of taking shots at Intel, I might be more impressed.

        To note the differences in attitude and professionalism -- which is why Intel is the leader and not AMD -- Intel's Ice Lake demos focused on how much better Ice Lake is in image-search inferencing workloads compared to [b]other Intel chips[/b]. Aside from the fact that those workloads are about a trillion times more interesting than yet another Cinebench retread, you didn't see Intel up there insulting their competitors, comparing 10-nm Intel products to 14-nm AMD products, and acting like they deserved a trophy or something.

          • jarder
          • 9 months ago

          LOL, that's the most pathetic complaint about AMD I've heard today, and I've read some of your other comments. Here's an exercise: imagine your level of indignation if AMD had only compared a new chip with one of their old ones.

          Oh, and if Intel were so professional, how come they lost the silicon process technology lead? That's the difference between desperately trying to look professional and actually doing your job in a professional manner.

        • Goty
        • 9 months ago

        Attempting to have intelligent conversations with chuckula is my sport.

          • chuckula
          • 9 months ago

          Considering all the insults you've hurled at Intel over their "failed" 10-nm process, and all the sniping you do at Nvidia because they have the gall to make a profit,* I could say the same about you 10 times over.

          I'd still love to hear you tell us again why AVX-512 is so useless in the enterprise.
          Not to mention how idiotic it is to use Optane in a database server… because we all know enterprises buy high-end servers for Cinebench runs, and only an idiot would try to run a database on one.

          On top of all that, where's the awe-inspired worship of the fact that Intel will produce the world's first 10-nm "APU" this year while AMD doesn't have one?

          * I can tell you hate Intel & Nvidia a lot more than you pretend to like AMD, because if you really liked AMD, you'd be jumping for joy when Nvidia announces an expensive product and asking them to jack prices up further.

            • Goty
            • 9 months ago

            You must have me confused with somebody else as I don’t believe I’ve done ANY of those things. This is a good bit of information, though. Now that I know you don’t live in the same reality as the rest of us, I can tailor my attempts to converse with you to suit!

            • cegras
            • 9 months ago

            Never forget:

            Teach a man to fish, and he can fish forever:

            https://techreport.com/discussion/30540/amd-gives-us-our-first-real-moment-of-zen?post=996584
            https://techreport.com/news/30539/intel-announces-next-gen-knights-mill-xeon-phi-accelerator?post=996782
            https://techreport.com/news/30394/3dmark-time-spy-benchmark-puts-directx-12-to-the-test?post=990429
            https://techreport.com/discussion/30587/intel-kaby-lake-cpus-revealed?post=998808

            [quote]Look you sociopathic D-bag with your "strawman" whines to hide your own history of lying, here's a reminder of how you insulted Haswell with a completely disingenuous and dishonest rant that only got upthumbed because your crew was circling the wagons in desperation:
            https://techreport.com/discussion/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed?post=735330

            Guess what sunshine? In 2017 AMD is flat out copying that chip* on which you heaped so much scorn in 2013. In literally every way AMD is basically waving the flag and saying that Haswell was so good that they hired Jim Keller to dump everything they've done for the last decade and just clone Intel to the best of their abilities.

            Riddle me this you little shill: You know how scared of Zen Intel is? They're so scared that according to your own little strawman Kaby Lake is not any better than the same chips Intel has been selling since 2011! That's how much AMD keeps Intel's design team up at night that Intel basically saw no need to do anything in response to Zen even though we've been hearing hype about the stupid chip since 2012. How does it feel to be that irrelevant?

            * OK, in fairness the un-core part of Zen is more like a 2009 Nehalem instead of a 2013 era Haswell.[/quote]

            • chuckula
            • 9 months ago

            Guess what idiot: In 2019 AMD is still copying Haswell.

            It’s called finally implementing the full AVX2 suite that I’ve been running on my desktop for over 5.5 years.

            So you literally show your own technical ignorance while spending half the day searching TR just to "embarrass" me with statements that I'm proud I made: Haswell didn't deserve to be insulted by shills in 2013, and given that AMD is literally still in the process of copying its major architectural features [b]six years later[/b], there should be a shrine to it at AMD HQ.

            Tell ya what: [b]SIX YEARS FROM NOW[/b] I invite you to point out some "miraculous" feature in Zen 2 that Intel took 6 years to finally implement. I'm sure they'll totally have 14-nm side-chips that are larger than Kaby Lake doing things like accessing RAM, because we all know that can't possibly work in an integrated package.

            • Srsly_Bro
            • 9 months ago

            Excellent work. They had to go back in history to find something to fight you on and still failed.

            • freebird
            • 9 months ago

            Hmmm, you mean something like the fix for MELTDOWN, which spans over 10 years' worth of Intel CPUs?

            It doesn't exist in ANY AMD CPUs… so I could say it took Intel over 10 years to fix something AMD never broke in the 1st place…

            Pretty sad…
            🙁

            • cegras
            • 9 months ago

            Bruh, it’s a post pointing out your habit of inconsistent and extremely emotional tirades. You gotta stop. You need to be gagged.

          • JustAnEngineer
          • 9 months ago

          [quote]Never wrestle with a pig. You both get muddy... and the pig [i]likes it[/i].[/quote]

          I feel pity for someone who dedicates so much of their life to spreading hate for a particular company.

      • ermo
      • 9 months ago

      [quote="chuckula"]I'm not buying it.[/quote]

      Of course you aren't.

      • Mr Bill
      • 9 months ago

      Maybe this needs to be done to fully enable the infinity fabric model.

      • DeadOfKnight
      • 9 months ago

      Who said it couldn’t scale? It just doesn’t benefit as much from scaling and it’s much cheaper to yield a bunch of chiplets per wafer.

        • Krogoth
        • 9 months ago

        It is easier to fab on a fairly immature process. It is not necessarily cheaper overall, because you have to spend more on interposers and traces on the CPU package for a separate I/O chip.

      • wierdo
      • 9 months ago

      Here’s AMD’s explanation followed by AdoredTV’s deep dive into the topic:
      https://www.youtube.com/watch?v=21xK8Ow7PT8&feature=youtu.be&t=09m45s

      Maybe it'll help explain the pros and cons of this design decision better. Later videos dive into costs, yields, etc.; feel free to pull those up if more detailed analysis is of interest.

      The point is that sometimes older processes are better for producing certain parts when all things are considered, and this may be one of those cases, according to some experts.

        • chuckula
        • 9 months ago

        AdoredTV: the exact same guy who promised us those 16-core $399 RyZen 2s launching this month!

          • wierdo
          • 9 months ago

          Launching end of this "year," I believe? I'd have to re-watch the video to remember, but he has a detailed, well-researched roadmap video. It included the models launched this week and said they would be first to launch, so it was pretty spot-on.

          Feel free to watch his listing of models and his estimates of yield trade-offs, etc.; pretty smart guy.

            • chuckula
            • 9 months ago

            I’d rather watch reality TV. At least they know they are lying.

            And he specifically said that we’d only have to wait until May for the “high end” 16 core 5Ghz RyZen while the rest of the line including the “regular” 16 core part was already about to ship.
            Funny how “end of the year” morphed into pre-orders with specific model numbers and prices on a sketchy Russian website.

            But I’m sure he’s on the up and up.

            • Goty
            • 9 months ago

            He reports and makes predictions based on rumors. Shockingly, he’s wrong on occasion, and any analysis should be viewed through that lens.

            • freebird
            • 9 months ago

            I watched the video and he didn’t mention when they would “ship”. He used the word “announced” and I’m pretty sure he stated the 16-core would be “announced” around May of 2019. All of which sounds reasonable, although no specific models were announced at CES. I was hoping for an April-May launch of Ryzen 3, but I always figured resources would be put towards getting EPYC2 out before Ryzen 3 for better ROI.

            • wierdo
            • 9 months ago

            It’s up to you to disagree with him and the HardOCP staff who are backing his speculation up based on their own sources.

            But anyway, he just came out with a new update; he thinks AMD is hiding something up their sleeve:
            [url<]https://www.youtube.com/watch?v=g39dpcdzTvk[/url<]

            He speculates that Lisa decided to go with a conservative same-core-count comparison, using their midrange CPU vs. Intel's top CPU. With that in mind, the results were comparable while consuming half the power, and that empty spot on the package is an obvious clue as to what they may be up to.

            • chuckula
            • 9 months ago

            Oh yeah AMD is SO secretive. Literally NOBODY thinks they could slap another chiplet in there!
            Seriously, all those Morans at Intel are in the dark here!
            Raj has no clue!
            Jim Keller obviously can’t count past the number 8 so he’s clueless!

            It’s a magical secret that only the in-crowd at AMD knows! KEEP IT A SECRET!

            On a note that actually matters in the real world: unless AMD execs are lying to the press à la “overclocker’s dream,” they also said that there’s no GPU chiplet coming to AM4, so it looks like the future isn’t fusion after all.

          • anotherengineer
          • 9 months ago

          I thought that was you posting that misinformation 🙂

            • chuckula
            • 9 months ago

            I’m AdoredTV?

            But you just said I was Jen-Hsun in the Radeon thread!

            AdoredTV *IS* Jen-Hsun… CONFIRMED

      • kuttan
      • 9 months ago

      Ranting doesn’t work the way you expect it to. You can neither stop AMD from releasing Zen 2 nor stop people from buying it. [b<]This kind of sponsored shilling no longer works.[/b<] You are wasting time and energy.

        • chuckula
        • 9 months ago

        Have you *ever* had an intelligent post on this website?

        Got any technical response to my valid points above? Or are you just too busy saying AMD GOOD EVERYONE ELSE EVIL while you rock back and forth in the corner?

          • jarder
          • 9 months ago

          Pull the other one, a vitriolic tirade like that is not a request for serious discussion, it’s a cry for help, please get some.

    • ptsant
    • 9 months ago

    Sounds like a decent drop-in upgrade for my 1700X.

    • blastdoor
    • 9 months ago

    Historically, on the few occasions that AMD had a product competitive with Intel, AMD was capacity-constrained. That might not be the case now. With iPhone sales disappointing, TSMC might have all the capacity that AMD could hope for. Go green!

      • plonk420
      • 9 months ago

      you mean red?

        • blastdoor
        • 9 months ago

        Well…. prior to buying ATI, AMD was also green.

          • Mr Bill
          • 9 months ago

          Red…Green, I think I see how we can use duct tape to attach a second compute chip…

            • K-L-Waster
            • 9 months ago

            Please tell me Harold isn’t writing the firmware….

    • Unknown-Error
    • 9 months ago

    Very interesting stuff, AMD. I’ll wait for TR reviews, but if those power numbers hold true, then… wow! What a turnaround.

    I am wondering how many top companies are vying for Dr. Lisa Su.

      • K-L-Waster
      • 9 months ago

      [quote<]I am wondering how many top companies are vying for Dr. Lisa Su.[/quote<] Do you think one of them might be in Santa Clara?

    • gerryg
    • 9 months ago

    Looks like a good solid chip. Looking forward to real-world release reviews and pricing.

    Wondering if the new AM4 chipsets will offer anything special beyond PCIe 4.0. Maybe 5G? Built-in StoreMI support?

      • ronch
      • 9 months ago

      Yeah it’s a solid chip. Never saw a liquid chip.

    • ronch
    • 9 months ago

    Double the floating point performance per core compared to existing Ryzen. Good job, AMD. Not that I’ll actually need it but I appreciate these FPUs being there for when the math gets tough.

    Now I know what to gift myself next Christmas.

      • MOSFET
      • 9 months ago

      FX-8350 to upgrade the 8320?

        • ronch
        • 9 months ago

        My FX-8350 has actually been adequate for my needs for over 6 years now. I just got an Acer Nitro 5 with a Ryzen 2500U today, though. My routine these days calls for a laptop that can actually play games. My desktop may soon go Ryzen too.

          • Anonymous Coward
          • 9 months ago

          I have a good pile of DDR3 RAM so I’ve actually considered buying a [i<]new[/i<] FX-6300 and motherboard. Cheap of course. Haven't seen good deals on the used i5/i7/phenom2 market recently, and the motherboards I do see are full ATX. Could do a C2Q upgrade on a box I have, but not sure. So anyway, a [i<]new[/i<] FX. Hmm.

            • freebird
            • 9 months ago

            Yeah, I have some builds to do with old Phenom X6 and X4 chips & an FX-8320 to replace some family members’ old PCs. The Phenom X6 1100T could hold a solid 4 GHz on all six cores, but it still got replaced with an FX-8320 @ 4.6 GHz. Still, either one should run rings around my mom’s E8400 dual-core @ 3 GHz.

            • Anonymous Coward
            • 9 months ago

            How much FX does it take to get more single-threaded performance than an E8400, I wonder?

            • Waco
            • 9 months ago

            Quite a bit – I remember going from a Phenom II (roughly similar IPC to a Core 2 family chip) to an 8120 on launch day. Even with the 8120 at 5 GHz, it was slower than my Phenom II at 3.8 GHz in lightly threaded tasks.

            The FX 8XXX chips were a decent step forward from the original Bulldozer chips but not *that* much.

            • freebird
            • 9 months ago

            Don’t know about that, but my FX-8320 runs rings around that E8400… the E8400’s CPU gets pegged at 100% running Windoze 7 & a current browser. It has an SSD but uses the motherboard IGP, which is probably its biggest crutch, but it was a “very tiny” ITX build with no room for a dual-slot GPU. So I’m just going to replace it.

            • ronch
            • 9 months ago

            There’s this belief that people need the latest and greatest. In my opinion, that’s just nuts. Unless something is already starting to fall apart, exhibiting issues here and there, or unable to fulfill your needs, it’s perfectly acceptable to use it for as long as you can. The world is full of electronic crap anyway.

          • ptsant
          • 9 months ago

          First impressions from the laptop?

            • ronch
            • 9 months ago

            Great build quality, looks GAMER without being too gaudy. After applying discounts to both variants it’s cheaper by $125 compared to the variant that has a Core i5-8300H. Paid $740 for the AMD. Where I live Acer sells the Nitro 5 with just 4GB of RAM, a 1TB HDD, and no SSD, so I used the cost savings to get another stick of 4GB DDR4 and a 500GB WD Blue M.2 SATA SSD. Total spend = $860. But wait… where I live Acer has this promo going on where they’re giving away a G-Shock watch with this thing, and upon checking prices the watch costs about $90. So if you factor in the G-Shock, it’s like I paid $650 for this laptop + $120 in upgrades. Not too shabby, I think.

            The Ryzen 2500U may not be as fast as the 8300H (it’s about 20% slower, I think), but it’s a 15 W chip while the i5 is a 45 W chip, so it’s quite amazing.

            Note – prices are approximate since I’m not in the United States and I didn’t pay in USD.

    • Wirko
    • 9 months ago

    Wow, someone actually bought a Ryzen 3000 at retail and then delidded it! There are traces of goo!

      • K-L-Waster
      • 9 months ago

      Dr. Su probably got the employee discount.

        • blastdoor
        • 9 months ago

        Or maybe she knows somebody in the supply chain.

    • Voldenuit
    • 9 months ago

    “Summer” sounds a lot better, psychologically speaking, than “Q3”.

    Summer: “hey, only half the year is over!”

    Q3: “WTF, 3/4 of the year is gone?” (Granted, Q3 can technically mean the start of the second half of the year; I’m just pointing out the instinctive reaction to the phrase.)

      • chuckula
      • 9 months ago

      September 20th: Summer launch CONFIRMED!

        • K-L-Waster
        • 9 months ago

        Paper, silicon, or Falcon Heavy?

          • Aranarth
          • 9 months ago

          B.F.R.!!! (accept no substitute!) 😀

            • kuraegomon
            • 9 months ago

            Does anyone else think Nvidia took a page straight from the Musk playbook when they named their Big [i<][b<]Format[/b<][/i<] Game Displays 😉 - i.e. "OMG, you [i<]can[/i<] get away with it! Yaaaasssss"

            • RAGEPRO
            • 9 months ago

            I mean, it’s all Doom references with the Big F-ing gun. Of course, there’s also this: [url<]https://www.magnumresearch.com/bfr-big-frame-revolver/[/url<]

    • Mr Bill
    • 9 months ago

    I am impressed.

    @ … Should have waited for a 3000!

      • just brew it!
      • 9 months ago

      At least it’ll be a drop-in upgrade.

        • Mr Bill
        • 9 months ago

        I know, right?

    • shaq_mobile
    • 9 months ago

    I’m probably just ignorant, but no word on Threadripper?

      • Chrispy_
      • 9 months ago

      Was gonna say something about yields, but I suspect it’ll come once AMD have sold enough EPYC processors to cover the highest-profit enterprise/datacenter market first.

      • Srsly_Bro
      • 9 months ago

      Nothing yet. They probably don’t want to kill current-gen sales by announcing this far in advance. The 24- and 12-core TR2 haven’t been out for that long. I’m betting the launch will be late Q3. Rumor is 32-, 48-, and 64-core versions at the highest end, with all the memory channels enabled. This is also somewhat substantiated by Dr. Su claiming AMD will have a dominant position in the HEDT segment. A 32-core Zen 2 will dominate but won’t be amazing; I think that statement lends some truth to the 64-core TR3. Holding out for the 64-core, myself.

        • shaq_mobile
        • 9 months ago

        Ah, OK. Bummer. I’m itching for a TR build for my UE4 stuff. My 1700 is great, but as I get closer to releases I do a lot more builds/compiles/packaging, which consumes a lot of time, and the TR stuff seems like a great “budget” solution. I guess I’ll have to wait until late 2019.

        Hopefully the price wars will take their toll on the GPUs by then!

          • Chrispy_
          • 9 months ago

          What exactly are you doing? The CPUs may be decent, but you’re likely to find that your OS/software becomes the bottleneck. So many things aren’t yet NUMA-aware; we had the budget for 32 1950Xs or 24 2990WXs, and after back-to-back testing, the renderfarm expansion went with the 1950X.

          Maybe five years from now, having more than 20 cores won’t be a problem. Until that issue is solved, though, AMD 16C or Intel 18C is the hard limit for a lot of things.

            • shaq_mobile
            • 9 months ago

            I’m doing mostly Unreal Engine 4 and Visual Studio. So compiling C++, shaders, baking lighting, packaging/compressing…

            Probably the heaviest or most time-consuming parts are shader compiling and baking lighting, since those directly interrupt workflow and can take a while. Both are heavily multithreaded (I actually can’t use my computer when baking lighting or packaging; shaders are hefty, but it’s usually only a 30-60 second wait). Lighting in UE4 can actually be set up for distributed rendering, which is super useful with large levels and scales VERY well with RAM and cores.

            Most of it doesn’t take forever, but when you add 10-30 seconds for small changes, 1-10 minutes for packaging, and 12-15 hours for a large level to bake lighting… it starts to mess with your workflow pretty quick!

            • freebird
            • 9 months ago

            That workload sounds EPYC! Or even EPYC2! 😀

    • fredsnotdead
    • 9 months ago

    Anyone know if Zen2 addresses Spectre/Meltdown?

      • Chrispy_
      • 9 months ago

      AMD has always been fully immune to Meltdown (a.k.a. variant 3).

      AMD *was* vulnerable to the Spectre 1 & 2 variants, but due to the architecture, all OS and microcode patches combined carry a <2% performance penalty, unlike Intel, which saw measurable drops in performance month after month over the course of several months. AMD’s architecture was never the target for Spectre or Meltdown; they were exploits built specifically around Intel’s lack of security within its speculative/predictive execution.

      I run a VMware estate with mostly Intel servers and plenty of performance headroom. Things never got desperate in the months of Intel patches, but there’s probably 25-50% more resource usage now, simply because the underlying processors have been slowed down by software patches (Haswell-E and Broadwell-E for the most part).

        • anotherengineer
        • 9 months ago

        Speaking of that

        [url<]https://support.microsoft.com/en-us/help/4090007/intel-microcode-updates[/url<]

          • Chrispy_
          • 9 months ago

          Yeah :\

          Intel really screwed up. It’s looking more and more like their performance advantage for the last decade was a giant security gamble at our expense and it’s come back to bite them. Add 10nm issues and we’re really starting to see the complacent cash-cow for what it is.

            • Klimax
            • 9 months ago

            It wasn’t just Intel; everybody went after performance at all costs (AMD, ARM, and IBM included). And the holes were definitely not easy targets; that’s why it took so long to find them.

            And blaming only Intel is stupid. Everybody was caught.

            • Chrispy_
            • 9 months ago

            Everyone was caught out by the implications of the flaws, but by far the biggest performance loss from all this is Intel hyperthreading, due to the way that so many resources and buffers are shared between two threads in a hyperthreaded pipeline. For performance reasons, Intel assumed that hardware checks and isolation weren’t necessary because these buffers couldn’t be exploited, but no design is perfect, and these buffers have been forced to leak data that ought to be protected by a privilege check.

            [quote<]Protected memory is one of the foundational concepts underlying computer security. In essence, no process on a computer should be able to access data unless it has permission to do so[/quote<]

            [url=https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html<]Full details here[/url<], but the TL;DR version is that Spectre 1 and 2 use a side-channel attack to figure out which data in cache is protected by 'knocking on the locked box' to see if it's empty or not. By rapidly rejecting secured data, the attacker can isolate specific areas of cache [i<]within the same pipeline[/i<], using overflow attacks to evict protected data into an area the attacker has access to.

            AMD's SMT implementation has more duplicated function units in hardware for each pipeline; the resources that are shared in Intel's hyperthreaded pipeline are physically separate in AMD's SMT implementation. Spectre 1 and 2 variants effectively target [b<]shared resources between two threads in the same pipeline[/b<], and Intel's exposure to Spectre 1 and 2 is much greater than AMD's because far more of the Intel pipeline is shared between two threads than in AMD's design. The performance penalties for isolating threads in HT are therefore far greater than for isolating threads in AMD's SMT, simply because the security patches prevent speculative execution in a shared part of the pipeline, thus bottlenecking prediction of each thread by up to 50%.

            Whilst it's not quite as simple as this (Intel HT != AMD SMT), the security patches don't need to prevent speculative execution for most of AMD's pipeline, because the resources aren't shared; the security model AMD followed when designing their SMT implementation physically isolated that part of the pipeline to one SEU per thread.

            Nobody is saying AMD aren't affected by Spectre, but the Spectre 1 & 2 attacks are exploits specifically built around Intel Haswell hyperthreading. AMD suffer minimal performance degradation by simply not doing SMT the way the exploit works. Perhaps someone will find an AMD-Zen exploit that Intel are immune to, but for the most part Spectre is an anti-Intel attack, and any collateral damage done to AMD, ARM, and IBM is minimal and incidental.

            [b<]Spectre is an incendiary round to Intel's Hindenburg[/b<], and if AMD/ARM/IBM's helium-filled zeppelins are punctured and leak slowly, it is only Intel's marketing department and shilling that are somehow convincing everyone that their competition are 'equally affected'.
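
            Since the whole 'knocking on the locked box' idea is really just timing, here's a toy flush+reload-style probe (a minimal sketch in plain C, x86-only, no fencing, my own simplification rather than any actual Spectre PoC) showing the one primitive all of these attacks need: telling a cache hit from a miss with a timer.

              #include <stdint.h>
              #include <stdio.h>
              #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

              static uint8_t probe[4096];

              /* Time a single access to addr; a short time means it was cached.
                 A real probe adds lfence/mfence and averages many samples. */
              static uint64_t time_access(volatile uint8_t *addr) {
                  unsigned aux;
                  uint64_t start = __rdtscp(&aux);
                  (void)*addr;                         /* the load being timed */
                  return __rdtscp(&aux) - start;
              }

              int main(void) {
                  _mm_clflush(probe);                  /* evict the line from cache */
                  uint64_t miss = time_access(probe);  /* cold access: slow */
                  uint64_t hit  = time_access(probe);  /* warm access: fast */
                  printf("miss ~%llu cycles, hit ~%llu cycles\n",
                         (unsigned long long)miss, (unsigned long long)hit);
                  return 0;
              }

            Speculative execution is what pulls secret-dependent lines into the cache in the first place; a probe like the above is merely how the attacker reads the answer back out, and that part works the same on anyone's cache.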

            • chuckula
            • 9 months ago

            Spectre isn’t the same thing as hyperthreading, because “speculative execution” (you know, “Spectre”) and hyperthreading are two different technical concepts. [Intel has sold x86 Atoms in the relatively recent past that have hyperthreading and that are Spectre-proof, since they are in-order architectures, which kind of pokes holes in your melodrama, when I can’t think of any AMD x86 parts that don’t do speculative execution.] While there have been hyperthreading attacks found, all the ones I’ve seen are generic to all forms of hyperthreading, meaning that AMD’s vaunted “simultaneous multithreading” is just as vulnerable and should be turned off. Funny how when Intel does hyperthreading for years it’s “cheating,” but suddenly when AMD invents simultaneous multithreading out of thin air it’s proof of their genius and is literally impervious to any security exploit because… uh… AMD?

            As for the rest of the melodrama: since Intel’s entire performance lead was based on “cheating” or some other nonsense, why don’t you explain how the 2700X didn’t exactly wipe the floor with the ancient design of the 9900K, which has had hardware and software mitigations put in place that obviously cripple its performance in some catastrophic way.

            For that matter, explain why the 2700X didn’t exactly wipe the floor with the 8086K *after* the 8086K received software mitigations, without even getting the extra hardware mitigations.

            • Waco
            • 9 months ago

            …SMT and HT are two very different implementations of multi-threaded cores. Intel’s version happens to be more vulnerable to side-channel attacks.

            Are you really going to try to argue against that?

            • ronch
            • 9 months ago

            ^ THIS THIS THIS

        • Klimax
        • 9 months ago

        Sorry, not exactly correct. Spectres are haunting AMD too, and some variants work better on AMD than elsewhere:
        [url<]https://arstechnica.com/gadgets/2018/11/spectre-meltdown-researchers-unveil-7-more-speculative-execution-attacks/[/url<]

          • freebird
          • 9 months ago

          “…and some are better on AMD than elsewhere:”

          Not sure what this comment means, because the only thing I read in the article that comes close to what you are implying is…

          “In particular, one of the variants of the original Spectre attacks has been shown to have greater applicability against AMD’s latest processors than previously known; likewise the attack has also been shown to be effective against ARM processors.”

          which doesn’t equate to AMD being more susceptible to this attack than other CPUs.

    • YukaKun
    • 9 months ago

    Well, the 8C/16T combo for mainstream seems to be enough with dual-channel memory, given how wide the core is. What I’d like to see them add into that space is HBM. With just 2 GB they’d make these CPUs stupidly fast with little added. Costlier, but I’d love to see that.

    Also, that space would be interesting to see filled with, say, Vega cores instead.

    The leftover space is interesting, because they clearly intend to fill it with something. We can just go wild with what they may use.

    Cheers!

      • enixenigma
      • 9 months ago

      [quote<]Also, that space would be interesting to see with, say, vega cores instead.[/quote<] You spelled Navi wrong...

      • Anonymous Coward
      • 9 months ago

      Is HBM suitable for general workloads? I’m under the impression they’d do better with a DRAM cache there, however many MB of that would fit: 128, 256 MB? Or a GPU to meet certain market segments, or a second CPU die… they have some options there.

    • Vhalidictes
    • 9 months ago

    A two-die Ryzen wouldn’t make any sense, unfortunately.

    The current 8C/16T design is close to being bottlenecked by the 2-channel memory design, and this can’t be easily changed since AMD is trying to keep the same socket / motherboard design for this generation. (That’s a good thing).

      • NTMBK
      • 9 months ago

      We already have 32 core Threadripper on 4 channels; this would be the same compute/bandwidth ratio, but without the weird NUMA asymmetry.
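
      Back-of-the-envelope, assuming DDR4-2933 on both platforms (my numbers, purely for illustration):

        2 ch × 8 B × 2933 MT/s ≈ 46.9 GB/s ÷ 16 cores ≈ 2.9 GB/s per core
        4 ch × 8 B × 2933 MT/s ≈ 93.9 GB/s ÷ 32 cores ≈ 2.9 GB/s per core

      Same bandwidth per core either way.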

        • derFunkenstein
        • 9 months ago

        Having the IO in a separate chip on the package eliminates so much weirdness with Threadripper. That’s a great point.

        • just brew it!
        • 9 months ago

        Just because we already have 32 cores on 4 channels doesn’t mean it isn’t bottlenecked. 😉

        It’s a niche product for workloads that require lots of CPU in relation to DRAM bandwidth. Not sure that niche applicability translates down to the consumer space well enough for it to be a viable product.

          • derFunkenstein
          • 9 months ago

          It may be bottlenecked, but it’ll be symmetrically bottlenecked now. 🙂

            • Waco
            • 9 months ago

            Getting rid of the weird NUMA-ness of the current Epyc/Threadripper lineup with the I/O chip is one of the many reasons I’m happy to see that made public. 🙂

      • Krogoth
      • 9 months ago

      They are going for a small Navi in a future revision. AMD just needs to close the clock-speed gap to overtake Intel’s performance edge in the desktop market, not add core count.

      • Srsly_Bro
      • 9 months ago

      Just like the 2950X did, amirite??

      [url<]https://techreport.com/review/33987/amd-ryzen-threadripper-2950x-cpu-reviewed/3[/url<]
      [url<]https://techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/4[/url<]

      You get a downvote for outright making nonsense up. Above are the links. You make up factually incorrect statements and spread misinformation. (That's not a good thing.)

      • blastdoor
      • 9 months ago

      That IO chip sure is big. I know it’s an older process, but even so, it seems like a lot of space for what they describe. Maybe there’s a lot of cache in there? Might that help mitigate effects of memory limitations?

        • Goty
        • 9 months ago

        The whole reason for the I/O die’s existence likely has something to do with that, too: I/O simply doesn’t shrink as efficiently as cache or logic.

          • Krogoth
          • 9 months ago

          Without crappy yields is more like it.

            • Waco
            • 9 months ago

            Certain features just don’t shrink much despite smaller process feature sizes. I/O drivers are among them.

            • blastdoor
            • 9 months ago

            Maybe use 28nm then and save more money?

            • Krogoth
            • 9 months ago

            It still would be cheaper to put everything on one die if it didn’t compromise yields in the process.

            It seems that TSMC’s 7-nm process isn’t mature enough yet to handle larger chips without subpar yields. In light of this, AMD took the sensible approach in the short term and let the I/O stuff remain on an older but proven process. Assuming TSMC’s 7-nm process matures and yields improve, it is quite possible that AMD will pull the I/O stuff back into the CPU die with Zen 3.
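
            To put rough numbers on the yield argument, here’s a toy Poisson defect-density model (the classic first-order approximation; the defect rate and die sizes below are illustrative guesses on my part, not TSMC data):

              /* build: cc yield.c -lm */
              #include <math.h>
              #include <stdio.h>

              /* Toy yield model: yield = exp(-die_area * defect_density).
                 All inputs are assumptions for illustration, not foundry data. */
              int main(void) {
                  double d0 = 0.5;              /* assumed defects per cm^2 */
                  double chiplet_cm2 = 0.8;     /* ~80 mm^2 compute chiplet */
                  double mono_cm2    = 2.0;     /* ~200 mm^2 monolithic die */
                  printf("chiplet yield:    ~%.0f%%\n", 100.0 * exp(-chiplet_cm2 * d0));
                  printf("monolithic yield: ~%.0f%%\n", 100.0 * exp(-mono_cm2 * d0));
                  return 0;
              }

            With those made-up inputs, the small die yields roughly 67% against roughly 37% for the big one. That’s the whole chiplet pitch in one line of math: on an immature process with a high defect rate, small dies hurt far less.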

            • Shobai
            • 9 months ago

            Some are asserting, as you do, that the split is due to insufficiencies at TSMC. Others are saying it’s a way for AMD to navigate their obligations to GloFo and GloFo’s older process tech. It would be nice to hear from someone who can say one way, the other, or what mix of the two.

            • Waco
            • 9 months ago

            I think you’re missing the other driver: one high-performance design on a cutting-edge process (to get maximal yields and engineering efficiency), while leaving the parts that are easy to fab, design, and implement on an older, more proven process. If yields are incredibly good, that means they can sell every single expensive cutting-edge die they can make and only have to cut them down for market reasons, not technical reasons.

            I would guess we’ll continue to see a couple of core designs at most, because it lets AMD focus resources on the best focal point of the moment: one CPU core chiplet (maybe 2 for cut-down designs), a few I/O hubs, a couple of GPUs, etc.

            They’ve been trying to go this way for a long time; it’s nice to see it come to fruition.

            • blastdoor
            • 9 months ago

            As someone pointed out previously, it’s hard to see how AMD could have designed these chips in response to TSMC yields. At the time they were designing these things, they likely didn’t know what TSMC’s yields were. Also, they might have thought they’d be using GloFo.

            What they likely DID know, however, is that the fixed cost of taping out a new design on 7nm was going to be very high, and they don’t have big piles of fixed-cost-paying cash lying around. So they picked an approach in which they design a single 7nm chip — the 8-core chiplet — and use that in all products. They vary the number of chiplets, and they vary the I/O chip (made on a less expensive process).

            This yield-rate story just doesn’t make sense to me. (Also, TSMC seems to be doing a fine job making very large volumes of 7nm SoCs for smartphones.)

      • Anonymous Coward
      • 9 months ago

      Since the dies will be connected at an equal distance from the RAM, there would seem to be little penalty for using two dies other than the cost. They could conceivably make their ultimate halo product by choosing two good dies, fusing off 2 or 4 cores on each, then leaving the full L3 and thermal headroom for the survivors to use.

        • Goty
        • 9 months ago

        There is the penalty of the die-to-die hop, which is a bit slower than going from CCX-to-CCX (though I forget by how much), and now I wager you’ll need two hops to do that as opposed to just one before.

      • jts888
      • 9 months ago

      AMD’s (reasonable, IMO) answer is throwing 2x as much L3 per core and potentially moving to 8c CCXs from 4c.

      Zeppelin local L3 latency is ~40 cycles vs. 70-80 (increasing with higher core clocks) for Skylake-X, so there are already a decent number of workloads that can have better perceived memory latency/throughput characteristics on Zen 1/+ than on some Intel chips, despite markedly slower DRAM latencies. More L3 doesn’t fix everything for everybody, but it will expand the set of workloads that play well on Zen 2.
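
      For anyone who wants to sanity-check latency numbers like those at home, a crude pointer-chase in C (my own sketch, not jts888’s methodology; adjacent entries share cache lines, so it understates latency a bit) measures dependent-load latency in a buffer sized to sit in L3:

        /* build: cc -O2 chase.c */
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define ENTRIES (1u << 20)   /* 8 MB of pointers: L3-sized on many chips */
        #define ITERS   100000000L

        int main(void) {
            void **buf = malloc(ENTRIES * sizeof *buf);
            size_t *idx = malloc(ENTRIES * sizeof *idx);
            for (size_t i = 0; i < ENTRIES; i++) idx[i] = i;
            /* Fisher-Yates shuffle so the prefetcher can't guess the next line */
            for (size_t i = ENTRIES - 1; i > 0; i--) {
                size_t j = (size_t)rand() % (i + 1);
                size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }
            /* Link the entries into one random cycle of pointers */
            for (size_t i = 0; i < ENTRIES; i++)
                buf[idx[i]] = &buf[idx[(i + 1) % ENTRIES]];

            void **p = &buf[idx[0]];
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < ITERS; i++)   /* each load depends on the last */
                p = (void **)*p;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("%p: ~%.1f ns per dependent load\n", (void *)p, ns / ITERS);
            return 0;
        }

      Multiply the ns figure by your core clock to convert to cycles and compare against the ~40 vs. 70-80 cycle numbers above.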

      • ronch
      • 9 months ago

      Let’s just wait for ze benchmarks.

        • K-L-Waster
        • 9 months ago

        xactly
