AMD kicks off Computex with a 12-core Ryzen 9 CPU and some Navi news

AMD is going head-to-head with two of the largest semiconductor companies in the world, and the red team apparently intends to seize the initiative against its blue and green rivals, because it is charging into battle with its standard held high. Exactly as expected, AMD CEO Lisa Su announced a whole pile of new products during her opening keynote at Computex. Dr. Su specifically spoke about the third-generation Ryzen CPUs sporting the Zen 2 architecture and the Navi-based Radeon RX 5000-series graphics cards, but the Ryzen 3000-series CPUs are coming along with the new X570 chipset, too.

AMD CEO Lisa Su holds a Ryzen 9 3900X CPU on stage at Computex.

The good Doctor's most exciting announcements were about the third-generation Ryzen desktop CPUs, also known as the Ryzen 3000 family. AMD says that the 7nm chips offer a 15% improvement in instructions per clock (IPC) over Zen+. The chips also have enormous caches; AMD doubled the L3 cache from 16MB to 32MB per eight-core die. That distinction is important, because Lisa Su confirmed that AMD can and will stick multiple eight-core CPU dice on a single CPU package by announcing the Ryzen 9 3900X. This is obviously the first CPU in the Ryzen 9 family, and it comes with 12 cores and 24 threads clocked at up to 4.6 GHz. Here, just look at the table:

3rd-gen Ryzen desktop CPUs:

Model           Cores / Threads   TDP (W)   Boost / Base clock   L3 cache   SRP (USD)
Ryzen 9 3900X   12 / 24           105       4.6 / 3.8 GHz        32+32 MB   $499
Ryzen 7 3800X   8 / 16            105       4.5 / 3.9 GHz        32 MB      $399
Ryzen 7 3700X   8 / 16            65        4.4 / 3.6 GHz        32 MB      $329
Ryzen 5 3600X   6 / 12            95        4.4 / 3.8 GHz        32 MB      $249
Ryzen 5 3600    6 / 12            65        4.2 / 3.6 GHz        32 MB      $199

AMD directly compared the Ryzen 7 3700X against the Core i7-9700K and in its own testing found that its chip beat the Intel offering by 1% in single-threaded workloads, and 30% in multi-threaded tasks—not that surprising given the latter chip's lack of Hyper-Threading, but still impressive considering the Ryzen chip's 26W TDP disadvantage. This writer was more impressed with AMD's demo showing the Ryzen 7 3800X putting up a fight against the Core i9-9900K in PUBG. Also, the Ryzen 9 3900X edged out the Core i9-9920X in a Blender render in an on-stage demo. For those who don't know, that Core i9 is an LGA 2066-socket chip with quad-channel memory.

There's more to talk about here, including the massive L3 caches and the presence of PCIe 4.0, but the real newsworthy bits here are those prices. AMD will apparently be asking just $330 for an 8-core CPU that boosts up to 4.4 GHz, and only $500 for a 12-core CPU. AMD's own Threadripper 2920X 12-core HEDT CPU goes for $629 on Newegg as I write this, and while that's a larger and more complex processor than the Ryzen 9 3900X, it surely isn't as fast—at anything. Also interesting is that these chips will slot into existing systems, even including some motherboards based around the first-generation X370 and B350 chipsets. Not all boards will work, though, so check your manufacturer's site to make sure.
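To put those prices in perspective, here's a quick back-of-the-envelope sketch using the SRPs from the table above plus the quoted Newegg price for the Threadripper 2920X. The numbers come from this article; the per-core framing is just illustrative:

```python
# Price per core, using the SRPs from the table above and the quoted
# Newegg price for the Threadripper 2920X.
lineup = {
    "Ryzen 9 3900X": (12, 499),
    "Ryzen 7 3800X": (8, 399),
    "Ryzen 7 3700X": (8, 329),
    "Ryzen 5 3600X": (6, 249),
    "Ryzen 5 3600": (6, 199),
    "Threadripper 2920X": (12, 629),  # HEDT comparison point from the text
}

for name, (cores, price) in lineup.items():
    print(f"{name}: ${price / cores:.2f} per core")
```

The 3900X works out to about $41.58 per core versus roughly $52.42 for the Threadripper 2920X, which is the gap the paragraph above is getting at.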

Speaking of motherboards, there's that new X570 chipset, too. AMD says ASRock, Asus, Colorful, Gigabyte, and MSI are bringing out "over 50 new motherboard models" based on the X570 chipset, and there's actually ample reason to upgrade. Unlike previous Socket AM4 chipsets, X570 is an in-house AMD design that offers up 16 lanes of PCIe 4.0 capability. That's right—both the Ryzen 3000 CPUs and their chipset support PCI Express 4.0; a total of 40 lanes altogether if you count the 4 that will be used to connect the CPU to the chipset. There's a bevy of boards to behold, and we'll look at some of them in detail tomorrow when the whole staff isn't on holiday.
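The lane arithmetic above can be tallied explicitly. A sketch, assuming the commonly reported AM4 split of the CPU's 24 lanes (16 for graphics, 4 for NVMe storage, 4 for the chipset link); the keynote itself only gave the totals:

```python
# PCIe 4.0 lane count for a Ryzen 3000 CPU on X570, per the figures in the text.
# The 16/4/4 split of the CPU's 24 lanes is the commonly reported AM4 layout;
# AMD's keynote only stated the totals.
cpu_lanes = {
    "graphics": 16,       # general-purpose lanes, usually wired to the x16 slot(s)
    "nvme": 4,            # typically dedicated to an M.2 slot
    "chipset_link": 4,    # CPU <-> X570 interconnect
}
chipset_lanes = 16        # X570's own PCIe 4.0 lanes

total = sum(cpu_lanes.values()) + chipset_lanes
print(total)  # 40, matching the article's tally
```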

Of course, aside from CPUs and system boards, AMD also announced new graphics parts. We fully expected AMD to trot out GCN one last time for us to gawp at, but instead, the company surprised us by announcing that Navi is based on what is purportedly a new graphics architecture called "Radeon DNA", or RDNA for short. Dr. Su claimed on stage that RDNA offers the Navi graphics chips—which she officially christened as the Radeon RX 5000 series—a 25% boost in performance-per-clock and a 50% improvement in performance-per-watt over the company's Vega architecture.

Given that AMD's CEO explicitly referenced both design and process in her statement, that 50% improvement in performance-per-watt clearly includes the move to 7nm fabrication, so the comparison is against Vega 64, not the Radeon VII. The company performed a short on-stage demo of what Scott Herkelman, AMD RTG's GM, referred to as a Radeon RX 5700, running Strange Brigade against a GeForce RTX 2070. The Navi-based Radeon pulled out around a 10% advantage in the canned benchmark, which is particularly decent given that the chip Lisa Su held in her hand was quite small indeed. That fits in neatly with earlier rumors that Navi would reside in the mid-range, at least at first.
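Those two claims can be roughly decomposed. A sketch, assuming the gains multiply (perf/watt = perf/clock x clocks/watt), which is an illustrative simplification rather than anything AMD stated:

```python
# Decomposing AMD's RDNA-vs-Vega claims from the keynote.
perf_per_clock_gain = 1.25   # "+25% performance-per-clock" (architecture)
perf_per_watt_gain = 1.50    # "+50% performance-per-watt" (architecture + 7nm)

# If perf/watt = (perf/clock) * (clocks/watt), the residual gain attributable
# to the process and physical design is:
clocks_per_watt_gain = perf_per_watt_gain / perf_per_clock_gain
print(f"{clocks_per_watt_gain:.2f}x")  # prints "1.20x"
```

In other words, under this simplification, roughly 1.2x of the efficiency claim would come from 7nm and physical design rather than the new architecture.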

We're as thirsty as you are for more details about the next Radeons. AMD says cards bearing those chips will be available in July, and that the company will hold a livestream on June 10 from E3 with more details. The third-generation Ryzen chips, on the other hand, have a set release date: AMD says all five of the new CPUs, as well as new X570-based motherboards to mount them, should be stocked on store shelves on July 7.

Comments closed
    • caconym
    • 7 months ago

    If they’re planning a 16-core part, does calling the 12-core the 3900x make sense? What would they call the 16-core since they’re out of numerals?

    Maybe I’m missing something obvious.

      • techguy
      • 7 months ago

      3950x

        • caconym
        • 7 months ago

        I guess that would have to be it, huh? Just seems weird to me because all the other models are even hundreds.

      • Anonymous Coward
      • 6 months ago

      I think it shows they don’t intend to offer 16 cores in that socket this time around.

      That would be not a bad choice in my opinion. For one thing there is the memory interface, for another, the workloads that use so many cores are probably attached to bank accounts that can handle a more expensive platform.

    • DeadOfKnight
    • 7 months ago

    Kind of odd that the 12-core version has disabled cores, yet they’re binned for the highest clock speeds. The inevitable 16-core Ryzen 3 launch might use even higher binned chips. I expect if they are doing that, the result will be an expensive product. Some suggest they don’t want to cannibalize Threadripper, but I think this 12-core will already do that. If anything they are still binning chips and saving it until Intel makes their next move. Nothing would win the mindshare of consumers like offering not only the best value, but also the under $1000 performance crown.

    • blastdoor
    • 7 months ago

    I look forward to seeing analysis/discussion of the trade offs between the giant L3 cache and the loss of an integrated memory controller. I guess the implication of the limited AMD benchmarks is that it’s a wash, but I imagine this will vary by workload.

    • K-L-Waster
    • 7 months ago

    It will be interesting to see whether the rumour about the 12c/24t chip running at 5GHz all core turns out to be:

    a) XFR working very well
    b) Manual overclocking

    or

    c) complete fiction

    If it turns out to be a) that’s a really interesting chip. If it’s b), it’s a sorta interesting chip. If it’s c), well, that’s the interwebz for ya….

    • ronch
    • 7 months ago

Going from the 3600X to the 3800X, you get very small clock speed improvements and 33% more cores for 60% more money and 10W more. Going from the 3600X to the 3900X, you get 2x the cores, the same base but higher boost clocks, twice the price, and also just 10W more. The 3900X does seem to offer more for the money than the 3800X.

    Personally though I think I’ll end up with the 3600X if I buy this gen.

      • f0d
      • 7 months ago

      im getting the 3600 asap to replace my 1700x (who knows when ill be able to get one in aus tho)
      i dont need 8 cores – 6 is plenty and ipc/clock speed matters more at the moment

      im looking forward to having fun overclocking it to see what it can do under a pretty extreme water setup

        • ronch
        • 7 months ago

        I’m all for AMD but if you’re upgrading from a 1700X then single-thread must be critical for you, and in this case I’d probably point you to Intel and OC it for all it’s worth.

          • f0d
          • 7 months ago

          going by the info at computex it looks like amd has caught up (or is close enough) to intel in terms of ipc and is close enough at clock speed but the price at least here in australia is much cheaper for 6 cores from amd – around $300 for the 3600 and an i5 9600k is $400
          also i can use my old mobo and just sell my old 1700x for 2/3 of the price of a 3600

          its all about the price, its basically going to cost me about $100 to upgrade and get intel like single thread performance

          pretty much every system i have had starting from core 2 has been intel because lets face it they were MUCH better (FX was a joke) but since ryzen amd has actually been worth buying and since im not a fanboy of any brand at all i go for whatever is best price/performance whenever i buy

          now we just need amd to make good gpu’s again

      • Zizy
      • 7 months ago

Yup, 3700X->3800X is a smaller jump in performance than 3800X->3900X (on paper) with about the same relative jump in price.
But the rosy outlook for the 3900X could be hampered by its having 2 clusters – it might not perform as well as a monolithic 12C chip.

      Still, the 3700X is surprisingly cheap for the performance while going up requires quite a lot of money, offering questionable value.

    • Mr Bill
    • 7 months ago

    So, I guess we can bring back the Quake benchmarks for this new generation of video cards?

    Edit: Because Quake gets ray tracing introduced June 06.

    • tfp
    • 7 months ago

Can someone explain how AMD’s 7nm process node compares to Intel’s current 14+++++++ nm and pending 10 nm? I thought I read somewhere that the 7 and 10 nm processes have similar properties when it comes to the size of gates and possibly power usage. If that is true, has the process definition gotten fuzzy?

      • Shobai
      • 7 months ago

      Yep, I understand that that’s pretty much the state of it.

      • just brew it!
      • 7 months ago

Definition of process size has been fuzzy for a few years now, and isn’t getting any clearer. Heck, process features like copper interconnects, SOI, FinFETs, etc. have also introduced additional variables at various times over the past couple of decades, so direct comparison of process nodes between different manufacturers has been a bit of an apples-to-oranges thing for quite some time.

        • tfp
        • 7 months ago

Makes sense, thanks.

          • Wirko
          • 7 months ago

It’s worse than fuzzy: manufacturers have been talking about 1x, 1y, and 1z process nodes in recent years, at least for DRAM. It’s vaguely known that x, y, and z are between 0 and 9, and 1z < 1y < 1x, except when they aren’t.

          [url<]https://www.anandtech.com/show/14118/samsung-develops-8-gb-drams-using-3rd-gen-10nmclass-process-technology[/url<]

    • not@home
    • 7 months ago

    I built my first PC with 64MB of ram, later upgraded to 128MB. Now I can get a CPU with 64MB of L3. If those old OSes could run on new hardware, how fast would they be?

      • jackbomb
      • 7 months ago

      Ludicrously.

      • ronch
      • 7 months ago

      Some time ago I ran the Quake 2 Timedemo on my FX-8350 + HD7770 combo and IIRC got something like 600+ FPS (or was it twice that??). So that’s a fairly complex 3D game on a CPU that’s considered lackluster today and a GPU that’s far from being the fastest today. When I get one of these new Ryzens I’m gonna bench again.

    • ozzuneoj
    • 7 months ago

    Looks like a nice improvement, but I’m still not sure that it’s time for me to retire my overclocked 2500K. My workloads are pretty light these days. If the Ryzen 6 “8-core higher clocked” rumors had been true, I would have considered one, but at the current pricing I might as well continue to wait. I’d like to double the physical core count and have a large IPC increase (and clock speed increase as well of course) for $200-$250. Preferably without having a system that runs hotter at stock clocks than my old chip with a 30% overclock.

AMD seems to be closer to achieving all of those requirements than Intel (mostly due to heat and cost), but it’s looking like 8 years still isn’t quite enough for this to happen. Thankfully, I’m patient. 🙂

      • odizzido
      • 7 months ago

      I am running an i5 750 myself. AMD certainly has my attention but my 750 still works just fine.

      My limiting factor is turning out to be ram. I originally got 2x2gigs, leaving room to upgrade later if I needed, which I did. Now all of my ram slots are full and I really am starting to need more than 8gigs. I never expected to have a ten year old CPU running everything I want it to just fine. It’s not worth spending the money on DDR3 unless I can find some for really cheap, but it’s also not worth buying a new CPU/mobo because mine is just fine. I guess I will just sit on it for now.

      • Spunjji
      • 6 months ago

      What clock speed is your 2500K running at?

The Ryzen 7 3700X would fulfil at least the IPC and core count requirements with gusto, and definitely use less power into the bargain. Give it 12 months after release and you’ll probably find it at your preferred price 🙂

    • Usacomp2k3
    • 7 months ago

    Any mention of timing on the APU version? the 3600g was the rumor. If they would take the $199 3600 and slap on a decent video card, I’ll buy one day 1.

      • Spunjji
      • 7 months ago

      I think those rumours may have been inaccurate. They’re already using the Ryzen 3000 nomenclature for updated 12nm Zen+ and Vega APUs (3200G and 3400G), so I’m expecting a bit of a wait before we get a proper Zen2 + Navi APU.

        • Usacomp2k3
        • 7 months ago

        Where are you seeing a 3400g?

      • ronch
      • 7 months ago

      I expect it to come out in Q4.

    • ronch
    • 7 months ago

    I don’t know why but now that I think of it, the more I realize ‘chiplet’ shouldn’t even be a term. It’s just an MCM, strictly speaking, isn’t it?

      • Aranarth
      • 7 months ago

I suppose, except that in this case you have CPUs talking to another chip acting like the northbridge, all on the same package.

When I hear MCM I think of the Core 2 Quad or the Pentium D, with two completely identical CPUs talking to an external northbridge.

      I guess it is just in how you look at it.

        • Zizy
        • 7 months ago

        Well, IBM included cache in their MCM beast.

I think ‘chiplet’ got its name from the inability to function independently. AMD’s design requires the IO part and the compute part together to function. On the other hand, all MCM parts could be stripped down to a single chip.

        • ronch
        • 7 months ago

        I think AMD just coined the term. It’s still an MCM.

        • Rza79
        • 7 months ago

        Like ronch says, it’s always been called MCM.
Intel’s own first-gen Core i CPUs for laptops (and the i3 on desktop) had the northbridge as an MCM on the same package. The two chips were even on different nodes, just like what AMD is doing. That was back in 2010-11. AMD is doing nothing new here but is blowing a huge horn about it.

        [url<]http://cdn.cpu-world.com/Images/uploaded/0001/01/L_00010121.jpg[/url<]

          • Waco
          • 6 months ago

Chiplets are different in that they aren’t functional on their own without the other dies, and the connectivity between them is more akin to routing on a chip than an external bus. So sure, it’s an extension of an existing technique, but it’s still new in execution and scope.

            • Rza79
            • 6 months ago

            You want to tell me that those two chips on the picture that I linked can work individually?

Not only that, but it uses QPI to interconnect the chips, which at its base is not too different from Infinity Fabric, with both being point-to-point interconnects.
I really can’t see the difference from Intel’s Arrandale from 2011. One chip is purely a CPU and the other a northbridge (and in Intel’s case, it also contains a GPU).
You can twist the words as much as you want, but it really isn’t new in execution and scope.

            • Waco
            • 6 months ago

            I guess so. I have a hard time comparing the Rome/Ryzen 3 design with Clarkdale/Arrandale given the latter is literally repacking older external chipsets into the same package. AMD designed this to be flexible across the stack with up to 8 tiny chiplets and the larger I/O die. Using uber-small chiplets versus monolithic CPU dies is the distinction IMO.

            • Rza79
            • 6 months ago

[quote<]...given the latter is literally repacking older external chipsets into the same package[/quote<]
That’s where you go wrong. The Arrandale CPU core and its accompanying northbridge are custom designs that together form Arrandale. At that time, Intel was still mainly on 45nm; as such, the i5 and i7 of the day were 45nm chips with the NB integrated. Having limited 32nm capacity ‘forced’ Intel into the MCM design, where the NB is 45nm and separate from the 32nm CPU cores. AMD’s situation is quite similar, with 7nm supply and 14nm GF contracts. I don’t understand where you get the idea that Arrandale was "older external chipsets into the same package" because it’s totally the opposite.

            • Waco
            • 6 months ago

            The memory controller design was a slightly changed version from the P45 chipset.

            • Rza79
            • 6 months ago

The memory controller is just one part of the NB. The rest was heavily updated:
new IGP, Nehalem CPU cores, QPI interconnect, …
Even so, that’s beside the point that I’m trying to make.
The overall design of Arrandale is the same as what AMD is doing.

            • Waco
            • 6 months ago

Except in scope and execution, agreed. 😉

      • tipoo
      • 7 months ago

Afaik, an MCM connects two monolithic chips (two CPUs, or a CPU and GPU, etc.), each with everything it needs on its own die. The difference with a chiplet is that each chip doesn’t have to have all the requisite parts: AMD put the I/O and DRAM controllers into a single functional block for the overall part, while the CPU cores and L3 cache are contained within each individual compute die.

      I guess there’s some MCM’s that already do that though, so it’s a good deal of branding like APU too.

      • Lordhawkwind
      • 7 months ago

      here’s a good description of what a chiplet is

      [url<]https://www.extremetech.com/computing/290450-chiplets-are-both-solution-and-symptom-to-a-larger-problem[/url<]

    • ronch
    • 7 months ago

    I double posted so I edited this post to what it is now and is reserved for whatever I can think of in the future to post about this article. Waste not want not.

    • crabjokeman
    • 7 months ago

    “not that surprising given the latter chip’s lack of Hyper-Threading, but still impressive considering the Ryzen chip’s 26W TDP disadvantage.”

    Don’t compare AMD and Intel TDP numbers directly. They measure TDP differently.

    • Blytz
    • 7 months ago

Can someone educate me: 3600X vs 3700X?

Six cores, a higher base clock, and the same boost clock as the 8-core, but 95W TDP vs 65W.

I know TDP is a thermal rating, but surely an 8-core unit generates more of a thermal requirement for cooling than a 6-core unit at the same speed.

    My head hurts, someone please teach me.

    edit – and while you’re at it, why aren’t we seeing boost clocks higher on lower core count dies (seems if you want say 4.6 you gotta buy a 12 core unit, they can’t/won’t do it on 6 ?)

      • Spunjji
      • 7 months ago

      Best guesses:

      1) Their top-bin dies running with 6 cores will all be going to the 3900X. This likely means the ones left are running some wacky process variances, especially this early in the product cycle.
2) That +200MHz base clock will come with a voltage penalty, multiplying the above effect.
      3) You’ll probably find a lot of chips running below TDP but due to the above concerns, they’ll want their partners taking the weakest chips into account when designing thermal solutions.

      For your final question: See (1) above, plus product segmentation! Yay! As their products become more naturally competitive, so their incentive to be “generous” with their low-end products will fade. Hopefully we can make up for that with a little overclocking.

    • Klimax
    • 7 months ago

And I just spotted an article on Ice Lake:
[url<]https://www.tomshardware.com/news/intel-10th-generation-core-10nm-ice-lake-gen11-graphics-sunny-cove-thunderbolt-3-usb-c,39477.html[/url<]
No wonder Intel was recycling Skylake so much; Ice Lake expanded most of the OOO resources, including the L1 cache (a notoriously massive structure). Looks like the CPU space is about to become hot again...

      • thx1138r
      • 7 months ago

      Hmmm, this isn’t the first time Intel claimed to be shipping 10nm mobile chips, why should we believe them this time?

        • Klimax
        • 7 months ago

Well…
[url<]https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html[/url<] 😉

          • Spunjji
          • 7 months ago

          “Launch Date: Q2’18”
          The first reviews of a shipping device including that partially-broken CPU appeared in mid-January 2019. That link you posted is evidence for thx1138r’s comment!

      • sconesy
      • 7 months ago

      Like clockwork. But all the features in the world won’t mean a thing if Intel can’t quiet the vulnerability jitters.

    • ronch
    • 7 months ago

That 3700X looks very nice at just 65W, but I wonder if it’ll throttle too much to stay within power and thermal limits. Then again, there’s the 3800X, which I expect won’t stay at $400 past Christmas.

It’s also worth noting how AMD, with a much smaller budget, can match Intel now not only in multithreaded but in single-threaded performance as well, with just one iteration (I wouldn’t call Zen+ an iteration). Yes, it’s a new core, but Intel’s core is an absolutely refined, leading-edge design, and I wonder what trick is left in the book that’ll help squeeze out more performance.

    All in all exciting news. Navi will hopefully close the efficiency gap a lot too.

    • tipoo
    • 7 months ago

    “a 25% boost in performance-per-clock and a 50% improvement in performance-per-watt over the company’s Vega architecture.”

    Over 7nm Vega or Vega 56/64?

      • auxy
      • 7 months ago

      He said it in the post:
      [quote<]Given that AMD's CEO explicitly referenced both design and process in her statement, that 50% improvement in performance-per-watt clearly includes the move to 7nm fabrication, so the comparison is against Vega 64, not the Radeon VII.[/quote<]

    • anotherengineer
    • 7 months ago

    So, I’m guessing there is no one from TR at Computex this year?

      • auxy
      • 7 months ago

There is no one to go… ( ;∀;)

    • Concupiscence
    • 7 months ago

It’s going to be a really interesting release. If AMD manages to deliver a general performance improvement on the order of 12%, that’s as big a jump as Sandy Bridge enjoyed over Nehalem. Zen 2/Ryzen 3 won’t enjoy a corresponding bump in clocks like Sandy did, but the difference in AVX workloads will make amends for Zen’s relative weakness in multimedia and scientific computing to this point.

    If the 12 core part isn’t too constrained by dual-channel memory bandwidth, it’s going to be absolutely killer. And if it manages the rated clocks with a solid stock cooler, it’ll be reliably faster outside of AVX-flogging apps than the Mount Vesuvius I call my 7940x…

      • ptsant
      • 7 months ago

      For scientific computing, the new chips have doubled the FPU units. And they do support AVX, just not the 512 version. So, with the exception of AVX-512 (which is limited to top-tier Intel chips), the gap should close quite a bit.

        • Klimax
        • 7 months ago

Not exactly:
[url<]https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html[/url<]
Aka the sole Cannon Lake part does have it, so most Ice Lakes should have it too. And it would be a very good idea; AVX-512 is an excellent CISC set.

    • rudimentary_lathe
    • 7 months ago

    All of this looks very promising.

    With respect to Navi, hopefully small == inexpensive. If so, AMD can count me in for at least one.

    Zen 2 also has me eyeing a CPU upgrade, though I don’t really need one. The pricing is a bit higher than I was expecting, though. Hopefully those prices come down with promotions.

      • NovusBogus
      • 7 months ago

      New architecture is a good sign, it’d have been DOA if it was just another GCN derivative. Most of Radeon’s competitive problems stem from it being a decent 2012 architecture still shambling about in 2019, whereas Nvidia is offering a slightly better than decent 2018 architecture (and, at the lower tiers, an extremely strong 2016 architecture) and Intel will presumably be bringing a 2019 or 2020 architecture to bear with Son Of Larrabee.

    • anotherengineer
    • 7 months ago

    I suppose the most surprising thing is no Ry7 3700 cpu, well that and,

    ” X570 is an in-house AMD design that offers up 16 lanes of PCIe 4.0 capability. That’s rightβ€”both the Ryzen 3000 CPUs and their chipset support PCI Express 4.0; a total of 40 lanes altogether if you count the 4 that will be used to connect the CPU to the chipset.”

    • Johnny Rotten
    • 7 months ago

Definitely a strong, resurgent showing but not the home run I was hoping for. The 3900X will probably be very close to the 9900K in single-core workloads; let’s say it will be a draw. It will be significantly faster in multi-threaded operation, obviously, and will *probably* have a moderately better power profile. Basically no real reason to purchase a 9900K over the 3900X. The problem is that all of that comes from the massive trump card of a large process advantage. I was hoping that they would be ~10% faster so that when Intel shrinks their process, THEN the two would be "equivalent". The way it’s looking to me, once Intel shrinks their process, they are going to be well in front again, which is a little disappointing.

Now, that said, Intel has been bungling their new node for quite some time now and, if rumors are to be believed, is still at least a year away from getting that sorted out. So it’s looking like AMD has at least a one-year runway where you’d have to think they have the upper hand; I just wish it was a lot more decisive.

      • Shobai
      • 7 months ago

      ~10%? So, another Spectre-type mitigation or two and you’ll have what you’re asking for, amirite?

        • Johnny Rotten
        • 7 months ago

        No not at all. I want more performance from AMD not less performance from intel. In 12 months when intel shrinks their process I want them to be catching up to AMD performance (core for core) not leap-frogging them which is what it is looking like. I want to see intel in the chase position (performance wise).

        • Klimax
        • 7 months ago

Until another few holes are found in AMD’s CPUs. (They’ve got some Spectre variants too already.) Just because they are not in the spotlight doesn’t mean they are flawless.

          • Spunjji
          • 7 months ago

          Looking only at what we already know, Intel have so far been disproportionately impacted by speculative execution vulnerabilities.

          That may change in future (and it also may not), but the thing is, AMD don’t have to be flawless – they just have to be less-flawed.

      • Anonymous Coward
      • 7 months ago

      Naw, I don’t see Intel shaking off AMD easily this time. AMD has homed in on a solid design and their fab partners are on top of things. It seems to me that only mistakes, rapid changes in what the market demands, or a price war can separate the two now.

      • freebird
      • 7 months ago

The problem is that Intel’s 10nm will not perform at clock speeds as high as 14nm++; they’ve shown this before in presentations.

Two, 10nm is going to be a severely supply-constrained node. They converted a couple of plants back to 14nm, and others they are hoping to gear up for 7nm.

Intel will only "catch up" when they get to 7nm, maybe, but they don’t have the production experience TSMC and Samsung are both gaining from using EUV in production NOW. The wafers they are able to run through EUV are the experience Intel is going to be playing "catch up" on…

      I’ve posted about this before, but TSMC is buying 60% of all EUV machines made by ASML this year (2019) and Samsung is bringing a 2nd EUV plant on-line in 2020, which will need EUV machines. So yes, Intel will definitely have to “catch up”.

        • techguy
        • 7 months ago

        This is untrue. At the Architecture Day event late last year where Intel talked about Sunny Cove (among other future products), they were specifically asked about clock speed regression on 10nm and the response was something to the effect of “well, we’re not going to go backwards”.

          • Redocbew
          • 7 months ago

          I’m not sure that’s the best example. The correction is that it’s possible to avoid clock speed regression on 10nm, not that it’s easy or that we’ll be seeing actual products you and me can buy sometime soon.

            • techguy
            • 7 months ago

            Sure, except it’s an Intel exec making the statement which clearly indicates there will not be clockspeed regression on 10nm products (compared to existing 14++ SKUs) in a public forum –
            to me this indicates that:
            1) the process is healthy
            2) they’re confident in the ability to deliver >= clocks vs. 14++

            You can take the pessimistic view if you like, but Intel would be looking down the barrel of a lawsuit from investors if this statement turned out to be false. I don’t think Intel execs are inclined to put the company in that position at this point.

            • Waco
            • 7 months ago

            Even *if* clocks dropped for consumer parts, if they stay in the same ballpark for Xeons, Intel won’t be in any hot water.

            • Redocbew
            • 7 months ago

[url<]https://www.crn.com/news/components-peripherals/208801780/intels-gelsinger-sees-clear-path-to-10nm-chips.htm[/url<]
From 2008. 2008! Eleven years ago! And you still believe them!

            • techguy
            • 7 months ago

            No, I clearly referenced an event that took place in December 2018. That’s not 10 years.

            • Redocbew
            • 7 months ago

            So RDF aside… you realize I was mostly agreeing with you, no? Even for us nerds there are good reasons not to be very interested in the problems Intel has been having with their 10nm process. There are plenty of other ways to increase performance other than shrinking transistors and increasing clock speed, and meaningful differences in stated process sizes got fuzzy quite a while ago. I wouldn’t call the word of some exec a “good reason”, but I suppose there has always been a lot of people who would rather be told what to think than figure it out for themselves.

            • techguy
            • 7 months ago

            Being told what to think does not factor into this situation. My point was simply that a public, forward-looking statement by an officer of the company has legal ramifications if he is lying, and stock ramifications if he is just wrong. That being the case, executives of multi-billion dollar corporations tend to err on the side of caution, rather than braggadocio.

            • Redocbew
            • 7 months ago

            I wish execs were that accountable. That’s a great idea. Shame it doesn’t fit the world we live in.

            • techguy
            • 7 months ago

            Corporate culture matters. Maybe there’s a shady company out there run by mafia members in suits and ties that acts as you imply; I haven’t seen a high-profile case like that since Enron, though. Certainly not Intel.

            • Redocbew
            • 7 months ago

            I guess you haven’t been following Theranos.

            Anyway, I’m not comparing Intel to them. The problems Intel has had even getting to this point are well known, and the story has been the same for quite some time now. That 10nm is “challenging”, but they’re “confident” and “looking to the future” or some such thing. Roadmaps get shuffled around and products get moved forwards and backwards all the time. A misbehaving process is just one reason of many for why that happens.

            The point is that if what you said were true, this would cause Intel (and everyone else) more than a PR problem, but usually it doesn’t.

            • freebird
            • 7 months ago

            Pretty sure Intel execs were saying 10nm should be ready by 2017, then 2018, and now 2019…

            If only one out of a hundred chips hits the same clock speeds as 14nm++, then it wouldn’t have been a lie, correct? Then again, it won’t be a lie if they don’t make any desktop chips on 10nm either… maybe they will only produce U- and Y-series laptop chips on 10nm. They surely won’t have enough capacity to make laptop, desktop, GPU, and server parts all on 10nm with the limited 10nm production facilities.

            If you seriously think Intel will be shipping desktop parts on 10nm with clock speeds comparable to the current top-end 14nm++, then tell me why Intel took the plants that were slated for 10nm production, backported several to 14nm, and moved others to be upgraded to 7nm. That doesn’t sound like “confidence” in the process to me.

            • Spunjji
            • 6 months ago

            We already have several different sources indicating that they have, in fact, gone backwards on clock speeds at 10nm. The first indication was Cannon Lake, which maxed out at a 3.2 GHz boost on the lone i3 chip they released.

            Since then, leaks indicate that the Ice Lake i7-1065G7 has clock speeds of 1.5/3.5/3.9 GHz for base/all-core boost/single-core boost. These speeds were leaked by wccf and have since been partially corroborated by leaked Geekbench info. The leaked scores also align with that clock regression versus the predicted IPC gains, wherein the two roughly balance out.

            The Whiskey Lake i7-8565U was 1.8/4.1/4.6 GHz, for reference.

      • Aranarth
      • 7 months ago

      By the way, if you need quad memory channels and more horsepower than the 12-core/24-thread or 16-core/32-thread Zen 2, then wait for the next round of Threadripper.

      That will rip (blow) your socks off…

    • derFunkenstein
    • 7 months ago

    Are those L3 cache numbers in the table right? If so, then aren’t all of these multi-chiplet modules?

      • auxy
      • 7 months ago

      No. Lisa said they doubled the L3 cache. Zeppelin (the Zen/Zen+ 8c die) has 16MB of L3, so that means 32MB of L3 cache per core chiplet. (*’ω’*)

      [quote=”AMD”<]Hmm. Memory performance is still bad. Screw it! Stick an egregious amount of cache on it![/quote<]

        • synthtel2
        • 7 months ago

        It does make sense with the chiplet design – power density would be pretty high without something like that, and if the die needs to be bigger anyway, what better to fill it up with?

        • derFunkenstein
        • 7 months ago

        So it’s the text in the paragraph above the table that’s off. It says it went from 8 to 16.

          • RAGEPRO
          • 7 months ago

          Yeah, that was my bad. I fixed it. Thanks for pointing it out.

      • ptsant
      • 7 months ago

      My guess is that 6/8-cores have a single chiplet with its associated L3, while the 12-core has two 6-core chiplets and therefore double the cache.

        • derFunkenstein
        • 7 months ago

        Agreed, it was the difference between 16 MB in the text and 32 MB in the table that I was asking about. Zak fixed it.

      • Shobai
      • 7 months ago

      For those playing at home, the text above the table says each 8-core die gets 16 MB cache (up from 8 MB), but the table suggests 32 MB. Something’s not lining up.

        • thx1138r
        • 7 months ago

        The text is wrong/confusing.
        The original Ryzens had 2MB (L3) cache per core, which equates to 16MB cache on an 8-core chip.
        The new Ryzens doubled the cache size to 4MB per core to give 32MB cache on an 8-core chip.

        The confusing part is that on the original Ryzens, the cache was split into two parts, with each 4-core CCX getting half of the total, so the 16MB of cache was usually written as 8+8.
        The new Ryzens doubled the CCX size so that each core complex now holds 8 cores, so the cache is now all in one place i.e. 32MB, not 16+16.
        Then on the top-end Ryzens with more than 8 cores, the cache will again be split across multiple CCXs (or chiplets), so their cache will be referred to as 32+32.

        Easy 🙂
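For those who like to see the arithmetic, here is a tiny sketch of the cache math described above. The per-core and per-chiplet figures are the ones assumed in this thread, not official AMD spec-sheet numbers:

```python
# Cache math from the comment above (figures assumed from the thread,
# not from an official AMD spec sheet).
L3_PER_CORE_ZEN1_MB = 2   # original Ryzen: 2 MB of L3 per core
L3_PER_CORE_ZEN2_MB = 4   # Zen 2: doubled to 4 MB per core

def l3_total_mb(cores, mb_per_core):
    """Total L3 for a fully-enabled die."""
    return cores * mb_per_core

def l3_label(chiplet_caches_mb):
    """Format the split-cache notation used in spec tables, e.g. '32+32 MB'."""
    return "+".join(str(c) for c in chiplet_caches_mb) + " MB"

print(l3_total_mb(8, L3_PER_CORE_ZEN1_MB))  # 16 (written as 8+8 across two CCXs)
print(l3_total_mb(8, L3_PER_CORE_ZEN2_MB))  # 32
print(l3_label([32, 32]))                   # 32+32 MB, as in the Ryzen 9 3900X row
```

The split notation follows the chiplets, which is why the 12-core part is listed as 32+32 MB rather than 12 × 4 MB.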

          • Anonymous Coward
          • 7 months ago

          Interesting that they went from 8+8 to a unified 32. That’s a [i<]monster[/i<] cache. For [i<]years[/i<] Intel has treated the 8MB option as deluxe, a paid upgrade on the laptop I got last year at work, and the same with the 4MB option on the dual-core parts.

    • Aranarth
    • 7 months ago

    OH BOY!!! Now I really wanna see the benchmarks!!!

    I’ll take the 8c/16t 65W chip! I think the price is very reasonable for the performance. I wonder how well it overclocks…

    If Navi proves to be a serious upgrade from the RX 580 8GB at $250, I might take one of those as well.

    I really hope you can use nothing-special / cheaper RAM with it as well.

      • enixenigma
      • 7 months ago

      I have my doubts that the 65W chip will be able to maintain the boost clocks for long. I’m looking to move to the 3800X as a real upgrade to my 1700X. I really hope that AMD is able to offer Navi at a good price, but I worry that the combination of a new architecture and 7nm is going to lead to them matching price with their competition (2060/2070).

        • Pancake
        • 7 months ago

        Then you’re only $100 short of the 12-core 3900X. Life would really have to suck if you couldn’t squeeze out the extra dough for the four extra cores, given how much the rest of the build will cost.

        But, like others, the 3700X is looking quite the part. 65W. 8 fast cores. What’s not to like? If it’s even 30% faster at single/low-threaded tasks than my i5-3570 then it would be an upgrade option. But I bet it doesn’t even get there.

          • enixenigma
          • 7 months ago
          • Spunjji
          • 7 months ago

          Is that a 3570K or a 3570? If it’s the latter, then your bet’s a bad one! 🙂

          The 3700X is already at a 15% advantage over the i5-3570 on clock speed alone (4.4 turbo vs. 3.8). Even if you ignore AMD’s figures on Zen2 entirely and just take that clock speed with Zen+ IPC (~11% better than Ivy), that’s a 27% advantage over your 3570 in single threaded performance, while the boost in multi-threaded performance will be *at least* 150% (twice as many cores + SMT + IPC).

          If AMD can tack on another ~10% of IPC then you’re up to roughly a 40% boost over the 3570 in single-threaded applications. Exciting times! 🙂
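The estimate above is just multiplied ratios; here's a quick sketch of that back-of-envelope math. The 1.11 (Zen+ over Ivy Bridge) and 1.10 (hoped-for Zen 2 gain) factors are the thread's rough guesses, not measured numbers:

```python
# Back-of-envelope single-thread speedup: clock ratio times stacked IPC gains.
# This ignores memory, cache, and turbo behavior, so treat it as an upper-ish bound.
def st_speedup(new_clock_ghz, old_clock_ghz, ipc_factors=()):
    ratio = new_clock_ghz / old_clock_ghz
    for f in ipc_factors:
        ratio *= f
    return ratio

clock_only = st_speedup(4.4, 3.8)                # ~1.16x from clocks alone
zen_plus   = st_speedup(4.4, 3.8, (1.11,))       # ~1.29x, the "27%" figure
zen2_hope  = st_speedup(4.4, 3.8, (1.11, 1.10))  # ~1.41x, the "roughly 40%"
print(f"{clock_only:.2f} {zen_plus:.2f} {zen2_hope:.2f}")  # 1.16 1.29 1.41
```

The multi-threaded estimate works the same way, with a core-count ratio and an SMT factor multiplied in.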

            • Pancake
            • 7 months ago

            40% or even more I can believe for particular rendering or computational workloads leveraging the latest FP extensions.

            But for the mundane tasks of web browsing and coding in Eclipse, which make up 95% of what I do, I’d be expecting closer to a 20% single/light-threaded improvement. Sure, I also get twice the cores, which I’ll almost never use, and a welcome but slight improvement in energy use. But after 7 years? 20% after 7 years. That’s pretty lame.

        • Aranarth
        • 7 months ago

        I bet what you will see is people sticking a huge heatsink on it, fooling the chip into thinking it is a 100W or 125W part, and letting the chip figure out the max clocks itself.

        Since they already have an ES running at 5 GHz, I don’t see why overclockers couldn’t achieve the same by increasing the wattage headroom and adding plenty of cooling.

          • enixenigma
          • 7 months ago

          You may very well be correct, assuming that Zen 2 can actually reach those clocks with normal cooling and that the 3700X isn’t getting lower-binned chips that simply can’t hit them (I don’t believe this to be the case, mind you). If OC/XFR can bring the 3700X to parity with the 3800X in the majority of cases, then it would obviously be the no-brainer choice.

        • sconesy
        • 7 months ago

        Because of temperature? Would a better cooler enable better boosted clocks as Aranarth says?

          • enixenigma
          • 7 months ago

          Possibly. AMD hasn’t detailed how boost/XFR will work with these new chips. Good cooling and overclocking would likely allow the 3700X to match the 3800X’s stock speeds (the 3800X’s boost is only 100MHz higher, after all).

    • cynan
    • 7 months ago

    [quote<] We fully expected AMD to trot out GCN one last time for us to gawp at, but instead, the company surprised us by announcing that Navi is based on what is purportedly a new graphics architecture called "Radeon DNA", or RDNA for short.[/quote<] So they're applying their GPU marketing strategy of refreshing products in name only to their GPU architecture naming scheme now?

    • DancinJack
    • 7 months ago

    lol @ Intel “playing catch up” comments

    It’s like some of you have forgotten that Intel STILL leads in IPC despite being on 14nm++++++++++++++++++++++++, is still more than competitive in power for current designs that are on shelves, and they have a new CPU to come out this year just like AMD.

    You can all call AMD the winner if you want at this point I suppose if it makes you feel good.

      • jarder
      • 7 months ago

      I don’t know about raw IPC; let’s wait to see the real benchmarks. But I do think that the 9900K will very likely retain the performance crown for gaming, a very important set of benchmarks that do bestow tangible bragging rights.

      Apart from that, Intel may not be playing catch-up at the moment, but it’s looking like they will be come July, in some regards at least. In particular, losing on the power consumption benchmarks and being well behind the curve in terms of value ($1,200 for a 4.3 GHz 12-core i9-7920X looks like a bad deal).

        • DancinJack
        • 7 months ago

        I don’t and didn’t disagree with any of that. I’m just taking issue with the fact that there are numerous people in the comments saying Intel is playing catch-up when in fact they are the market leader and currently sit on top. Those are just facts.

          • just brew it!
          • 7 months ago

          All depends on what you’re measuring, I suppose.

          It does seem like AMD (with their fab partners) is currently ahead of Intel in manufacturing tech. I fully expect Intel to get their process issues sorted eventually (probably soon enough for them to remain the market leader). But this isn’t guaranteed, and if it doesn’t happen, their position on top of the CPU industry could be in danger.

           Maybe Intel should farm out some of their manufacturing to TSMC as a stopgap. 😉

            • DancinJack
            • 7 months ago

            Except there aren’t any CPUs out there from AMD/TSMC at 7nm yet. I fully concede that TSMC 7nm is probably better than Intel 14nmplusplusplusultra, but that’s not the matchup we have currently.

            As usual, people just don’t want to wait for benchmarks from anyone but marketing teams. downvote and ridicule this man for using facts and independent analysis!

            As I have said numerous times, it’s great if they can deliver everything in the slides, and I welcome the success of AMD in general. But, like, Intel is doing fine? And people in these comments act like Intel’s end is nigh (except for the ~80 percent market share they have, I guess).

            [url<]https://www.cpubenchmark.net/market_share.html[/url<] [url<]https://www.tomshardware.com/news/amd-market-share-desktop-server-notebook,38561.html[/url<]

            • just brew it!
            • 7 months ago

            Guess we’ll know for sure in a little over a month…

            And I did say in my post that I do expect Intel to get their process issues straightened out, allowing them to maintain their dominant position. So it’s not like I am predicting doom and gloom for Intel. But if they don’t execute smoothly going forward, they [i<]could[/i<] end up ceding additional market share to AMD.

            • DancinJack
            • 7 months ago

            Agreed 🙂

      • sconesy
      • 7 months ago

      It’s more the fact that Intel has been the gold standard for so long; AMD’s continued progress plus Intel’s failures at 10nm have seriously jeopardized that status. The catch-up is more like catching up to being ahead again, the natural order of things. Even AMD shills are speaking about it this way subconsciously. But the playing field will be more even than it has been in a decade as of the Zen 2 release.

        • DancinJack
        • 7 months ago

        i like this explanation.

        • ronch
        • 7 months ago

        I was just gonna say something like this. It’s not about Intel catching up with AMD; anyone who thinks that is twisting the truth a little bit. It’s more like Intel catching up to their usual cadence. Right now they’re obviously losing a little bit of steam. I also view AMD’s efficiency advantage with caution, because they are on a smaller node, one I would think actually sits somewhere between Intel’s 14nm and 10nm. When Intel rolls out their true 10nm, AMD might lose that advantage or at least find themselves on equal footing with Intel on efficiency again.

          • sconesy
          • 7 months ago

          I’m a little confused by the Intel roadmap leaks, but assuming they are accurate and I’m reading them the right way, true desktop 10nm isn’t happening until 2021, which gives AMD ample time.

      • Krogoth
      • 7 months ago

      Zen 2 has already surpassed Skylake IPC, albeit by what looks like only a small margin. Intel’s only ace in the hole is AVX-512 support; AMD hasn’t released anything that supports it yet.

        • DancinJack
        • 7 months ago

        that’s…not true at least to my knowledge. Please provide a link?

          • Krogoth
          • 7 months ago

          AMD’s official PUBG demonstration makes it painfully obvious. The 3800X was able to keep up with the 9700K despite having a clock speed disadvantage (4.9 GHz max turbo, in practice closer to 4.8 GHz, versus 4.5 GHz on the 3800X). FYI, PUBG has historically favored the Skylake family over Zen 1 and Zen+.

          Zen+ has been on the toes of Haswell/Skylake in the majority of CPU-dependent applications. So what do you think happens if Zen 2 provides a seeming 10-15% gain over them? Just enough to get over Skylake, though not by much.

          Intel has been too complacent, just tweaking and evolving Sandy Bridge. They are starting to feel the long-term consequences of that.

            • Redocbew
            • 7 months ago

            That’s not a link. I’d be interested also since I’d be very surprised if IPC was publicly disclosed for either Zen or Skylake.

            • DancinJack
            • 7 months ago

            AFAIK IPC isn’t disclosed for anything modern at all. But like, I just don’t get where he gets off spouting this. If AMD overtook Intel in IPC, it’d be news? I haven’t heard a peep.

            • Waco
            • 7 months ago

            If AMD chips are beating Intel chips with similar core counts and lower clocks… the writing is on the wall for IPC. Clearly, at least in the workloads shown so far, AMD has an edge. A slight one, but still an edge.

            • DancinJack
            • 7 months ago

            Again, you guys can enjoy the marketing if you want, but it’s exactly like AMD has a great reputation there?

            Even then, I still don’t know how Krogoth can be so definitive in his statement, and the lack of proof is still an issue. And I’m not just picking on AMD, as almost everyone seems to say; I have said multiple times that Intel’s, Nvidia’s, etc. propaganda is equally as useful. Not to mention that Kroggy seems to make assertions like this without evidence on the regular, so I choose not to trust anything until a third party benchmarks the chips. That’s not too much to ask, IMO.

            • Waco
            • 7 months ago

            Trust, but verify.

            Agreed that AMD generally is pretty good at cherry-picking, but in general, gaming performance isn’t something so easy to game.

            • Voldenuit
            • 7 months ago

            >Again, you guys can enjoy the marketing if you want, but it’s exactly like AMD has a great reputation there?

            We’ll know it’s legit if AMD releases a 90’s style home music video with Scott Wasson jamming on air guitar.

            • DancinJack
            • 7 months ago

            we can only hope

      • tipoo
      • 7 months ago

      When testing is done, it should check whether Intel still leads Ryzen 3000 in IPC with all the mitigations applied.

      “If looking at the geometric mean for the tests run today, the Intel systems all saw about 16% lower performance out-of-the-box now with these default mitigations and obviously even lower if disabling Hyper Threading for maximum security. The two AMD systems tested saw a 3% performance hit with the default mitigations. While there are minor differences between the systems to consider, the mitigation impact is enough to draw the Core i7 8700K much closer to the Ryzen 7 2700X and the Core i9 7980XE to the Threadripper 2990WX.”

      [url<]https://www.phoronix.com/scan.php?page=article&item=mds-zombieload-mit&num=10&fbclid=IwAR2WARMF4Zv8t3Gh2t9ubYV4WIMVWo3s_fLCY2d6pq_HwNCBa63iTUZdUTY[/url<]
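For reference, the “geometric mean” in that quote is the standard way to aggregate per-test benchmark ratios, since it treats proportional gains and losses symmetrically. A minimal sketch (the sample ratios below are made up for illustration, not Phoronix’s data):

```python
import math

def geomean(ratios):
    """Geometric mean: the right average for per-test performance ratios,
    since a 2x gain and a 0.5x loss cancel out exactly."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Made-up per-test scores (mitigated / unmitigated performance).
sample = [0.80, 0.85, 0.90, 0.82]
print(f"{geomean(sample):.3f}")  # overall fraction of original performance
```

A plain arithmetic mean would overweight the tests with the biggest absolute swings, which is why benchmark roundups use the geometric mean instead.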

      • madseven7
      • 7 months ago

      Actually, they are. I wish that AMD had released the 5 GHz chip they have. If you doubt that they can reach it, think again. Their highest-clocked chip is the highest-core-count chip; if they can reach 4.6 GHz on 12 cores, they could certainly reach 5 GHz on 8 or 6 cores.

        • enixenigma
        • 7 months ago

        Binning is a thing. They could be reserving their best clockers for the Ryzen 9 series parts. Also, the Ryzen 9 is a 6+6 configuration part; maybe 8 cores in that small chiplet running at 5 GHz is more of a cooling issue than we know. Spitballing here.

        Beyond that, I was also curious as to the overclocking potential of the 3800X in particular. With half the cores and the same TDP as the 3900X, you’d hope that they could get a bit more out of it. I guess we’ll have to wait until July to see if that is the case.

        • Spunjji
        • 7 months ago

        That depends a lot on process characteristics and how they interact with fundamental limitations of the architecture. We genuinely won’t know whether *any* of the chips will hit 5 GHz until products come out and we can get a handle on the power/voltage curves.

        • techguy
        • 7 months ago

        You don’t know this. The lower-priced SKUs tend to have worse silicon, relative to the higher-priced SKUs. For all we know, AMD is running each and every Ryzen 3000 part at or near the upper bound of the shmoo plot (for ambient cooling anyway). TDP ratings don’t mean anything if the silicon simply can’t switch at higher speeds, so don’t try that argument.

      • Spunjji
      • 7 months ago

      A hint on the downvotes: I’ve been through all the comments here, and not one person said anything about Intel “playing catch up”. The closest is Krogoth being un-Krogothed about the potential gains for AMD.

      Take a breather, slow down, go easy on the straw men.

        • DancinJack
        • 7 months ago

        You must have just missed it then.

      • ronch
      • 7 months ago

      Well it’s just really kinda cool to see a much smaller company with peanuts to spend on R&D come up with products that are very good and very competitive against products from a much, much larger company that spends tons of money on marketing and .. shall we say, ‘incentives.’ You root for the big bully if it makes you feel good.

        • K-L-Waster
        • 7 months ago

        Surely there must be better things to root for than multi-billion dollar corporations.

          • ronch
          • 7 months ago

          These big companies need the support of us plebeians.

            • K-L-Waster
            • 7 months ago

            Has anyone told them about insoles?

    • sweatshopking
    • 7 months ago

    1.5x performance per watt still [i<] seems [/i<] like it’s well behind Pascal, never mind Turing. Can somebody with more energy and time look that up?

      • Aranarth
      • 7 months ago

      Not sure… I thought I read somewhere that Nvidia was 30% ahead in price/performance.

      This would bring them to parity.

        • JustAnEngineer
        • 7 months ago

        NVidia clearly charges a higher price for performance than AMD does, much to the detriment of gamers’ wallets.

        I believe that you meant to say that NVidia uses less energy per unit of performance than AMD does.

      • Spunjji
      • 7 months ago

      It might depend how they’re measuring. Based on reviews of Vega 56 in the magical AMD unicorn laptop (Acer Helios 500) vs. the GTX 1070 equivalent, Vega’s PpW lags Pascal only by a short distance when it’s not being clocked way past the shoulder of its efficiency curve. The problem for Vega was that peak performance was too low, so they had to absolutely ruin PpW in pursuit of Pp$.

      In theory, a new architecture (how new is it really though?) + 7nm process could net them enough of a boost that they can finally stop clocking the nuts off their products for the first time since GCN released. If they get 1.5x PpW against what Vega achieved at reasonable clocks, it’ll be good. If it’s against stock desktop Vega 64… not so much.

      • Anonymous Coward
      • 7 months ago

      It’s quite difficult to pin down the relative efficiency when so much depends on how hot AMD clocks their GPUs, so I say we can conclude nothing of significance from AMD’s claim at this point.

    • jarder
    • 7 months ago

    So what are the thoughts on the lack of a 16-core? It’s clearly feasible, so why don’t we have one?

    1. Power: 16 cores would fry the consumer-size sockets.
    2. Memory bandwidth: Despite the increased cache and probable higher memory speeds, 16 cores is too much for dual-channel memory.
    3. Holding in reserve: AMD wants to see what Intel’s response is going to be, and if/when Intel comes out with a 10/12-core, AMD responds with 16 cores.
    4. Benchmark ambiguity: A 16-core chip would likely not top the 12-core at many tasks; it would have to run at a slightly lower clock and/or use more power, so it would only top the multi-threaded benchmarks and lose the single-threaded ones.
    5. Binning: AMD doesn’t have enough top-speed 16-core parts to satisfy demand yet.

    I think the main reason is mostly a combination of the last two.

      • Krogoth
      • 7 months ago

      It is more like they don’t want to cannibalize their Threadripper 1/2 lineup just yet, and they don’t need to, since their 8c/12c Zen 2 SKUs are able to contest or outpace Intel’s offerings.

      The 16-core desktop Zen 2 will likely launch around the same time as Threadripper 3. AMD will likely slash prices on the lesser Threadripper 1/2 SKUs beforehand to get them out of the channel.

        • auxy
        • 7 months ago

        Yah. You and Sahrin have the right of it imo. There’s also binning to save the golden dies for those 64-core EPYC chips.

      • cynan
      • 7 months ago

      What about the utility of going beyond 8C/16T with 2 channel memory?

      I think it’s most likely a combo of 3 and 5. If they’re doing 12C, 16C is inevitable.

      • Sahrin
      • 7 months ago

      >So what are the thoughts on the lack of a 16-core? It’s clearly feasible, so why don’t we have one?

      Saving it for Ice Lake, I assume. Intel’s likely going to cook up some pretty exotic EMIB-based stuff and it remains to be seen how it’ll work.

      There’s definitely a 16-core, because the 7nm chiplet has 8 cores on it.

      #1 Power: Not likely. They could sell a downclocked version (dramatically decreasing thermals), but you’re only talking about 30% more power. AMD has released 125W CPUs before.

      #2 Bandwidth: In some workloads this could be a problem (in some workloads it already is). It’s doubtful that it would be true in every workload.

      #3: Reserve: Yes, this is correct. AMD is going to slap Ice Lake in the face with a 16-core 105W part.

      #4: Ambiguity: They are featuring Cinebench, which we already know scales well.

      #5: Binning: I doubt binning is the issue; if anything, I feel like they are aggressively pushing clocks down. 65W for the 3700X at 4.4 GHz is pretty impressive. They are certainly harvesting lots of chips to support EPYC, Threadripper, and future releases. I don’t buy that TSMC’s process can’t yield 5.0; my guess is all those dice are going to TR SKUs.

      • Aranarth
      • 7 months ago

      #1 is highly unlikely

      #2 is possible with large workloads that exceed the cache.

      #3 I’d bet money on that one, since we have seen an ES that is 16-core…

      #4 another possibility depending on workload

      #5 definite possibility considering this is a new process.

      • FuturePastNow
      • 7 months ago

      3 or 5 would be my guess.

      • ermo
      • 7 months ago

      It’ll certainly be interesting to see how the chiplet design responds to benchmarks and real world workloads in terms of cross-die latency etc.

      I won’t lie: Right now I *want* the 3900X more than I’ve ever wanted a CPU, even though I don’t really need it.

      From a more rational perspective, Intel has been playing the onerous over-segmentation game for far too long and has been caught napping when it comes to hardware vulnerabilities. I don’t want to reward that sort of behaviour if I can avoid it. Voting with my pocketbook and all that.

      It’ll be interesting to see when Jim Keller’s effect will make itself felt in Intel’s lineup. From what I understand, what we’re seeing with Zen/+/2 is in large part due to the bet AMD made when hiring him.

      Fascinating times ahead.

      • dragontamer5788
      • 7 months ago

      [quote<]Power: 16 cores would fry the consumer-size sockets. … Memory Bandwidth: Despite the increased cache and probable higher memory speeds, 16 cores is too much for dual channel memory. … Benchmark ambiguity: A 16 core chip would likely not top the 12-core at many tasks, it would have to run at a slightly lower clock and/or use more power, thus it will only top the multi-threaded benchmarks and lose in the single threaded ones.[/quote<]

      That didn’t stop the 2990WX, and AMD had to solve those three problems for that chip. (Albeit in the niche Threadripper marketplace, but it was solved nonetheless, mostly with “the consumer will deal with it”.)

      IMO, #3 and #5 are the biggest reasons. EPYC is clearly more popular, with all of the Meltdown / Spectre / Hyper-Threading bugs on the Intel systems… AMD’s EPYC sales are probably much higher than expected. So 8c/16t dies will be harvested for EPYC / Rome. Probably not enough of them to warrant a 16c release (or even a Threadripper release!).

      Frankly, I’m more surprised at the lack of Threadripper than anything. Perhaps 12c/24t with the potential of 16c/32t on AM4 is too much for Threadripper? But if so, then AMD should release a 24c/48t Threadripper. Even if they don’t have enough for the highest bin yet, selling 4x dies per Threadripper at slightly higher margins surely has to be profitable?

        • Krogoth
        • 7 months ago

        AMD is still probably validating the PCIe 4.0 implementation for the platform that will drive Threadripper 3 and onward. They are probably touching up the PCH side with more HEDT goodness, too.

        • freebird
        • 7 months ago


        [quote<]Frankly, I’m more surprised at the lack of Threadripper than anything. Perhaps 12c/24t with potential of 16c/32t on AM4 is too much for Threadripper?

        But if so, then AMD should release a 24c / 48t Threadripper. Even if they don’t have enough for the highest bin yet, selling 4x dies per Threadripper at slightly higher margins surely has to be profitable?[/quote<]

        Selling those chips in TR3 would be more profitable, but that’s not very important overall; the volume of TR is so low that it shouldn’t play much of a role in overall revenue. On the other hand, maybe AMD needs all the chips it can get just to ship enough Ryzens for the launch SKUs, so they don’t sell out and leave shelves empty for weeks at a time. That would leave a very bad taste, because once AMD decided to launch at Computex, people who really want Ryzen 3000 will wait for it; they won’t be buying Ryzen 2000 as a holdover. I think AMD is trying to balance the new releases against supply and anticipated demand. TR3 has to take a back seat, along with the 16-core Ryzen; chips good enough for those two will go to EPYC 2, is my guess, at least until inventory builds up.

      • Chrispy_
      • 7 months ago

      1. At 65W/8-core, 130W is reasonable for a consumer-size socket.

      2. AMD already solved this with the 2990WX. Scaling isn’t perfect but it’s good.

      3. I think they’re holding the best bins for EPYC servers, not low-profit consumer parts. At the consumer level, it looks like their $499 Ryzen 9 will beat a $1200 Skylake-X, so why waste dies that could be selling for $2500 in servers on consumer chips that might sell for $650 or so?

      4. No, the best yields with all cores defect-free are the best-clocking, most efficient ones too. That’s how it’s always worked.

      5. Correct. Intel’s HEDT (Skylake-X i9) platform is just leftover scraps from their server market. Until the server supply exceeds demand, AMD will be pushing Rome EPYC chips. I would expect 16C Ryzen 9 and possibly Threadripper 2 announcements once yields are up and server demands have subsided a little.

      • Wirko
      • 7 months ago

      14-core or 10-core CPUs might be technically possible too, and AMD could take advantage of that.

      Edit: CPUs, not chips.

      • Zizy
      • 7 months ago

      I have two different reasons

      6. Seeing 16C with meh clocks would lead to speculations AMD will again only compete with more cores and offer worse ST performance. Especially as the 12C is said to win against Intel’s 12C, but Intel’s 16C has a large enough TDP advantage it would surely win.

      7. AMD has top 6C part at 250$, 12C part is at 500$. Same progression gives 800$ for the top 16C. This is insanely expensive for a consumer platform. People already moan about high prices, imagine seeing that sticker shock …

      However, both problem points vanish once the other parts are released and people know the benchmarks of the chips.

      As for your reasons:
      1.) TDP can be 125W or even 140W just fine. It won’t work on all cheap boards but on enough of them it shouldn’t be a problem.
      2.) Maybe, we need benchmarks. L3 cache is pretty large so it might not be a huge problem.
      3.) Agreed.
      4.) 16C should be slightly ahead in MT and win ST as well. But there is comparison with Intel’s 16C AMD could be concerned about.
      5.) Maybe.

      • Freon
      • 6 months ago

      1. Power management is a thing. It would just mean a lower base clock. We already see this when we go from 4 to 6 to 8 to n core CPUs. Seems irrelevant.
      2. This seems very application-specific. There are plenty of apps that run just fine with single-channel memory despite using 6- or 8-core CPUs. It’s not tested often, but you can find plenty of sources.
      3. Possible.
      4. a) It’s already true that more cores don’t always scale for all applications; that’s nothing new and it won’t change. b) It seems Intel and AMD are both getting pretty good at getting single-core boosts on higher-core-count parts approximately equal to the lower-core-count parts. If the 16-core part only boosted to 4.4 GHz I don’t think it would be a huge deal.
      5. Possible.

      I tend to agree with other posts, they don’t want to cut too deep into Threadripper, but does the 12 core part not already use two 8-core dies? I guess I’m still a bit fuzzy on what each SKU actually has under the lid.

    • chuckula
    • 7 months ago

    We’re so screwed that we don’t even know what to cancel over here!

      • NTMBK
      • 7 months ago

      The 9900KS just cancelled power limits!

        • cynan
        • 7 months ago

        Hey! They’re not allowed to steal AMD’s signature process/architecture inferiority compensation move. It’s even trademarked:

        “AMD TPU SHMEE-P-U dynamic thermal overclocking[super<]TM[/super<]. Others may bring the hurtz. But we bring the heat!" Now available on most Radeon GPUs and pre-Ryzen 3 CPUs.

          • auxy
          • 7 months ago

          I upvoted you for the effort even though your execution was bad. You tried too hard. The first line was enough for the joke. (‘Ο‰’)

        • jihadjoe
        • 7 months ago

        Does it steal my kills as well?

          • just brew it!
          • 7 months ago

          Yes. It also hits on your daughter/sister/mom.

        • Neutronbeam
        • 7 months ago

        But did it cancel the apocalypse? Well, did it?

      • Krogoth
      • 7 months ago

      Intel isn’t screwed, though. They’re just playing catch-up, and it is going to be painful in the near term.

      Intel will be back with a vengeance and will not be on a Skylake derivative either.

        • Wirko
        • 7 months ago

        Unless they’ve learned how to make logic based on XPoint, it will most surely be a Skylake derivative. Just (much) more of the same.

          • Mr Bill
          • 7 months ago

          Or they’re going to make logic based on PowerPoint.

      • Srsly_Bro
      • 7 months ago

      I’m ready for the July cancelled product but it’s too soon to post.

      • K-L-Waster
      • 7 months ago

      Just go Oprah. “You get a cancellation! And you get a cancellation! Everyone gets a cancellation!!”

      • freebird
      • 7 months ago

      Looks like Intel might be able to cancel their “Intel Inside” marketing campaign… it might have a negative effect… at least they will save on marketing $$$.

      Question is how soon until we start seeing “Powered by AMD” ads?

        • JustAnEngineer
        • 7 months ago

        20 years ago, with Stuart Pankin:
        [url<]https://www.youtube.com/watch?v=MK0hU0OYvCI[/url<]

      • Mr Bill
      • 7 months ago

      Made me laugh.

    • DancinJack
    • 7 months ago

    Also, it PISSES me off to no end when people attempt to be precise by adding the time zone to the hour (in the last image above for Navi, it says 3 PM PST), but then completely blow it by not knowing that daylight saving time is in effect. PISSES me off.

    Hi we’re AMD we don’t do time lol

    (also they’re very much not the only big corp to do this, tons of people shoot themselves in the foot with exactly the same issue and I WISH THEY WOULD STOP IT)

      • JustAnEngineer
      • 7 months ago

      Isn’t Computex on Taipei Standard Time (UTC+8) ? Taiwan doesn’t have daylight savings time.

        • DancinJack
        • 7 months ago

        While I think that’s true, they put PST in the image. Unless they ACTUALLY mean PST and not PDT, they are doing it wrong.

      • Freon
      • 7 months ago

      The world should standardize on UTC for everything, everywhere.

        • DancinJack
        • 7 months ago

        agree
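      For what it’s worth, the PST-vs-PDT mix-up above is trivial to check programmatically. Here’s a minimal Python sketch (the specific date and the `America/Los_Angeles` zone are my assumptions for illustration) showing why a late-May time labeled “PST” is really UTC-7:

      ```python
      from datetime import datetime, timezone
      from zoneinfo import ZoneInfo  # stdlib in Python 3.9+, uses the tz database

      # Hypothetical slide time: "3 PM PST" on a late-May 2019 Navi teaser.
      # Late May is daylight saving time in California, so the zone is really PDT.
      pacific = ZoneInfo("America/Los_Angeles")
      slide_time = datetime(2019, 5, 27, 15, 0, tzinfo=pacific)

      print(slide_time.tzname())  # the tz database says PDT, not PST
      print(slide_time.utcoffset().total_seconds() / 3600)  # -7.0, not the -8.0 that "PST" implies
      print(slide_time.astimezone(timezone.utc).isoformat())  # unambiguous UTC timestamp
      ```

      Publishing the UTC timestamp in the first place, as Freon suggests, sidesteps the whole problem.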

      • LoneWolf15
      • 7 months ago

      Hyperion Corporation advises you their Chill Pills are what you need.

      • albundy
      • 7 months ago

      if you have an issue, grab a tissue.

        • Srsly_Bro
        • 7 months ago

        You got some sick rhymes dawg

      • crabjokeman
      • 7 months ago

      MORE CAPS LOCK WILL HELP GET YOUR POINT ACROSS

    • DancinJack
    • 7 months ago

    AMD created slides with AMD chosen benchmarks, i give zero poops.

    I’ll be happy to see Ryzen 3xxx vs Ice Lake this winter though.

      • jarder
      • 7 months ago

      No problem, we only have to wait until the 7th of July to get the real benchmarks.

      And Ice Lake this winter? From what I’ve read there will only be a few low power laptop SKUs for the tail end of this year.

        • DancinJack
        • 7 months ago

        yup, hopefully NDA is down July 7 and we get real info.

        And yeah, it could be early 2020 before we get real Ice Lake desktop parts. I suppose we don’t really know for sure. That’s fine with me though, I wasn’t planning on buying either part in 2019.

        In any case, winter does extend all the way to late March of 2020 so I stand by my statement. πŸ™‚

        edit: forgot a word somehow.

      • madseven7
      • 7 months ago

      Yet you start dancin’, Jack, when Intel demoed the 9900K beating Zen, but forget to mention that it was cooled with a FREEZER..lol

        • DancinJack
        • 7 months ago

        That’s not what we’re talking about? Intel marketing can bite my shorts too. Without independent benchmarking, i give zero poops, still.

      • LoneWolf15
      • 7 months ago

      A little fiber is sure to help you there.

        • DancinJack
        • 7 months ago

        πŸ™‚

      • Redocbew
      • 7 months ago

      You need no better example than the PCIe bandwidth test they did to show how many grains of salt we may need here. To be fair, some points were more interesting than others, but this one was ridiculous, misleading, pointless, and par for the course no matter who was on stage.

      The upcoming CPUs may be good, but we won’t know until there’s been independent testing.

        • DancinJack
        • 7 months ago

        finally a voice of reason and logic. my fren.

      • arbiter9605
      • 7 months ago

      Yeah, even the gaming ones. Six games they listed: CS:GO, which my 6-year-old machine could run at 200 fps at 4K; LoL and Dota, which don’t need much hardware; PUBG, which is just a badly optimized game, so fps numbers on it are worthless no matter what system it’s on. Overwatch and GTA V would be the heaviest games they listed, and of those two GTA V is the most demanding, and even then most hardware from the last 3-4 years has no issues with it. They should have used more current, demanding games, maybe AC Odyssey and The Division 2 to start with, as those are newer and more demanding. There are plenty more games they could have used to stress it, but they didn’t.

      • ptsant
      • 7 months ago

      Hey, would you expect AMD to ask Intel to choose the benchmarks? Has this ever happened?

      There are multiple reasons to believe that the improvement will be significant, and without believing a single AMD slide.

    • Krogoth
    • 7 months ago

    Getting “K8 versus Netburst” round 2 vibes here

    Also, Navi = a cheaper Vega 56/64 replacement that might force the 2070, 2060, and 1660 to come down in price.

      • DancinJack
      • 7 months ago

      [quote<]Getting "K8 versus Netburst" round 2 vibes here[/quote<] please elaborate

        • Krogoth
        • 7 months ago

        The Lake dynasty runs hotter/eats more power than the new round of competition. It is running up against the hard limits of its fab node and architecture, like Netburst in its waning years.

        The Zen 2 family will begin to snatch marketshare away from Intel’s more lucrative markets (SMB/Enterprise), like K8 did back in its heyday against the Netburst dynasty.

        IMO, Intel needs to go back to the drawing board if they seriously want to overturn the momentum the Zen dynasty has been building. They could also patch up/address those nasty hardware-level exploits that are plaguing their SMB/Enterprise markets.

          • DancinJack
          • 7 months ago

          Wait, a years old CPU design on a years old CPU manufacturing process burns up more power than brand new (to be released) CPUs that you have zero actual power benchmarks on? Shocking.

          It’s almost like you’ve not been paying attention to anything Intel has put out over the past year about Ice Lake. But that’s fine. It’ll be a fun winter.

            • Krogoth
            • 7 months ago

            Ice Lake is going to be more evolution than revolution. I wouldn’t be surprised if it ends up being the “Cedar Mill” of the Lake dynasty.

            You don’t seem to understand how devastating Intel’s loss in the foundry wars really is. It will likely take Intel several years to actually recover from the mishaps of their 10nm process.

            • DancinJack
            • 7 months ago

            Again, I don’t know how you even know these things? Neither CPU is out. All you have is slides from both companies. You continue to make claims with zero actual evidence for some reason and I’m not sure why.

            • Spunjji
            • 7 months ago

            The original post talked about “vibes”; you’re demanding an unusually high standard of evidence for a gut feeling. It’s essential to match your requirement for evidence to the strength of the claim being made. Here’s the data that leads me to agree with Krogoth:

            – Intel just announced the 5 GHz all-core i9-9900KS, squeezing every last drop of performance from their current architecture and manufacturing process with little regard for power draw; that sounds distinctly Netburst/Bulldozer-y to me. We already have the Xeon W-3175X stretching that paradigm to its most comically absurd conclusion.

            – Since original Zen was announced, AMD have maintained a solid track record of providing early performance claims that the product lives up to. You’re right that we need independent testing, but what we have is certainly good enough for a hunch.

            – Intel just released info about the 15W version of Ice Lake, but have given nothing on the desktop variants. Intel’s silence on an upcoming product – especially at a time ripe for “competitive analysis” – is *always* meaningful.

            It’s not all doom and gloom for Intel, just like it wasn’t at the end of the Netburst era. They’re about to get their nose bloodied again, though.

            • Anonymous Coward
            • 7 months ago

            I like the bit about proportionality in arguments.

            • DancinJack
            • 7 months ago

            yeah, except he provides ZERO evidence for it. spitballing from krogoth, as you may know, is constant and VERY rarely is he correct. He just spits out stuff with zero evidence all the time.

            I don’t think it’s doom and gloom for either company. Read the rest of my comments. All I ask is that people stop acting like Zen 3 is the second coming, or third, or whatever, and rely on actual testing rather than BS marketing slides and Krogoth’s usual BS.

            edit: if you guys think ANY evidence is demanding something “unusually high” then well, guys, i don’t even know how to talk to some of you I guess.

            • Krogoth
            • 7 months ago

            There is public evidence for it (albeit not that strong), and much more will be coming down the pipe in a few months. Are public demos no longer considered evidence now?

            Zen 2 is an exciting breath of fresh air, because the last five years have been relatively stagnant in the desktop world while Intel has been winging it with Skylake derivatives.

            • K-L-Waster
            • 6 months ago

            [quote<]Public demos are no longer consider to be evidence now?[/quote<] Nope. Not since every vendor stacks the deck in their demos. This is why no one should buy anything until they see independent reviews.

            • MOSFET
            • 6 months ago

            Dude, they’re just smaller, faster Ryzen cores, and demonstrably more of them. That’s pretty good evidence right there.

        • Sahrin
        • 7 months ago

        When Intel realizes they’re about to fall behind in important performance categories, they release what’s called an “Emergency Edition” CPU, named after the Pentium 4 Emergency Edition, which was the first 4.0 GHz CPU and could boil up to 5 tons of water to steam per minute.

          • Spunjji
          • 7 months ago

          i9-9999+++ KYS πŸ˜€

          • techguy
          • 7 months ago

          P4 EE never hit 4GHz. If you’re going to bash them at least do it right.

      • NTMBK
      • 7 months ago

      Nah, Skylake is still a fundamentally great architecture, it’s just hamstrung by being stuck on 14nm for 4 years. Netburst was Intel committing architectural suicide.

        • Krogoth
        • 7 months ago

        It is running into the limits of monolithic CPU designs going into the beginning of an era where chiplets and ASIC modules will begin to dominate enterprise/SMB markets. Monolithic designs will remain in embedded and lower-end segments of the computing market where the demand for computing power has already plateaued.

        Intel has been going in the chiplet/layering direction too; it’s just that AMD managed to get ahead of them. Intel will most certainly catch up.

          • Spunjji
          • 7 months ago

          I’d go as far as to bet on them overtaking, given their colossal resources and the fact that they’ve already been practicing with EMIB in shipping products.

        • JustAnEngineer
        • 7 months ago

        Skylake was a great architecture and 14nm was a great process [b<]four years ago[/b<]. What have you done for me, lately, Intel?

          • bhtooefr
          • 7 months ago

          I’m even gonna argue against Skylake being a great architecture, seeing how it’s been getting slower. Computers getting slower over the years is supposed to come from increasing performance demands from software, along with cruft in a Windows install. It’s not supposed to come from CPU security vulnerabilities requiring microcode patches and the disabling of major performance features of the CPU.

          Meanwhile, Zen’s been getting faster.

      • LoneWolf15
      • 7 months ago

      While I think DancinJack’s position is highly suspect (and perhaps a little fanb0i), I don’t think the comparison is very fair, considering how poor Netburst was in the IPC game. Coffee Lake R, in comparison, is quite good at it, even though it could improve in performance-per-watt.

      Cedar Mill did greatly improve performance-per-watt thanks to the improved die process; however, it didn’t change the fact that the IPC wasn’t there. Clock speeds had to be high with Netburst to achieve its performance, which is why the Core architecture replaced it.

      I’m very happy that it looks like Ryzen 7 3xxx will provide much-needed competition, and I love what AMD is doing (and will cheer them on). At the same time, I don’t think your 9700K or 9900K are in danger this generation; we’ll need to wait to see what the next iteration from both sides brings. I’m also very happy to see AMD making solid chipsets; for a number of years, I saw chipset and driver support as more of an Achilles’ heel than the processors themselves, hurting their chances in the business market.

        • srg86
        • 7 months ago

        Their chipsets (even going back to the 760 for the Athlon) always gave me trouble. It’s one of the reasons I’ve had such a better experience with Intel (AMD performance was still very good, though).

          • just brew it!
          • 7 months ago

          Stability of the chipsets did improve after they switched to ATI-based designs. But they continued to suffer from stale chipsets with outdated feature sets, forcing mobo makers to use 3rd party add-on controllers (e.g. for USB 3.0), and ancient Radeon 3000-based IGPs lingered into the AM3+ era.

          It is good to see them staying on top of their platform game these days.

      • albundy
      • 7 months ago

      knowing nvidia, not likely. also, paper releases dont mean squat. i’ll wait for the real world benchmarks.

      • Srsly_Bro
      • 7 months ago

      Then comes Intel with the price fixing and bribes round 2.

        • DancinJack
        • 7 months ago

        v good comment bro

      • srg86
      • 7 months ago

      I’m not getting that at all. I’m going to make a counter and say Sunny Cove could be the next Sandy Bridge (or at least Haswell). At least we’ll have two good new archs coming along (Sunny Cove vs Zen 2) and not one good and one bad (thinking Netburst/Bulldozer for the bad).

      • FuturePastNow
      • 7 months ago

      While Zen 2 definitely gives me the same cozy feeling as the Athlon 64, Intel isn’t selling anything as bad as the P4 now.

        • Krogoth
        • 7 months ago

        SMB/Enterprise customers aren’t feeling the same way, given the growing list of hardware-level security holes, the mitigations that eat up performance, and the downtime to implement them.

    • tipoo
    • 7 months ago

    I found it interesting that SuBae name-dropped Mark Cerny and team as a “key revolutionary” for the Navi architecture. If it’s true that they had substantial architectural input, the rest of this could well be true too.

    [url<]https://www.forbes.com/sites/jasonevangelho/2018/06/12/sources-amd-created-navi-for-sonys-playstation-5-vega-suffered/[/url<]
