Cores, clocks and memory: a Ryzen 3000-series rumor roundup

How was your weekend, gerbils? I spent a huge portion of mine without internet access after spending the majority of Friday without power thanks to hurricane-force storms on Thursday. Today I'm back in business though, so let's talk about some hardware. Specifically, let's talk about AMD's upcoming third-generation Ryzen processors.

AMD CEO Lisa Su previewed the still-upcoming Ryzen 3000 CPUs at CES.

For those who don't know, the extant Ryzen CPUs are all based on fundamentally the same microarchitecture. The Zen and Zen+ cores used in Ryzen 1000- and 2000-series chips aren't identical, but we understand the incoming Zen 2 design to be radically different from its predecessors. Rather than having all of the CPU's parts on one big die, Zen 2 will be fabricated using separate dice for the CPU cores and for what Intel would call the "uncore"—comprising the high-speed I/O connections, the memory controller, and so on.

It's perhaps a bit ironic that the company which originally introduced the integrated memory controller in desktop CPUs would be de-integrating it in this way. Make no mistake, though—it's still part of the CPU, just a separate chip. Splitting things out into "chiplets" this way should allow AMD greater flexibility in terms of core configuration and product segmentation. It also helps alleviate overstressed 7nm foundries.

All of that was known, though. What you may not have known is that renowned Thai leaker APISAK apparently got his hands on a Zen 2-based engineering sample with 16 cores. Rumors have swirled for some time now that AMD's next-generation desktop chips would stuff 16 cores into the same AM4 socket used on existing motherboards, but this would seem to be the most solid confirmation of such a thing that we've seen.

The leaker's tweet goes on to indicate that the chip has a base clock of 3.3 GHz and a boost clock of up to 4.2 GHz. Neither of those numbers sets our hearts alight, but recall that engineering samples of the original Ryzen clocked some 800 MHz slower than the top-end chip that eventually shipped. These engineering samples could likewise be a lot slower than the final third-generation Ryzen CPUs.

The different dies on the Ryzen 3000 chips are all but assuredly still connected using Infinity Fabric (IF). That's the same interconnect that AMD uses currently to connect the "core complexes" (CCXes) on existing Ryzen processors, as well as the different processor dies on Threadripper and EPYC CPUs. Ryzen's memory controller and IF links share a clock domain. That's the main reason that Ryzen struggles to hit high memory clocks in comparison to the competition, and if TechPowerUp's Yuri "1usmus" Bubliy is to be believed, that's going to continue to be the case on Ryzen 3000-series CPUs.

If anyone would know, it's the guy who wrote Ryzen Timing Checker—oh, that's 1usmus.

The site goes on to say that Zen 2-based processors installed in motherboards with 500-series chipsets will support a divider mode that runs the on-package IF links at just one-half the rate of the memory clock. In other words, taking your RAM all the way up to 5000 MT/s will apparently only require running the IF link at 1.25 GHz, rather than 2.5 GHz. If that's correct, it should be much easier to reach stratospheric memory clocks on the next-generation Ryzen CPUs. We'll be interested to see what such a setting does to memory latency and multi-core performance.
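
If you want to play with the numbers yourself, here's a quick, illustrative Python sketch of the divider math. The doubling of transfer rate versus clock is simply how DDR4 works; the 1:1 and 2:1 ratios are the rumored divider behavior described above, and the helper name is ours, so treat the results as speculative.

def if_clock_mhz(mem_mt_s, divider=1):
    """Infinity Fabric clock (MHz) for a given DDR4 transfer rate and IF divider."""
    memclk = mem_mt_s / 2  # DDR4 transfers data twice per clock, so MEMCLK is half the MT/s figure
    return memclk / divider

for mt_s in (3200, 3600, 5000):
    print(f"DDR4-{mt_s}: MEMCLK {mt_s / 2:.0f} MHz, "
          f"IF at 1:1 = {if_clock_mhz(mt_s, 1):.0f} MHz, "
          f"IF at 2:1 = {if_clock_mhz(mt_s, 2):.0f} MHz")
# DDR4-5000: MEMCLK 2500 MHz, IF at 1:1 = 2500 MHz, IF at 2:1 = 1250 MHz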

Finally, on the topic of 500-series chipsets and the motherboards they go on: just today, Biostar posted up the latest version of its product catalog (PDF). In said catalog, the company lists a very interesting motherboard: the Racing X570GT8. It doesn't take a genius to realize that this motherboard is apparently based on the as-yet-unreleased X570 chipset from AMD.

Overall, the specifications on the board aren't too different from the previous-generation Racing X470GT8 listed right alongside, but a few things stick out. Biostar says the new board will take its memory up to 4000 MT/s, which is not bad at all. The board will also have three M.2 sockets—a real rarity on Ryzen boards—and apparently, they'll be wired up with four lanes of PCI Express 4.0. That could be a typo, but it's been expected for some time that Ryzen 3000 chips would have PCIe 4.0 connectivity. It's not impossible that the new chipset does as well.
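
For context on what those four lanes would be worth, here's a rough Python comparison of nominal PCIe 3.0 and 4.0 throughput per lane and per x4 link. The link rates and 128b/130b encoding are standard PCIe figures; real-world drives will land a bit below these numbers once protocol overhead is factored in.

# Nominal PCIe throughput after 128b/130b encoding (protocol overhead ignored).
ENCODING = 128 / 130
LINK_RATES_GT_S = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}

for gen, gt_s in LINK_RATES_GT_S.items():
    lane_gb_per_s = gt_s * ENCODING / 8  # usable GB/s per lane
    print(f"{gen}: x1 ~ {lane_gb_per_s:.2f} GB/s, "
          f"x4 ~ {4 * lane_gb_per_s:.2f} GB/s (~{4 * gt_s * ENCODING:.0f} Gb/s)")
# PCIe 3.0: x1 ~ 0.98 GB/s, x4 ~ 3.94 GB/s (~32 Gb/s)
# PCIe 4.0: x1 ~ 1.97 GB/s, x4 ~ 7.88 GB/s (~63 Gb/s)

The 32 Gb/s figure commonly printed next to M.2 sockets is the PCIe 3.0 x4 number; a PCIe 4.0 x4 link would roughly double it.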

Further reinforcing that idea is that the picture of the Racing X570GT8 appears to have a rather large heatsink with a fan over the motherboard chipset. All three of the board's M.2 sockets are visible, so that fan is just for the chipset. Now, we haven't seen active cooling on a motherboard chipset in quite some time, and we're not too keen on the idea. However, if the AMD-designed chipset's TDP is high enough to warrant active cooling, it may indeed support PCIe 4.0 as well. The relatively weak high-speed I/O on offer from AMD's X370 and X470 chipsets was a sore point for some users, so it's pretty exciting to think that X570 could leapfrog ahead in this way. Thanks to Videocardz for pointing out Biostar's gaffe.

Ryzen's launch made CPUs exciting again after years of stagnation in the early part of this decade, and Zen 2 looks to take things up another notch. While we are—as ever—keenly interested to see how AMD's Zen 2 processors fare in single-threaded workloads compared to their predecessors, it's hard not to get excited about the idea of having sixteen CPU cores handling 32 threads at once without plunking down for an expensive HEDT motherboard. We expect AMD to announce the third-generation Ryzen CPUs at Computex starting on May 28, and hopefully products will actually launch by the time of E3 on June 11.

Comments closed
    • anotherengineer
    • 6 months ago

    One thing I have noticed about the first-gen Zen boards: it seems Gigabyte was lacking in BIOS memory compared to other manufacturers and had to cut out some support.

    see F30
    [url<]https://www.gigabyte.com/Motherboard/GA-AX370-Gaming-K3-rev-10#support-dl-bios[/url<]
    Seems odd though, wouldn't a chip be 8MB or 16MB?

    • wierdo
    • 6 months ago

    Thanks for the updates Zak, and stay safe while mother nature’s running on an overclocked climate!

    • anotherengineer
    • 6 months ago

    I know the PCI slot is archaic, but I wish the mobos had one for my sound card.

      • JustAnEngineer
      • 6 months ago

      [url=https://www.amazon.com/Sienoc-3-3Vaux-Version-Express-Windows2000/dp/B00NV0EJXU/?tag=techreport09-20<]$14[/url<] or [url=https://www.newegg.com/Product/Product.aspx?Item=9SIA6V85DD5236<]$16[/url<] can add an obsolete PCI slot to a modern motherboard.

      • DoomGuy64
      • 6 months ago

      Or just upgrade to pci-e or usb. I have a strong suspicion most PCI soundcards have limited driver support on w10 anyway, as even older pci-e cards like the xonar have poor support, and there are plenty of good cheap working pci-e soundcards on ebay, like the OEM x-fi. That said, I don’t dislike USB devices, but decent ones seem to be vastly overpriced and overrated.

      USB DACs seem to be the worst of that bunch too, as there are multiple instances of companies rebranding a cheaper model as a more expensive device, and even after that is revealed some people still defend it. Audiophiles have completely ruined the market with overpriced snake oil, and it is a damn chore to research what device is actually worth using. So I stick with cheap used soundcards and know what I’m getting, plus better 3d via drivers. :p

    • Aranarth
    • 6 months ago

    This is interesting: [url<]https://wccftech.com/amd-12-core-zen-2-ryzen-cpu-engineering-sample-benchmark-leaked-higher-ipc-vs-threadripper-1920x/[/url<]
    Looks like memory and cache latency are much better. IPC is definitely up, though we don't know by how much. I can't wait till we have actual chips to test out!

      • Anonymous Coward
      • 6 months ago

      I haven’t read the link, but theoretically speaking, why should we expect lower latency? I suppose if Threadripper is your point of comparison and we consider remote accesses, then it seems obvious. However I’d be pretty impressed to see AMD reduce latency on local RAM access while also going to chiplets (while also holding costs and power efficiency etc).

      That team can do no wrong, these days, it seems.

        • Aranarth
        • 6 months ago

        Here is a shot someone put together comparing the 2700x to the ES sample.

        [url<]https://uploads.disquscdn.com/images/ed3eec137093b646f40f8d802594e85dca98be7a0392b3b838745af3d647ad3e.jpg[/url<]
        The gif is quite a distance down in the comments for the article I linked. Trouble is, there are no further details on the 2700X being used in the graph.

          • Anonymous Coward
          • 6 months ago

          Looks like they are using their L3 as a unified cache, not two separate L3s. Perhaps the spike around 32MB could be due to queries to the L3 on the other CPU die. (Assuming the L3 caches are on the CPU dies … I feel kind of bad I haven’t tried to research that.)

    • msroadkill612
    • 6 months ago

    “The board will also have three M.2 sockets…and apparently, they’ll be wired up with four lanes of PCI Express 4.0. That could be a typo, but it’s been expected for some time that Ryzen 3000 chips would have PCIe 4.0 connectivity.”

    Yes, but they also list them as 32Gb/s – which afaik is the pcie 3 speed. It may just mean it is pcie 4 compatible – as in, you can run a pcie 4 device on it, but not at pcie 4 speed.

    That seems most likely – why bother specifically specifying the data rate otherwise?

    “Further reinforcing that idea is that the picture of the Racing X570GT8 appears to have a rather large heatsink with a fan over the motherboard chipset. All three of the board’s M.2 sockets are visible, so that fan is just for the chipset.”

    hmm… – just the extra ports and upping the chipset’s pcie 2 lanes to pcie 3 would add a lot of heat to the chipset from increased data traffic.

    I am not saying the major pcie slots on the mobo won’t be true pcie 4, but as above, I have my doubts about the chipset’s lanes and ports.

    Further, chipset ports tend to be more remote from the center, and pcie 4 traces require a complicating booster beyond 7 inches.

    • audentity
    • 6 months ago

    Yikes @ all those red sparks and arcing in the picture of AMD Series Motherboards! That’s surely gonna play havoc with other components!?!

    I guess these are only engineering prototypes, so I hope AMD can solve these electrical issues before launch, as I’m really hoping to build a new office machine to replace my Pentium IV (not even kidding!).

    Oh, first post BTW from this long time lurker 🙂

      • K-L-Waster
      • 6 months ago

      That’s why they put all those armored bits on the mobos — they say they’re “heat sinks”, but they’re actually arc shielding.

        • Anonymous Coward
        • 6 months ago

        Good call.

      • Krogoth
      • 6 months ago

      (TR Gerbils enter Chuck’s inner sanctum with Krogoth in the lead)

      Chuck notices and replies: Master Krogoth, I take it that AMD has launched Ryzen 3xxx sooner than expected

      Krogoth: In the name of the online forums, I’m placing you under arrest, Chuck

      (Gerbils prep their banhammers)

      Chuck: Are you threatening me Master Gerbil?

      Krogoth: The mods will decide your fate.

      Chuck: I AM THE FORUMS!

      Krogoth: Not yet

      Chuck: It’s shilling then.

      (Chuck unleashes red-colored shill lightning and troll bait)

    • psuedonymous
    • 6 months ago

    Northbridge: Check! (though it’s on-package so a shorter FSB/EV6 with greater bandwidth, but the function of the chip is the same)
    Beefy actively cooled Southbridge: Check!
    PS/2: Check! (Still hanging in there!)

    Bring on the slot-mounted CPUs and off-die cache!

      • NTMBK
      • 6 months ago

      Off-die cache would actually be pretty sweet for the APUs…

        • psuedonymous
        • 6 months ago

        Haswell had Crystalwell as an on-package off-die cache, but L4? Bah! L1 or bust!

        • Anonymous Coward
        • 6 months ago

        I want to see an option where one of the two CPU chiplets is swapped for a block of memory. Surely that would be good for something.

          • freebird
          • 6 months ago

          I’ve been waiting for on-package memory for a while.

          No worries, die stacking with memory is only a couple of years away…
          maybe 2020 holiday season.

      • chuckula
      • 6 months ago

      We here at Intel are declaring bankruptcy…… CHECK!

      (I think I may move to using check instead of confirmed)

      • audentity
      • 6 months ago

      [quote<]PS/2: Check! (Still hanging in there!)[/quote<]
      Yay for us Model M users 🙂

        • Shobai
        • 6 months ago

        Huh. Mine’s DIN.

      • Gastec
      • 6 months ago

      Check a couple of PATA sockets too and we’re good to go!

    • crabjokeman
    • 6 months ago

    Oh no, not active mobo chipset cooling. Are we going to have to go back to the days when we had to choose between buzzy 40-mm fans and large heatpipes?

      • JustAnEngineer
      • 6 months ago

      I remember sticking a [url=http://www.thermalright.com/product/hr-05-sliifx/<]Thermalright HR-05-SLI/FX[/url<] on an NForce2 chipset, back in the day.

        • crabjokeman
        • 6 months ago

        Yeah, I remember replacing the fan on my nForce4 board with some kind of bigger passive heatsink, but wow, did that thing get hot.

        • derFunkenstein
        • 6 months ago

        I went with something a little less enormous. Might have been [url=https://www.amazon.com/Zalman-ZM-NB47J-Fanless-Northbridge-Heatsink/dp/B000292DNQ<]this exact Zalman heatsink[/url<] or at least something very like it. I definitely remember those swinging adjustable arms and the blue color.

          • JustAnEngineer
          • 6 months ago

          I had a couple of those Zm-NB47J coolers, too, but I had to cut off a few pins to fit under the graphics card, and they never worked as well as the awesome Thermalright cooler did.

        • plonk420
        • 6 months ago

        yeah, i’ve been spamming reddit with the IFX and/or SLI and how i had to put it on my shitty MSI X58 that would hit 100C and shut down at load (not even OCing)

      • Krogoth
      • 6 months ago

      It is probably because of the PCIe 4.0 controller in the southbridge/PCH. That extra bandwidth comes at a cost.

      PCHs/southbridges have been running on the toasty side for both Intel and AMD for a while now.

        • freebird
        • 6 months ago

        In a Buildzoid video on the mobo, he claims it’s only under certain circumstances, probably when several or all of the NVMe slots/PCIe lanes are being utilized. He also claimed that some lower-end X570 boards won’t have fans.
        [url<]https://www.youtube.com/watch?v=w5UrTzMg2RU[/url<]
        I guess all of that could be assumed: when the PCIe 4.0 lanes on the chipset are used, it might get toasty. We’ll have to wait and see how the thermals play out, and maybe someone can find out what process the chipset is built on. It leads you to wonder how much of the TDP is going to be used up by the Ryzen 3000 I/O chiplet, and whether OCing will be more successful with less activity on the PCIe 4.0 lanes.

        • sconesy
        • 6 months ago

        Out of curiosity, do you know any temps? And what is the threshold of dangerously toasty? This active cooling strikes me as overkill, like m.2 drive heatsinks for SATA drives. Sure, they’re hotter than 2.5″ drives but you have to really hustle to get to the danger zone. I know nothing about where the chipset might have trouble, though.

          • Krogoth
          • 6 months ago

          FYI, according to the sensors on my previous Z77 PCH and my current Z390 PCH, they hover around 40-60°C. They are equipped with large, flat, but simple heatsinks. It isn’t too much of a stretch that PCIe 4.0 may push things into active-cooling or heatpipe territory. PCIe 4.0 operates at double PCIe 3.0’s frequency.

      • audentity
      • 6 months ago

      [quote<]active mobo chipset cooling[/quote<]
      Where is the facepalm button? 🙁 Design flaw, clearly not using large enough heatsinks.

        • JustAnEngineer
        • 6 months ago

        Apparently, the AMD-designed X570 chipset has a 15-watt TDP vs. 5-watt for the ASMedia-designed X470.
        [url<]https://www.techpowerup.com/255729/amd-x570-unofficial-platform-diagram-revealed-chipset-puts-out-pcie-gen-4[/url<]

    • cygnus1
    • 6 months ago

    I wonder if, in the PCIe 4.0 generation/era of motherboards, we are finally going to start seeing NICs faster than 1G become more commonplace. It would be pretty nice, and a single 10G port could reasonably be handled by a single PCIe 4.0 lane.

    edit: I’m talking about eventual integration on low-end and mid-level consumer boards. I think needing only one PCIe lane is a big factor in what kind of NIC gets integrated on those.

      • Redocbew
      • 6 months ago

      A PCIe 3.0 x1 link can already do roughly one gigabyte per second, so I don’t think the lack of 10G networking on the desktop is a bandwidth thing.

        • chuckula
        • 6 months ago

        Yeah, it might make a marginal difference when adding a bunch of high-end NICs to a system, since you could, for example, have twice as many 100-Gbit NICs with the same number of PCIe 4.0 lanes vs. 3.0. But PCIe 3.0 is doing nothing to hold back the spread of 10-Gbit NICs.

          • cygnus1
          • 6 months ago

          But it does. You just don’t see 10G NICs on a single PCIe 3.0 lane. For an integrated NIC on a consumer board, I think needing only one lane is probably a big design consideration on lower-end and mid-level motherboards.

        • cygnus1
        • 6 months ago

        That’s true regarding the speeds, but I disagree. The ~1 GB/s of one PCIe 3.0 lane is less than 10 Gbps (1.25 GB/s) in a single direction. I believe that’s why all the motherboards I’ve seen with integrated 10G NICs use more than one PCIe 3.0 lane to feed them. It would be a really bad look to include a NIC that could end up dropping packets just because of a bottleneck on the PCIe bus, and most consumer motherboards don’t have tons of PCIe lanes to dedicate to NICs. But if a single lane can service a 10G NIC, I think it gets more likely to be included as basically a drop-in replacement for a 1G NIC. Obviously heat, power, and cost have to be considered for 10G hardware, but a motherboard wouldn’t have to be drastically redesigned to change its PCIe layout to incorporate it.
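
        Roughly, the numbers work out like this in Python (nominal line rate and 128b/130b encoding only; Ethernet framing and PCIe protocol overhead ignored, so take it as a sketch):

# 10GbE line rate vs. usable bandwidth of a single PCIe lane (rough figures).
NIC_LINE_RATE_GB_S = 10 / 8                # 10 Gbit/s -> 1.25 GB/s per direction
PCIE3_LANE_GB_S = 8.0 * (128 / 130) / 8    # ~0.98 GB/s usable per PCIe 3.0 lane
PCIE4_LANE_GB_S = 16.0 * (128 / 130) / 8   # ~1.97 GB/s usable per PCIe 4.0 lane

print(f"10GbE line rate: {NIC_LINE_RATE_GB_S:.2f} GB/s")
print(f"PCIe 3.0 x1:     {PCIE3_LANE_GB_S:.2f} GB/s  (falls short of 10GbE)")
print(f"PCIe 4.0 x1:     {PCIE4_LANE_GB_S:.2f} GB/s  (headroom for 10GbE)")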

          • Redocbew
          • 6 months ago

          Yeah, that’s true. Right now any desktop board that has 10G onboard is entirely an enthusiast thing, so that changes the picture a little bit.

          I think heat, power, and cost are a bigger barrier though. There’s plenty of good reasons to keep releasing new revisions of PCIe. That’s going to keep going on regardless. The same can’t really be said for 10G networking on the desktop. New controllers that fit the thermals and economics of a low/mid end PC don’t come for free either, and there’s no compelling use case that’s very widespread.

          • BobbinThreadbare
          • 6 months ago

          A NIC should not drop packets because the bus is saturated.

          Also, 2.5 and 5G standards exist; they could easily be doing those.

            • cygnus1
            • 6 months ago

            What else is a NIC supposed to do with full buffers and nowhere for the packets to go?

            Agreed on the 2.5G and 5G, those should be fine on PCIe 3.0. But I think those standards are new enough to just have not caught on yet.

            • BobbinThreadbare
            • 6 months ago

            It’s not supposed to request packets it can’t receive

            • cygnus1
            • 6 months ago

            Umm, ok. You do know not all packets coming into a NIC are “requested”, right?

            • Waco
            • 6 months ago

            Sure, but who uses UDP these days? TCP has a plethora of congestion control mechanisms to avoid dropping packets except in the most extreme cases.

            • Redocbew
            • 6 months ago

            Live broadcasting is one place where UDP still has some popularity. Lack of IP multicast, and re-transmission delays make TCP problematic for live events.

            • Krogoth
            • 6 months ago

            UDP is used for online gaming for twitch shooters because it is faster (less latency) and server/client prediction can clean-up the occasional dropped packet.

            • Waco
            • 6 months ago

            Are we pretending that 10G networking has anything to do with this?

            • Redocbew
            • 6 months ago

            We must be inventing whatever this “prediction” thing is also.

            • Voldenuit
            • 6 months ago

            Would you like to hear a TCP/IP joke? Yes/No.
            Yes.
            Confirm acknowledgement that you would like to hear a TCP/IP joke. Yes/No.
            Yes.
            Error protocol 504 gateway timeout. Would you like to hear a TCP/IP joke? Yes/No.

            WouldyouliketohearaUDPjokeTogettotheotherside.

            • Redocbew
            • 6 months ago

            You told a UDP joke, but I didn’t get it.

            • chuckula
            • 6 months ago

            I will not acknowledge your UDP joke.

            • Redocbew
            • 6 months ago

            You must re-tell the UDP joke. It’s important.

            • Mr Bill
            • 6 months ago

            “WhyDidTheChickenCrossTheRoad?”

      • Krogoth
      • 6 months ago

      Not going to happen anytime soon. The primary issue with mainstream adoption of 10Gbps Ethernet and beyond is media. You need at least STP or fiber if you want any significant distance with 10Gbps and beyond. The alternative is 802.3bz, a.k.a. 2.5Gbps/5Gbps Ethernet, however the spec is not even a year old yet so equipment for it is still kinda pricey/scarce.

      Besides, the masses have a far greater preference for wireless ethernet solutions.

      • Waco
      • 6 months ago

      Hell, I’d just be happy to see *anything* wired faster than 1G. 2.5/5G will be a godsend if switches would just drop in price.

      Unfortunately until that standard becomes ubiquitous 1G will be what we’re stuck with. Even at my tech-heavy job, I’m one of the few with wired networking at home. 🙁

      • Wirko
      • 6 months ago

      Faster than 1G also includes 2.5G and 5G, and I wonder why these two interfaces still only exist on a couple motherboards and network switches.

        • chuckula
        • 6 months ago

        The problem with those solutions is that the high-end is already way above those speeds and the low-end gets by fine with gigabit Ethernet. That’s not to mention the huge number of wireless devices or systems that primarily use Internet connections with < 1 Gbit of bandwidth anyway.

        2.5 and 5Gbit will become common when and if they ever become “free” as in built-in to motherboards by default and baked into consumer-grade switches by default.

        • Krogoth
        • 6 months ago

        It is because that spec was only finalized less than a year ago. It’ll start to trickle into the SMB market, which isn’t too keen on upgrading its entire wiring infrastructure to support 10Gbps and beyond. I suspect that in 2020 and onward, 2.5Gbps/5Gbps Ethernet will become an option found in most mid- to high-end-tier enthusiast motherboards.

      • leor
      • 6 months ago

      It’s actually closer to 5 years old…

      [url<]https://en.m.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T[/url<]

      • Leader952
      • 6 months ago

      Not likely, because the infrastructure for everything else would cost a fortune.

      Take a look at just the cost of 10G Switches:

      [url<]https://www.newegg.com/Product/ProductList.aspx?Description=10g%20switch&Submit=ENE&IsNodeId=1&N=100158106%20601295935[/url<]
      $538.90 for the cheapest new one.

        • Srsly_Bro
        • 6 months ago

        The biggest hurdle is getting companies to upgrade their PCs to use the benefits of 10G. Many large companies I’ve worked for in Seattle are long overdue in the PC/peripheral upgrade cycle, and my daily tasks don’t require much bandwidth. Why the push for 10G for most office tasks?

      • Bauxite
      • 6 months ago

      Dual 40GbE (and QDR/FDR infiniband) x8 3.0 cards have been on fleabay for years, down around $30 now if you look hard. QSFP cabling can be cheap depending on what you need. Mellanox has excellent drivers.

      Cheaper than doing copper 10GbE for home server stuff, an x4 3.0 1 port retail card is still at $100. BoM for putting those on boards (I think they tend to run on a x2) is likely $50 or so. No 4.0 chips out there yet.

      Highly recommend dual port so you can keep the actual data network as a P2P design. The switches are at a huge range of prices from a few hundred to insane, but keep in mind NONE of them are anything approaching quiet as all are 1U, some are downright jet engines.

        • Waco
        • 6 months ago

        Technically you could get pretty tricky and build a ring network with dual port cards…assuming all of your boxes are always on and they all support routing.

      • Gastec
      • 6 months ago

      “commonplace”… Try telling that to my boss; we’ve been hanging on to 10 Mbps network speeds for our mission-critical endeavors for more than a decade. If the NICs or the controller cards don’t just fry one day, then 15 years from now we’ll still be using them. But at least the whole factory will be painted in bright pink and blue tones.

    • hungarianhc
    • 6 months ago

    I’m more interested in Navi.

      • Goty
      • 6 months ago

      I was the same way until I heard the rumor that it is still based on GCN. My expectations have been greatly tempered since.

        • Waco
        • 6 months ago

        That’s like being sad that CPUs are still x86…

          • Srsly_Bro
          • 6 months ago

          x86 instruction set

          GCN architecture

          I think it is a little bit different, bro. Go ahead and try again.

            • Goty
            • 6 months ago

            To be fair, there is an associated instruction set.

            • Srsly_Bro
            • 6 months ago

            I think it’s fair to assume he was referring to a new GPU architecture. The context was overlooked.

          • chuckula
          • 6 months ago

          No, it’s more like saying that you’d be fine with a tweaked Pentium 4 vs. an Opteron since both use x86. GCN is kinda-sorta an instruction set, but is more importantly an architecture with plenty of baked-in limitations that Vega already ran into.

            • Goty
            • 6 months ago

            Yes, the four compute engine limitation being one of the major ones.

            • Waco
            • 6 months ago

            Yeah, that’s true.

            Architectures can be tweaked dramatically while still being largely derivative of their base, though.

            • blastdoor
            • 6 months ago

            Like how Core is based on the Pentium Pro, not NetBurst

            • Waco
            • 6 months ago

            Exactly.

            • Goty
            • 6 months ago

            AMD has actually addressed this and mentioned that it is possible to extend the 4 CE limit of GCN, but that it would require a tremendous amount of work. Given the apparent struggles RTG has faced recently with engineering capacity, I’m not expecting any miracles.

        • hungarianhc
        • 6 months ago

        It’s not going to change the world, but it looks like the price / performance sweet spot should be one that suits me.

          • Goty
          • 6 months ago

          That’s a good point to make. I’ve always argued that there are no bad products, only bad prices, but I would still like something a little more interesting than the nth iteration of GCN.

        • DoomGuy64
        • 6 months ago

        Consoles are GCN too. IMO, that and well-optimized dx12/Vulkan games show that AMD is more bottlenecked by their drivers. They may have improved a lot since windows 7, but their edge case scenarios are clearly inferior to nvidia’s. Multimon, crossfire, wattman, VR, mobile gaming, ignoring new hardware features in dx11 for dx12, etc.

        No new hardware will improve AMD’s driver deficiencies, which means only basic use will improve, and Nvidia will still be superior in every edge case. Not to mention anything not GCN will suffer GCN’s driver launch problems, and users will have to wait a year for the “fine wine” to kick in. No thanks, improving GCN is probably the best we can hope for from AMD at the moment, although I doubt we will see any extraordinary gains, much like VLIW4 was to VLIW5.

          • synthtel2
          • 6 months ago

          Nvidia doesn’t win all the edge cases. IME, AMD falls over on some recent Unity/Unreal stuff with tons of geometry and weak culling, but Nvidia falls over on a lot of DX9 stuff (the Nvidia fall-over being much harder as a percentage, so it’s annoying despite the games being theoretically weightless on modern hardware).

          How is Wattman an Nvidia win / AMD loss? Nvidia’s got nothing close. For that matter, what’s “ignoring new hardware features in dx11 for dx12” supposed to mean?

          • Action.de.Parsnip
          • 6 months ago

          Drivers are fine. AMD dx12 is more efficient than nv dx12 hence the discrepancy with dx11 vs dx12.

          The gcn problem is low clock speeds in absolute terms and low perf/watt *past a certain threshold*. That threshold being quite low.

        • cynan
        • 6 months ago

        The [url=https://wccftech.com/rumor-amd-navi-debuting-at-e3-launching-july-7-with-rtx-2080-class-performance/<]leaks/rumours[/url<] that purport Navi 10 performance to top out roughly in line with the RTX 2070, and Navi 20 to top out around the RTX 2080 or a bit faster (but comfortably below a 2080 Ti), seem about right to me. What is interesting is whether AMD will be gluing Navi cores together to come up with a single-package dual-GPU offering. Either way, June 11 is less than a month away.

    • chuckula
    • 6 months ago

    Oh crap.
    We’re gonna have to cancel yet another product line with all these chiplets coming at us!

      • Krogoth
      • 6 months ago

      Chiplets? Where we are going we aren’t going to need glue……..

        • chuckula
        • 6 months ago

        We need to get Back… to the FUTURE!

        #SkylakeLaunchedIn2015AfterAll

        • Mr Bill
        • 6 months ago

        “My God, it’s full of stars!”

      • Srsly_Bro
      • 6 months ago

      Save the announcement for June. We already got the IMFlash cancellation for this month. Intel needs to stick to a realistic cadence of cancelled products.

        • NovusBogus
        • 6 months ago

        Maybe the new CEO will try to upstage his predecessor by pushing for a more aggressive cancellation schedule.

          • K-L-Waster
          • 6 months ago

          Could save time by announcing and then cancelling in the same speech.

          • moose17145
          • 6 months ago

          This more aggressive cancellation schedule is sure to not bode well for their discrete graphics card that is in development…
