Some Zen CPUs may pack 32 cores and eight memory channels

Every time there's an impending CPU or GPU release, a lot of rumors start floating around. AMD's upcoming Zen architecture is no exception to that rule, but now there are a few tasty morsels of actual information.

A couple of weeks ago, folks from CERN's IT Technical Forum delivered a talk titled "Technology and Market Trends for the Data Center." In that talk, a CERN engineer revealed that the upcoming AMD CPUs will pack as many as 32 physical cores on a single package, spread across two 16-core modules with an on-die interconnect. The presenters said that AMD is also introducing its own take on Hyper-Threading into the mix, called Symmetrical Multi Threading.

This tidbit of info from CERN corroborates earlier information derived from a Linux kernel patch that contains a reference to a maximum of 32 cores for an upcoming AMD "Zeppelin" CPU architecture.

32 isn't the only big number being thrown around, though. The presenter repeated AMD's promise of a 40% increase in IPC, which is a welcome improvement and could help the company's CPUs become more competitive with Intel's offerings.

Zen looks to be packing serious memory bandwidth, too. The CERN slides mention that the new CPUs can make use of as many as eight channels of DDR4 memory. Given the 16+16 core architecture described above, it's possible that the eight-channel total is implemented as four channels connected to each 16-core physical module.
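
For a rough sense of scale, here's a back-of-the-envelope sketch; the DDR4-2400 speed grade is our assumption, since the slides don't name one:

[code<]
# Peak theoretical bandwidth for an 8-channel DDR4 configuration.
# Assumption: DDR4-2400; the actual speed grades Zen supports are unconfirmed.
transfers_per_sec = 2400e6    # 2400 MT/s
bytes_per_transfer = 8        # one 64-bit channel moves 8 bytes per transfer
channels = 8

per_channel = transfers_per_sec * bytes_per_transfer  # 19.2 GB/s
total = per_channel * channels                        # 153.6 GB/s
print(f"per channel: {per_channel / 1e9:.1f} GB/s, total: {total / 1e9:.1f} GB/s")
[/code<]

If the four-channels-per-module split is right, each 16-core module would see roughly half that total on its local channels.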

Comments closed
    • phaxmohdem
    • 4 years ago

    32 cores, 64 threads * two sockets = 1 bulge in pants

      • tipoo
      • 4 years ago

      But it’s 32 cores across two modules via an interposer.

      16 * 2 * 2 threads = half the bulge in pants

      • ronch
      • 4 years ago

      It made you poop?

    • DarkMikaru
    • 4 years ago

    Lost me at “spread across two 16-core modules” blah blah blah….

    AMD… I love you, but please let the Module foolishness go. It’s making me nervous!! Though I will say this story makes the new blood sound impressive.

    [url<]http://vrworld.com/2016/02/12/cern-confirms-amd-zen-high-end-specifications/[/url<]

      • Waco
      • 4 years ago

      It’s two dies on an interposer. The word “module” shouldn’t scare you…

        • DarkMikaru
        • 4 years ago

        I’m going to be honest guys… I love AMD, as you all know. But I am quite nervous about this setup. Sounds good, maybe they learned their lesson. Go for innovative but something tangible. I’ll listen to ya..thanks.

      • ronch
      • 4 years ago

      Imagine a 16-core module with a shared front end and FPU. Haha.

        • DarkMikaru
        • 4 years ago

        Ok…maybe I shouldn’t be worried about this setup. Thanks for the explanation guys.

    • Bensam123
    • 4 years ago

    I assume this is for their top of the line server chip and not desktop models?

    • LightenUpGuys
    • 4 years ago

    Sounds like AMD and Intel are both about two years behind SPARC XIfx. 34 cores, HMC RAM, integrated interconnect.

      • Anonymous Coward
      • 4 years ago

      Not many CPU cores left that can make [i<]any[/i<] claim to competing with the fastest desktop cores from Intel, or even with something in the Bulldozer lineage.

        • Anonymous Coward
        • 4 years ago

        Replied to myself… idiot.

      • robliz2Q
      • 4 years ago

      And this SPARC CPU costs how much? Is it affordable and suitable for mass-market deployment? Does it run the games and transcoding software multi-core enthusiasts have? SPARC was historically never fast single-threaded; they tried many-core, and that failed due to scalability issues, where lock contention and Amdahl's law become a serious problem (see the sketch after this comment).

      The real reason there aren't 8-core desktop CPUs is that very few users need them. A laptop i7 manages to run 2 games simultaneously well (one on the Intel GPU, the other on the Nvidia), whilst a browser is downloading. There are diminishing returns.
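
      For anyone who wants to see the scaling wall in numbers, here's a minimal sketch of Amdahl's law; the 90%-parallel workload is purely an illustrative assumption:

[code<]
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the core count.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative assumption: a workload that is 90% parallelizable.
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} cores: {amdahl_speedup(0.9, n):.2f}x")
# Even at 32 cores the speedup tops out near 7.8x; the serial 10% dominates.
[/code<]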

        • LightenUpGuys
        • 4 years ago

        You think the 32 core Zen is for consumer desktops?

          • robliz2Q
          • 4 years ago

          lol no. But what becomes affordable depends a lot on mass-market economics.
          x86 won the mainstream server market from SPARC/PA-RISC/Alpha/Power etc. by being good enough and cheap, not by being the fastest thing on the planet.
          They could spread the huge development costs across huge volumes, unlike the vertically integrated system manufacturers, who one by one shelved their in-house architectures.

            • LightenUpGuys
            • 4 years ago

            That's nonsense. A Haswell 22nm E7 Xeon is still $7,000 despite Intel having no competition for xi6 CPUs. That SPARC is 20nm and uses HMC.

        • Anonymous Coward
        • 4 years ago

        It seems you take offense to the [i<]possibility[/i<] that some other processor might have something about it that is better than the chip in... say my laptop, for example. Relax.

          • robliz2Q
          • 4 years ago

          Huh?? What planet are you on… actually, my desktop was a SPARC for years and years 🙂
          Most readers want Zen to solve the CPU performance stagnation issue, which is a non-problem for seemingly the 90% of the world who would rather have smaller/lighter/more portable than faster, given what they do on the machine. I was pointing out that even a laptop 4-core CPU is hard to really load unless you're searching for Mersenne primes.

            • Anonymous Coward
            • 4 years ago

            I wanted a Sparc desktop for a long time… was a little late to that party.

    • tanker27
    • 4 years ago

    [quote<]Some Zen CPUs may pack 32 cores and eight memory channels[/quote<] And still won't perform as well as an i7-920..... Ba dum dum pish 😛

      • Krogoth
      • 4 years ago

      The Zen chip will outpace the old Bloomfield chip in every way.

      The i7-920 was fast in its heyday, but it is completely outclassed by modern and future silicon.

        • tanker27
        • 4 years ago

        As I was being facetious, we shall see.

        As for the i7-920 being outclassed by modern CPUs: sure, as far as evolution goes, when it's benchmarked in side-by-side comparisons. However, in real-world use it's still viable.

    • Krogoth
    • 4 years ago

    [quote<]AMD "Zeppelin" CPU architecture.[/quote<] Anybody else get "Hindenburg" vibes from this?

      • Unknown-Error
      • 4 years ago

      Talk about bad name selection.

      • chuckula
      • 4 years ago

      I was more thinking LED Zeppelin.

      Just don’t give it too many screwdrivers and let it pass out on its back.

        • AJSB
        • 4 years ago

        Taking into account the lame trend of MoBo OEMs filling their MoBos with LED effects, 'LED Zeppelin' sounds pretty appropriate…

      • Chrispy_
      • 4 years ago

      Thankfully hard drive manufacturers have chosen to fill drives with inert helium rather than explosive hydrogen, so at least your data won’t go up in flames. Let’s hope AMD follows suit.

    • peaceandflowers
    • 4 years ago

    Wouldn't it be far more logical if they just put four eight-core, dual-channel chips in the package? I mean, 8 cores and 2 channels is where it will likely be at for the high-end consumer case anyway – not sure they'd go and develop a new die just so that they have fewer separate dies here.

      • smilingcrow
      • 4 years ago

      Other sources are suggesting just that, but how feasible is having 4 chips in one MCM package?

        • chuckula
        • 4 years ago

        Right now AMD is shipping products with 5 different components on a single MCM.
        It’s the Fiji parts in the Fury graphics cards: 1 huge GPU chip + 4 HBM stacks.

        So it’s definitely not impossible to put four discrete parts on a single substrate. Doesn’t mean that’s what’s actually happening, but not impossible.

          • the
          • 4 years ago

          For a traditional MCM using wire bonding, it is more of a cost issue, but there have been several designs with over 10 elements in a single package ([url=http://www.retrocomputingtasmania.com/home/projects/unisysaseries<]example[/url<]).

          Interposers are a bit different. They're cheaper than wire-bonding numerous elements in a single package, but the size of the base interposer is a limiting factor. Silicon interposers are limited by lithography processes, which restricts them to between 700 mm^2 and 850 mm^2. Despite the large area of an interposer, they're relatively straightforward to manufacture as they don't have any transistors, and since transistors are not needed, a silicon interposer can use an older manufacturing process. These two factors result in high yields for silicon interposers. So on a 700 mm^2 square-shaped interposer, four 170 mm^2 square dies could be comfortably mounted.

          An organic interposer is also possible; it doesn't share the same area limitations as a silicon-based one, but the electrical properties are not as good. Still, if AMD wanted to put four or more ~300 mm^2 dies into a package, an organic interposer would be the way to go.

          And then there is [url=http://www.intel.com/content/www/us/en/foundry/emib.html<]EMIB[/url<] from Intel, which is similar to an interposer. This only bridges the gap between chip edges, so conceptually there is no hard area limit here. Intel's literature on this technique typically shows 10 dies put together, though it is unclear if they've actually shipped such a configuration. Cost and yields are the major limiting factors here.
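
          A quick sanity check of the area math above; the die sizes are the figures from this comment, and the naive square-packing model (no die spacing, no routing keep-outs) is an assumption:

[code<]
import math

# Naive packing: how many square dies of a given area fit on a square
# interposer? Ignores die spacing and routing keep-outs, so it's an
# upper-bound sanity check rather than a real floorplan.
def dies_per_interposer(interposer_mm2, die_mm2):
    per_edge = int(math.sqrt(interposer_mm2) // math.sqrt(die_mm2))
    return per_edge * per_edge

print(dies_per_interposer(700, 170))  # 4 -- matches the figure above
print(dies_per_interposer(700, 300))  # 1 -- why ~300 mm^2 dies push toward
                                      # organic interposers or EMIB
[/code<]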

            • smilingcrow
            • 4 years ago

            Thanks for the info.

    • vargis14
    • 4 years ago

    Seems like an awful lot of cores in my eyes… with 32 cores I do not expect high clock speeds, but I hope I am wrong.

      • ImSpartacus
      • 4 years ago

      It's an HPC/server part, so yeah, low clocks.

      The consumer desktop Zen will likely have only 8 cores.

    • Krogoth
    • 4 years ago

    Looks like AMD is pursuing the HPC and server markets. That makes strategic sense, as the desktop and laptop markets are firmly in Intel's grasp, and the embedded market is ARM's stronghold.

    The desktop, laptop and embedded markets do not care about core counts and multi-socket platforms. Their only concern is platform cost and power consumption. AMD has nothing compelling on its R&D table there.

    • snowMAN
    • 4 years ago

    [quote<] its own take on Hyper-Threading into the mix, called Symmetrical Multi Threading[/quote<] Guys, SMT is the generic term, and Intel calls its version HT.

      • ImSpartacus
      • 4 years ago

      Little things like that are kinda scary.

      I admit that I missed that sentence, but once you bring it to my attention, even I know that Hyper-Threading is a marketing term for SMT, and I'm just a layman.

      • Beahmont
      • 4 years ago

      Umm… you know, you might want to actually read the article again and remember what SMT actually stands for before saying that.

      Symmetrical Multi Threading may or may not be the same as Simultaneous Multithreading. However, even if it is, Symmetrical Multi Threading, if the slides and advertising are right, is likely AMD's brand name for SMT, and thus the comparison to Hyper-Threading is still the more apt one.

    • Welch
    • 4 years ago

    Interesting about the memory channels. From my reading, the Zen architecture is modular in that it comes in clusters of 4 cores (actual cores, of course, not Bulldozer's fake "cores"), each with dual-channel memory serving it. If that were the case, a 32-core Zen should have 16-channel memory (see the sketch after this comment).

    Since that doesn't seem to be the case, it looks as though this is one clear difference between Zen the server part and Zen the consumer/enthusiast part. I'm not familiar with the tradeoffs of keeping a higher number of memory channels for server parts, although 8 channels is likely enough before hitting diminishing returns.

    This also means that we will likely see a cap on the number of cores for consumer products; my guess is octa-core with quad-channel memory to compete with Intel's E series. They will also support ECC, something that otherwise requires Intel's E series or Xeons.

    Interesting development nonetheless.
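
    To make the arithmetic in that reading explicit (the 4-core cluster with its own dual-channel controller is the rumor being tested here, not a confirmed layout):

[code<]
# Channels implied if every 4-core cluster carried its own dual-channel
# memory controller (the premise above), versus the 8 channels CERN cited.
cores = 32
cores_per_cluster = 4
channels_per_cluster = 2

implied_channels = (cores // cores_per_cluster) * channels_per_cluster
print(implied_channels)  # 16 -- double the 8 channels in the CERN slides,
                         # hence the conclusion that the server part differs
[/code<]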

      • the
      • 4 years ago

      Servers are very hungry for memory capacity to host VMs. Intel and IBM sidestep the capacity issue by using memory buffers (think FB-DIMMs, but with the buffer chip on the motherboard). The downside is that there is a latency hit, which impacts performance.

      Going with directly connected memory is faster but it also bloats the size of the die due to the additional IO pads.* This is an odd choice as memory bandwidth isn’t that big of a bottleneck considering the large caches server chips are shipping with today.

      Edit: One thing on the horizon from Intel is 3D XPoint memory, which will boost memory capacity on the Skylake-EP/EX platform. Going with 8 memory channels may make sense in the face of this competition. I haven't heard of any plans for AMD to adopt any sort of NV-DIMM strategy.

      *Though this isn't an issue with an interposer, since the IO pads for the external memory would be on the interposer itself, not the processor die. Case in point: the 4096-bit-wide memory interface on Fiji took less space than the 512-bit external GDDR5 bus on Hawaii.

        • Theolendras
        • 4 years ago

        I think you could try to differentiate yourself with more core logic and processing power and a leaner cache, if you have decent access to main memory.

    • Deanjo
    • 4 years ago

    Any wonder why MS is going to implement a per-core licensing model for Microsoft Windows Server 2016?

      • psyph3r
      • 4 years ago

      mother of god…

      • Krogoth
      • 4 years ago

      It was a foregone conclusion as soon as quad-core, single-socket CPUs and beyond became commonplace.

    • HisDivineOrder
    • 4 years ago

    Even though the last few times AMD has had a new architecture coming they've disappointed me (i.e., Phenom, Bulldozer), I must confess…

    …I’ve really missed this feeling of cautious optimism that AMD might turn it around and actually try to compete with Intel on just a level of performance instead of performance per dollar or performance per watt.

    Sure, I don’t think they’re going to win the match-up. If only because the deck is stacked pretty heavily in Intel’s favor.

    At this point, though, I feel winning for AMD is just showing up to the fight and not getting creamed as soon as the bell rings.

      • forumics
      • 4 years ago

      To be fair, Phenom was pretty evenly stacked against the Core 2 Duos/Quads of its time.
      It was Bulldozer that was utterly disappointing.

        • robliz2Q
        • 4 years ago

        Actually, it was Phenom II that was competitive. The original Phenom was disappointing; IIRC there were issues with the cache hierarchy.

        AMD has to compete on a performance/watt basis against a vastly bigger Intel, who maintain at least a one-node process advantage because they can re-invest huge margins into process technology and design.

        So many on this kind of forum basically whine about AMD not being able to pull off a miracle.

      • Anonymous Coward
      • 4 years ago

      They did well with Bobcat and Jaguar, presumably due to modest goals. I'm expecting Zen to be a big Jag. Nothing risky, delivering neither impressively high nor low performance. Well suited to building out 16 cores on a die, and much more effective than chasing high clock speeds, because they probably don't have the resources to address that. We shall see.

    • jihadjoe
    • 4 years ago

    New Opterons confirmed?

      • ronch
      • 4 years ago

      Yes. By WCCFtech no less.

    • tipoo
    • 4 years ago

    So they’re probably not going to catch up to Skylake, but if they do reach Haswell or even down to Ivy Bridge IPC levels and offer 8 cores at a competitive price, that could have enough appeal for the enthusiast side.

      • maxxcool
      • 4 years ago

      Enthusiasts will not save AMD. They need mass volume, and/or super-expensive server parts that people want.

        • Airmantharp
        • 4 years ago

        If they get competitive IPC and clocks, i.e. competitive performance at every slot, they can easily start chipping away at Intel dominated markets. If they get competitive at IPC/watt, they can *really* chip away at mobile and enterprise.

        They’ll still have to over-deliver and under-price, but at least they’d be gaining market share and some revenue to show for it.

          • blastdoor
          • 4 years ago

          I think the workstation market might be a good target. For that market, performance/watt isn’t quite as critical as in mobile or server. If Zen could win big in $/performance, it could lose in performance/watt so long as the loss isn’t a blow out.

          • maxxcool
          • 4 years ago

          Anything better than current will be 'good'. Granted. But taking market share from Intel at this point is going to be VERY hard for AMD, even if Zen's a win, due to having to ramp up production and prove their worth to more than just us.

          Convincing OEMs and server farms to switch over from Intel will be very difficult when Intel starts price-cutting to keep AMD out, more so in that it will be an untested architecture from a company that has a very real 'end of the line' outlook if things do not dramatically pick up.

          Server farms hate EOL hardware from a defunct company…

          • Kretschmer
          • 4 years ago

          “If they come up with competitive processors they could increase their marketshare.”

          Of course. But no one expects AMD Zen to be competitive, especially on power draw.

          • Theolendras
          • 4 years ago

          They aim for 40%, but in many situations they are about 60% below Skylake already, and they will probably have to face Kaby Lake, not the Sky one. Don't get me wrong, they will have quite a good overall package if they deliver, but they will still be behind in single-thread performance, even if more like arm's-reach distance. Especially a rumored HBM-enabled SKU would make for really nice solutions.

        • travbrad
        • 4 years ago

        I agree. Having tons of cores doesn't really make sense for "mass volume" stuff, though. That market is all about efficiency/power consumption and IPC (i.e. laptops).

      • Kretschmer
      • 4 years ago

      Not many tasks scale well from 4 cores to 8 "cores". Just because you're an enthusiast doesn't mean that you're going to ignore single-threaded performance.

        • tipoo
        • 4 years ago

        Of course not, but the single-threaded gap would be substantially closed. You don't gain a whole lot going from Haswell to Skylake; however, the gap from high-end Haswell to any AMD FX in gaming and most other tasks is huge.

        So no enthusiast worth his salt would give up a lot of single-threaded performance for more cores, but if we're talking about a Haswell-to-Skylake-sized gap, the multithreaded advantage, even if rarely used well, could be more substantial than what you lose in single-threaded.

    • torquer
    • 4 years ago

    CERN eh? Sounds like a super collision of memory bandwidth and processing power.

    *cough*

      • ronch
      • 4 years ago

      I hope this collision doesn’t result in a violent explosion of itty bitty particles, otherwise AMD is about as smashed as an atom in a particle collider.

        • torquer
        • 4 years ago

        The fact that you got that totally makes up for the repeating sentences

      • Mr Bill
      • 4 years ago

      Corque jets resulting in discrete core formation.
      *cough*

        • Mr Bill
        • 4 years ago

        -1 for the pun or because you don’t know that quark jets at CERN result in the creation of hadrons (protons and neutrons) along with other particles. CPU core being a “hadron”.

          • chuckula
          • 4 years ago

          Really? I thought it was the smashing of the hadrons that produced the quark jets?

          And besides, we all know that using “hadron” in a sentence is the ultimate pun material.

    • ronch
    • 4 years ago

    Die shots!

      • vargis14
      • 4 years ago

      I want to see it also!

    • ronch
    • 4 years ago

    Um, not sure, but I SEEM to have run across this bit of news from some other website not too long ago.

    I think it’s… um, er… WCCFTech?

    Surely, TR didn’t just follow WCCFTech on this rumor?

      • ImSpartacus
      • 4 years ago

      I believe it’s the same rumor, so yes.

      • just brew it!
      • 4 years ago

      The article cites a talk from CERN. CERN should have more of a clue than a random enthusiast site, so I’d say the rumor has some legs now.

        • ronch
        • 4 years ago

        This news bit was in a recent Shortbread.

      • RtFusion
      • 4 years ago

      IIRC, the first rumour of a 32-core Zen part I came across was from Fudzilla:

      [url<]http://fudzilla.com/news/processors/37564-the-next-generation-opteron-has-32-zen-x86-cores[/url<]

      Then the rumour of a 16-core part, also from Fudzilla:

      [url<]http://www.fudzilla.com/news/processors/37494-amd-x86-16-core-zen-apu-detailed[/url<]

      Then the paper on AMD's Exascale Heterogeneous Processor, also with 32 cores but with GPU cores and HBM as well, from bitsandchips:

      [url<]http://www.bitsandchips.it/9-hardware/5858-amd-exascale-heterogeneous-processor[/url<]

      • NeelyCam
      • 4 years ago

      I want more links to Fudzilla

    • chuckula
    • 4 years ago

    Wow, some a-hole just went through and downthumbed every post in the story.

    Maybe the days of anonymous down & upthumbs should come to an end.

      • ImSpartacus
      • 4 years ago

      Or maybe comment rating should come to an end? I mean, it’s just unnecessary vanity, right?

        • Pwnstar
        • 4 years ago

        You’re so vain!

          • BIF
          • 4 years ago

          Hey, that was a song.

      • ronch
      • 4 years ago

      If you say anything remotely negative about AMD in an AMD-related article, you're bound to be downthumbed. I should know.

      How could an AMD fan downthumb a fellow AMD fan??? I thought we were brothers!!!

      Edit – Just got smacked by a downthumb. Told ya.

        • chuckula
        • 4 years ago

        Whoever did it also downthumbed the pro-AMD comments too.
        Probably couldn’t even be bothered to figure out what the posts said.

          • ronch
          • 4 years ago

          Just goes to show fanboism is bad for your mental health if you don’t keep it in check.

      • Flatland_Spider
      • 4 years ago

      -1 to keep it going. 🙂

      • MOSFET
      • 4 years ago

      I never bothered to check, but until you posted that chuckula, I was under the impression that you [i<]could[/i<] go somewhere around here and see who thumbed what and in which direction.

        • chuckula
        • 4 years ago

        If you find that place, then let me know. I’d like to do some statistics!

          • Duct Tape Dude
          • 4 years ago

          [i<]Worriedly contemplating a gold subscription to undo all my downthumbs[/i<]

        • morphine
        • 4 years ago

        Nope.

      • cegras
      • 4 years ago

      Who cares? Downthumbs don’t change the rank of the comments. I guess they’re convenient for you to use as a strawman. You spend a lot of effort in the forums challenging the 32 core news, got anything to say now?

      • Bensam123
      • 4 years ago

      Coming from someone who did this to me for about a year… Hrmmm…

        • chuckula
        • 4 years ago

        No Bensam, the difference is that I have no problem telling you when I downthumbed you, and I’ve never done it without a good reason.

      • TopHatKiller
      • 4 years ago

      Dearie,
      Removing anonymity probably wouldn't make any difference. Possibly people might feel they'd have to justify their voting position, but as most people don't seem to justify their posted statements either, I can't see it making much diff.
      I became upset about the ridiculous negative-voting wave I suffer, but now I just think: sod off, who cares?

    • brucethemoose
    • 4 years ago

    I wonder what models will trickle down to lowly consumers like us. I know some people would love a 16 core CPU for their desktop, and an 8-core APU with a sizable HBM GPU would also be pretty interesting.

      • w76
      • 4 years ago

      Same here. I could get over Zen not matching Skylake in IPC, much less whatever Intel has out when Zen finally lands, but if Zen threw 1.5-2x the number of cores at me (real, full cores) then it'd probably still take the crown for some of the tasks I personally care about most. It would at least make me consider it, which nothing from AMD comes close to doing currently.

      • Kretschmer
      • 4 years ago

      Besides “moar cores” marketing, a vanishingly small part of the market would benefit from more cores over stronger performance per-thread. Most tasks that are 1) Highly Parallelizable and 2) Appealing to Consumers are also 3) Easily Left Running Overnight.

      • the
      • 4 years ago

      A SoC with 32 GB of HBM in the package could forgo external memory completely in the consumer space. I'd love to see such a design introduced, but the economics work against it.

        • Waco
        • 4 years ago

        No chance. HBM is too damn expensive for the consumer market at this point. The interposer ends up huge…

          • Airmantharp
          • 4 years ago

          Expensive, and with extremely high latencies, which kills IPC because the CPU spends too much time waiting while executing branching code. Would work if they found a way to put low-latency memory on the interposer though!

            • brucethemoose
            • 4 years ago

            Seeing how far it overclocks, I get the feeling HBM1's timings are a little loose compared to what it's capable of.

            AMD did the same thing with APUs at first: Llano was clocked VERY conservatively (like, 50% overclock headroom with an undervolt), but the later SKUs and APUs were clocked much closer to the limit.

            • robliz2Q
            • 4 years ago

            Predictable branching code is no issue. Caches, the OoOE engine, and anticipatory loads from main memory into the caches mean faster memory doesn't help very much.

            HBM2 has been developed for the next generation of graphics; early adoption is expensive, but that kind of integration, putting more in less area, lowers costs in the long run.

          • the
          • 4 years ago

          "At this point" is correct, but prices do decline over time as volume ramps up, so it won't stay too expensive forever. It certainly won't happen in 2016, but I could see this easily happening in 2018.

        • robliz2Q
        • 4 years ago

        It is going to happen; the question is when.
        Perhaps it won't be the high end but the battery-saving APU that drives this kind of integration, as manufacturing & development costs are reduced by the GPU pioneers. GDDR5's issue was the power budget for large RAM.

    • Theolendras
    • 4 years ago

    Ironic: they threw out a design optimized for throughput (Bulldozer) when most software was mostly ignoring multithreading, or using it only for rare low-hanging fruit. Then they release a design to improve IPC and single-thread performance just as VISC, DirectX 12 (which will alleviate the single-thread performance hegemony quite a bit in the overall performance picture) and software in general are finally moving the other way.

    I have to admit I'd take the Zen philosophy over Bulldozer anytime. The module design was just a weird and bizarro timing mistake on all accounts. Even the throughput advantages of the design weren't much better than SMT's to begin with.

    I really hope a Zen APU design with HBM targets the 2-in-1 convertible market eventually.

      • geekl33tgamer
      • 4 years ago

      A fully-loaded FX-8350 overclocked still gets its *** handed to it on a plate by an Ivy Bridge i5 (and in many workloads, actually an i3).

      I'm personally glad it's going – I had one. It was troublesome with a heavy data workload and I don't miss it for one second.

        • Waco
        • 4 years ago

        *Sandy Bridge

        • Krogoth
        • 4 years ago

        The FX-8350 is faster than Ivy Bridge and Sandy Bridge in anything that involves tons of threads and VMs (a.k.a. not gaming or mainstream usage patterns). The FX-8350 is a league ahead of i3 and i5 chips, while i7 chips aren't that far behind (thanks to HT).

        Ivy Bridge and Sandy Bridge are only faster when clockspeed and IPC are king (a.k.a. gaming and mainstream usage patterns). It isn't that much of a difference though.

        The primary advantage of Ivy Bridge and Sandy Bridge is that they consume almost half the power at full load. Ivy Bridge also has PCIe 3.0 support.

      • brucethemoose
      • 4 years ago

      One idea behind the Bulldozer approach is that you can have 1 giant core for single threads, or split it into 2 for more parallel workloads.

      It was a good idea at the time, but in practice it didn't work out so well… The "big" core turned out to be pretty bad at single threads, and multicore performance wasn't great either until later generations, where the "modules" more closely resemble 2 separate cores.

      • BIF
      • 4 years ago

      Do you realize that you just used the words “ironic”, “optimized”, and “Bulldozer” in the same sentence?

      If I was the downthumbing type…

      Wait a minute, I am! 😉

        • Theolendras
        • 4 years ago

        Lol

    • EndlessWaves
    • 4 years ago

    [quote<]The presenters said that AMD is also introducing its own take on Hyper-Threading into the mix, called Symmetrical Multi Threading.[/quote<] I was going to correct you on that, then I realised AMD had substituted Symmetrical for Simultaneous.

      • Flatland_Spider
      • 4 years ago

      Interesting. Symmetrical seems to imply the threads will be equal.

      Is this a descendant of Bulldozer, or an inverse of Bulldozer with two pipelines and one core?

        • just brew it!
        • 4 years ago

        Or maybe it merely implies that AMD doesn’t want to be perceived as copying Intel on SMT.

        • ronch
        • 4 years ago

        Don’t get too caught up with it. Probably just AMD marketers not knowing what SMT stands for.

      • nico1982
      • 4 years ago

      I suppose they want to underline that their SMT implementation threats both logical threads more equally than, let say, Hyperthreading(tm).

      Equally good or equally bad is another matter 😛

      Edit: beaten 😛

        • ronch
        • 4 years ago

        They’re.. threatening logic threads? Why?

        “Have few dependencies and branches or else I won’t execute you and kick you out into the L3 cache!”

          • nico1982
          • 4 years ago

          Ahah, not native english speaker here (obviously) XD

          Yeah, AMD should have been more strict with those unruly Bulldozer cores. They will not repeat the same mistakes with Zen 😛

            • ronch
            • 4 years ago

            [quote<]Ahah, not native english speaker here (obviously) XD[/quote<] So?

      • the
      • 4 years ago

      The symmetrical term reminds me more of instruction replay from Intel’s Itanium chips than Hyperthreading. One aspect of instruction replay was to fill up empty execution slots in the VLIW architecture that would duplicate work and compare the output to ensure integrity of the result. Basically dynamic mirroring of execution to increase reliability.

      In reality, AMD probably means simultaneous multithreading, like the rest of the industry uses.

        • Mr Bill
        • 4 years ago

        Reminds me of [url=https://en.wikipedia.org/wiki/SMP_%E2%80%93_Symmetric_Multiprocessor_System<]SMP Symmetric Multiprocessor System[/url<].

    • CampinCarl
    • 4 years ago

    Definitely intriguing, but until they can provide tools that rival the Intel Compiler Suite (Parallel Studio yadda yadda, or whatever they’re calling it these days), it’ll be hard for them to gain too much traction.

      • Theolendras
      • 4 years ago

      Ironically, sometimes the Intel compiler benefited AMD even more than Intel. Well, AMD is pretty much using a similar implementation to Intel when it comes to instruction support. Remember Fused Multiply-Add with 4 operands? Yep, they're using FMA3 now, just like Intel. The same could be said of most other extensions: 3DNow! vs SSE, etc.

        • Mystiq
        • 4 years ago

        Unlike GPUs, which tend to have vastly different architectures, as long as the CPU supports the same instructions, any compiler could optimize as well for any CPU, right? Even in the wild case where, for example, Apple began making x86 CPUs. An i86, if you will.

          • Theolendras
          • 4 years ago

          You could technically target a specific architecture with a compiler, but it's a futile exercise when you have SKUs as varied as Intel's.

        • robliz2Q
        • 4 years ago

        Well, FMA4 was something Intel was involved in planning; then they changed their minds, figuring that FMA3 with their register renaming worked well.

        More important, really, was AMD64, which Intel has been forced to adopt with the sinking of the Itanic.

        Software optimisation is expensive, so the main market is what gets tuned for, meaning AMD needs to provide compatibility or their CPUs will look worse in performance with many popular programs.

    • chuckula
    • 4 years ago

    [quote<] In that talk, a CERN engineer revealed that the upcoming AMD CPUs will pack as many as 32 physical cores on a single die, [/quote<]

    Not exactly. More like 32 cores on a single package, and the presenter says as much, although he is a little loose with the word 'die' when he should have said 'package'.

    Note that the presenter is *not* an AMD employee, so we are assuming that he's sharing some information that AMD has provided him but has not otherwise published to the public at large quite yet.

    Here's the original video; the AMD section begins at about 12:32:

    [url<]https://mediastream.cern.ch/MediaArchive/Video/Public/WebLectures/2016/471040c0/471040c0_mobile_480p_1000.mp4[/url<]

      • Jeff Kampman
      • 4 years ago

      Boneheaded editing mistake. Fixed.

        • chuckula
        • 4 years ago

        It’s fine. I always go back to the original source when I can and at least CERN has some real engineers presenting the information.

        • ronch
        • 4 years ago

        You don’t have to kick yourself every time you make a mistake around here, Jeffy ol’ boy. This is TR! We always make mistakes here and that’s OK. What’s not ok is not admitting you’re wrong and even being an a$$ about it.

        We also have a lot of those folks around here too, don’t we? 😉

      • maxxcool
      • 4 years ago

      Random thought: dual module could also mean dual iGPU on a module. Technically intriguing.

      • the
      • 4 years ago

      Dual-die wouldn't be too different from what AMD did with their last x86 Opteron. Though I'm optimistic that AMD will go all-in with interposers for this, to enhance scalability and yields.

        • BIF
        • 4 years ago

        I must be getting tired. I first read “interposers” as “imposters”, and so my brain put this together from your comment:

        “Double-death to impostor Opterons!”

        Okay, coffee has officially dropped below therapeutic levels… 🙂

    • christos_thski
    • 4 years ago

    At this point in time, even intel fanboys are wishing for a competitive CPU from AMD. CPUs have been stagnating badly for the better part of a decade.

      • Roo5ter
      • 4 years ago

      I don’t think there is any point where it would make sense to not want a competitive CPU from either manufacturer.

      We will probably see great gains in cpu power again once we move past silicon.

        • robliz2Q
        • 4 years ago

        It's not just "competition"; it's that Dennard scaling stopped, and wider CPUs with more cores and higher throughput (as more transistors become available despite Moore's law slowing) don't mean faster single-thread performance. There's a law of diminishing returns; the low-hanging performance fruit was taken long ago.

      • ImSpartacus
      • 4 years ago

      Are there even CPU fanboys anymore? The last couple of years haven't left much for anyone to be happy about.

      There are the uneducated core fanatics who insist that many-cored budget AMD solutions are obviously better than dual-core budget Intel solutions (when reality is more complex, as usual). But aside from that minor anomaly, I think most people get it.

        • xeridea
        • 4 years ago

        The i3 suffers in many games. Even though its higher single-threaded performance is better for DX11, its lack of cores hurts the rest of the game logic. Many other tasks benefit greatly from more cores, though it is dependent on the user. It's not cut and dried, but it isn't hard to come up with reasons why a 6- or 8-core CPU at a similar price is better than a 2-core CPU with higher single-threaded performance.

          • chuckula
          • 4 years ago

          Not buying it.

          Here’s how the AMD-biased Eurogamer benchmarked the i3-6100: [url<]http://www.eurogamer.net/articles/digitalfoundry-2015-intel-core-i3-6100-review[/url<] Go look at the game benchmarks where the worst-case scenario i3 is beating the 6-core 4.2GHz FX-6300 by a small margin. Oh, BTW, "worst case" means lower detail levels. As the complexity cranks up, the i3 is easily capable of beating that 6-core part by 50%. Allow me to quote: [quote<] The same set-up also sees Skylake beat the AMD FX-8350 (paired with 1600MHz DDR3) in every game we tested bar Crysis 3 and The Witcher 3. Of course, those chips beg to be overclocked in a way that the i3 never can, but the bottom line is that in many gaming scenarios, the new i3 is capable of performance that belies its dual-core status. In short: choose your components carefully - get your board choice right, buy the right RAM, and the Core i3 6100 forms the basis of a great, easily upgradable gaming platform.[/quote<] Yeah that's right, they even compare the i3 favorably to the fabled 8-core Piledrivers. Here's Techspot that also includes 99 and 99.9 percentile frame rate figures: [url<]http://www.techspot.com/review/1087-best-value-desktop-cpu/page4.html[/url<] Notice how an 8-core FX-8320E overclocked to [b<]4.6GHz[/b<] manages to win -- wait for it -- one gaming benchmark compared to the i3 that supposedly "suffers" in so many games. Oh and the game where AMD is winning? That would be Arkham Knight, where "winning" is kind of pointless. DX12: The Core i3's best friend.

            • BobbinThreadbare
            • 4 years ago

            There doesn’t seem to be a compelling reason to get an i5 or i7 if all you care about is gaming either.

            That Skylake i3 flies (so does Devil’s Canyon for that matter).

            Edit: TR's article is interesting: [url<]https://techreport.com/review/26977/intel-core-i7-5960x-processor-reviewed/6[/url<]

            Looks like Crysis and Watch Dogs really did well with multithreading. So there are some cases where more cores are helpful.

            • chuckula
            • 4 years ago

            Yeah, for most gamers the i3 is probably the best CPU available at a lower price point.

            Of course it’s possible to buy a higher end chip and if multi-threaded workloads are really important to you then the i3 can be outclassed. However, there’s been this constant buzz that games absolutely need “moar coarz” and it’s not true for modern games.

            Furthermore, from what I’ve seen of DX12, the need for moar coarz may actually go down, not up, in the future. That’s because while newer APIs make multi-threading easier to pull off, their really big advantage is in reducing the need for the CPU to do extra bookkeeping and other overhead processing. Relatively speaking, a chip with fewer cores like the i3 will see a big boost from not having to perform all of that overhead compared to a chip where the overhead wasn’t that big of a deal already.

            • ikjadoon
            • 4 years ago

            They don't need more cores, but I'd love to see any dual-core from the past 5 years run BF4 on a 64-player server, lol.

            BF4 @ 64 players is so CPU-intensive that even overclocking your RAM boosts minimum frame rates.

            • AJSB
            • 4 years ago

            LOL, I played the Origin Free Time version of BF4 with an A6-5400K OC'd to 4GHz, iGPU OC'd to 1013MHz, RAM at 2133MHz… and using a custom resolution, I actually got playable frame rates 😀 :p

            The only s****y thing is the freaking game server browsing via web browser, which was insanely SLOW; the game would take a LOT of time to load (besides, the audio during that stage was screwed) and enter the server… but AFTER I got in and spawned and the action started, it was all good…

            • f0d
            • 4 years ago

            Just tried it with my 3930K, but reduced the number of CPUs to 2 in the BIOS to see how it went.
            Sure, it was slower than the 6-core, but it was still playable with 2 cores + Hyper-Threading.

            • LightenUpGuys
            • 4 years ago

            Even with disabled cores, wouldn't it still be using all the L3 cache?

            • ImSpartacus
            • 4 years ago

            Check out the techspot review where they overclock their 6100. It’s brutal.

            • Kretschmer
            • 4 years ago

            A well-crafted post!

            I really, really wish that TR would do more benchmarks of the i5s and i3s alongside the flagship i7 CPUs.

            • ImSpartacus
            • 4 years ago

            Yeah, plenty of other sites do the “halo” stuff. I love learning about the other options as well.

            • xeridea
            • 4 years ago

            I stand corrected. I was remembering from some older reviews where it sometimes had issues compared to the 4 core chips.

            • psyph3r
            • 4 years ago

            I compel you to actually build an i3 system. Load up everything you need and use it day to day, and see why an 8-core is always better. I build these systems all the time and I would never choose an i3 over an 8350 for an actual gaming machine. Quad-cores are a different story, but we're talking about a 150-dollar CPU cost.

            • ImSpartacus
            • 4 years ago

            I feel like we're going down a rabbit hole the moment we start talking about personal "day to day" use cases – a lot of this stuff is situational.

            So, avoiding that: do you know of any reviews or benchmarks that paint a more accurate picture of the dynamic between AMD's many-cored processors and Intel's dual-core Skylake parts? I think most of us would be most interested in gaming workloads, but any data is helpful.

            • Sabresiberian
            • 4 years ago

            The same graphs prove the dual-core chips don't perform as well as the 4-core chips. You're just picking the example of a weak architecture that can't bring competition to the table no matter how many cores it has – but that's the architecture of the AMD offerings. When you compare apples to apples by using the same architecture, dual-core suffers in modern games (or at least clearly demonstrates lower performance).

          • ImSpartacus
          • 4 years ago

          I'm really interested in this topic, and you sound like you've read a lot about it. Could you share some recent reviews/benchmarks?

          Unfortunately, my data is limited, and [url=http://www.techspot.com/review/1087-best-value-desktop-cpu/<]what little data I have[/url<] shows modern "i3"-caliber CPUs keeping pace with many-cored alternatives. There's no question that this can often be a situational issue, so more data is better.

      • ronch
      • 4 years ago

      OK, I admit it. I’m really an Intel fanboi who’s pretending to root for AMD so AMD will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD so they will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being an Intel fanboi and AMD will again look bad so I will have reason to pretend rooting for AMD.

        • torquer
        • 4 years ago

        Dude. Paragraphs.

          • ronch
          • 4 years ago

          It was a repeating sentence, in case you didn’t notice. I could’ve made it a lot longer. All I had to do was Ctrl+V. But I’m kind and merciful.

            • torquer
            • 4 years ago

            See it was so bad I didn’t even take time to read it. Next time please place your repeating sentences into neat paragraphs so I can read them just enough to ignore them.

            • BIF
            • 4 years ago

            That was not kind and merciful.

            I regret only that I have but three downthumbs to give.

            • Anovoca
            • 4 years ago

            Solution:

            I’m really ||: an Intel fanboi who’s pretending to root for AMD so AMD will put out a competitive CPU and Intel will be kept on their toes and keep releasing faster CPUs so I can continue being :||

      • Peldor
      • 4 years ago

      Competitive isn’t really enough. They need to break new ground on absolute performance. Otherwise Intel cuts prices 10% and nothing else changes.

        • ronch
        • 4 years ago

        I've been saying that all along. Even if Zen manages to match Intel in terms of performance and energy efficiency, they need to price lower; otherwise folks, save for those who would like to support AMD, will simply go with Intel. To price like Intel, AMD needs to deliver a better product. I suppose decades of being the cheaper alternative give you an image that's simply too hard to get rid of, not just with end users but with OEM partners as well. And that's just too bad for AMD, but I guess it's good for us.

        • blastdoor
        • 4 years ago

        I don’t think that’s correct.

        Intel’s profit-maximizing strategy (even setting aside anti-trust concerns) may not be to price so as to run AMD out of business. There are several reasons for this:

        1. Intel serves many more market segments than AMD. If Intel prices too low in the segment where they compete with AMD, they might end up hurting themselves in other segments.

        2. AMD is likely to be capacity constrained.

        3. Intel has brand and infrastructure advantages that allow them to sell at a higher price, even if performance is equal.

        • BIF
        • 4 years ago

        So true.

        • Anonymous Coward
        • 4 years ago

        Hah! Hah! Hah! Odds of beating Intel are more or less zero.

      • maxxcool
      • 4 years ago

      I'm going to disagree slightly. What we need is better coding. 16 cores will be utterly wasted on a consumer box.

        • just brew it!
        • 4 years ago

        You can’t fix stupid, and there’s an awful lot of stupid going on in the field of software development.

          • travbrad
          • 4 years ago

          It’s not just “stupid” that is causing a lack of well-threaded applications. Making a program use more threads is actually more difficult. Games seem to be especially difficult in this regard because any minor interaction by the player can suddenly change what has to be computed next and may rely on other stuff being computed first. That being said it CAN be done, but most of the examples of truly well-threaded game engines we have seen have come from developers with teams of hundreds of people and large budgets (DICE, Epic, Rockstar, Crytek, etc). Engines from smaller developers are almost never optimized for a large number of threads.

          Making something like video encoding multi-threaded is much easier because there is no interaction, and you can easily predict what is coming next and/or break the work into parts. There are also a lot of programs that just don't need more performance, where even a single Bulldozer core is plenty fast. Why waste development resources if it won't make a noticeable difference?
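
          A minimal sketch of why chunkable work like encoding parallelizes so easily; the chunking scheme is illustrative, not how any particular encoder splits work:

[code<]
from multiprocessing import Pool

# Placeholder for real per-chunk work, e.g. encoding one group of frames.
def encode_chunk(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    frames = list(range(100_000))
    # Chunks are independent: no player input, no cross-chunk ordering,
    # so they spread across cores with no locking at all.
    chunks = [frames[i:i + 10_000] for i in range(0, len(frames), 10_000)]
    with Pool() as pool:
        results = pool.map(encode_chunk, chunks)
    print(sum(results))
[/code<]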

            • Mr Bill
            • 4 years ago

            How about a well-threaded operating system? Every game out there basically locks up the PC's entire resources. One could envision a dual video card/monitor/mouse/keyboard setup, with two people playing the same or even different games on the same PC box. I know a couple of husband & wife gaming couples that play back to back or side by side. Why duplicate the entire computing resource when you have all these cores?

            • BobbinThreadbare
            • 4 years ago

            Windows hasn’t had the problem of locking up the system with exclusive access since Vista came out.

            I think memory management would be a huge PITA in your suggested setup, not to mention disk access.

            • robliz2Q
            • 4 years ago

            Economics are a factor, which means a shared setup does not make sense anymore.
            A CPU with 3-channel RAM costs more than twice as much as a standard i5 rig. Again, there are only so many lanes of bandwidth out of the CPU.

            Then there are all the configuration and software issues someone needs to solve, for a small end-market with little prospect of profit.

          • BIF
          • 4 years ago

          It’s not stupid.

          Part of it is, as Travbrad says, that some applications don't lend themselves to massively parallel CPU threading. But some do. F@H loves it.

          Graphics rendering with biased and unbiased render engines loves it too. Some (mostly unbiased) will use the GPU cores as well, and I love that.

            • just brew it!
            • 4 years ago

            No, I’m sorry there *is* a lot of stupid. Even when parts of a problem *could* in theory be parallelized effectively, more often than not the implementation sucks, with race conditions that introduce difficult-to-fix intermittent bugs, and inefficiencies that kill most of the potential performance gains from parallelism.

            Tip to developers: If you don’t grok parallel programming, don’t attempt it. The rest of the world (or at least the subset of it that buys your product) will be eternally grateful.
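
            The class of bug being described, in miniature; the counter is contrived, not code from any real product:

[code<]
import threading

counter = 0
lock = threading.Lock()

def racy(n):
    global counter
    for _ in range(n):
        counter += 1   # unsynchronized read-modify-write: updates can be lost

def safe(n):
    global counter
    for _ in range(n):
        with lock:     # the lock serializes the read-modify-write
            counter += 1

threads = [threading.Thread(target=safe, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; swap in racy and it's often less
[/code<]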

            • robliz2Q
            • 4 years ago

            That's economics. Marketing people set fixed dates for features, and usually the engineering & test effort is under-estimated. Hence ship now, fix later is so common.

            Quite often it's just pointless multi-threading, making the software an order of magnitude more complicated in order to finish a CPU-intensive program run in 5 rather than 10 seconds. Who really cares, if it can run in the background and you can distract the end-user with some eye candy?

            • Anonymous Coward
            • 4 years ago

            What? You’re floating around in fantasy land. Parallel programming of any non-trivial problem requires careful thought and a whole lot of companies are not interested in paying for that, and for good reason. Even after careful thought, parallelism can make maintenance more difficult. Any custom-coded parallelism is a [i<]terrible idea[/i<] unless the problem really calls for it from an [i<]economic perspective[/i<].

            • just brew it!
            • 4 years ago

            How am I in a fantasy land? I said if you don’t understand how to do parallel programming properly, don’t try. That’s not at odds with what you just said. (And understanding how to do it properly also means knowing when it is and isn’t an appropriate solution.)

            • Anonymous Coward
            • 4 years ago

            It's not about how smart anyone is; it's about economics. We need to look to languages and libraries which are carefully programmed and well tested to provide parallelization out of sight, where regular old deadline-chasing programmers don't need to think about it. Parallel programming [i<]costs money[/i<].
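
            The sort of "parallelism behind a well-tested interface" being argued for here, sketched with Python's standard library; the executor choice and workload are illustrative:

[code<]
from concurrent.futures import ProcessPoolExecutor

def work(n):
    return sum(i * i for i in range(n))  # stand-in for a CPU-bound task

if __name__ == "__main__":
    # The library owns the worker processes, the queues and the
    # synchronization; application code stays a plain map over
    # independent inputs, with nothing for a deadline-chaser to get wrong.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work, [200_000] * 8))
    print(len(results), results[0])
[/code<]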

            • just brew it!
            • 4 years ago

            …in part, because people who have a clue don’t come cheap. 😉

            I’ve seen a lot of code that has been outsourced to the lowest bidder contract house for development. This invariably results in a train wreck. The most recent example I can think of had several dozen threads, with no (or in some cases, incorrectly implemented) synchronization for access to shared resources. Unsurprisingly, the application was full of bugs, with many unexplained, difficult to reproduce (and therefore difficult to fix) random behaviors.

            Yes, you could say that’s a money issue. But it’s a money issue in the sense that the company wasn’t willing to pay for competent developers, and the developers they did hire weren’t smart enough to realize they were in way over their heads when they tried to implement a massively multi-threaded approach (or to realize that having that many threads in the first place was probably a really bad idea).

            • Anonymous Coward
            • 4 years ago

            Don't blame the code monkeys; blame the people who managed the project. Perhaps the programmers were too inexperienced to be aware of the mess they were making, but that doesn't excuse whoever was paying the bills. The programmers were neither working for charity, nor can they be expected to be on a personal mission to generate only good results.

      • w76
      • 4 years ago

      Intel fanboys? I’ve heard Intel EMPLOYEES say they wish AMD was giving them more fight. (Well, not personally, but I’ve seen it quoted in the press)

      • blastdoor
      • 4 years ago

      I think there are some pretty great CPUs from Intel — they are just insanely expensive. For example, you can buy a 14 core Xeon but it will cost you about $2500.

      If AMD could sell a 16 core Zen for $1,000 I think they’d have a hit (assuming the IPC and clock speed are decent…. say, Sandy Bridge IPC and a clock of 2.5 GHz).

      • Kretschmer
      • 4 years ago

      Dude, x86 CPUs have been settled for years. Fanboys are an endangered species. Do you think anyone wants to see AMD fail more than they want to see cool new tech?

      • travbrad
      • 4 years ago

      I want some real competition in the CPU market but I hope AMD can have much better single core performance than they do now. Just having MOAR CORES isn’t very exciting when most games still don’t really use more than 4 cores (some aren’t even using that many). Nice for video encoding I guess though.

      • Krogoth
      • 4 years ago

      Laws of physics and diminishing returns are the real culprits.

      There’s no big $$$$ in desktop, laptop and embedded chips. The massive R&D and manufacturing costs are creeping up. The small players are gone and sold off their assets when they saw the writing on the wall. Intel is fighting a difficult battle as ARM eats up low-end and embedded markets while demand continues to drop on the “traditional” desktop and laptop markets.

      • LightenUpGuys
      • 4 years ago

      Only consumer-level stuff. High-end CPUs still double in power or better every couple of years, and the trend hasn't slowed, although the architectures have shifted toward many-core to accomplish it.

        • robliz2Q
        • 4 years ago

        But that's wide throughput… those same CPUs won't impress on a single-threaded benchmark.
        It's simply true that a system with a 2-core CPU at full speed will perform much better than one with a 4-core, half-speed CPU.

      • JustAnEngineer
      • 4 years ago

      It’s been [b<]half[/b<] a decade. I bought a Core i7-2600K Sandy Bridge processor on January 8, 2011.

      • UnfriendlyFire
      • 4 years ago

      The people holding large amounts of Intel stock would like to see Intel as the sole CPU provider.
