AMD gives us our first real moment of Zen

Who would have expected AMD to reveal the most exciting bit of news for the PC enthusiast at the Intel Developer Forum? Last night, the company invited a small group of journalists and analysts to the St. Regis Hotel in San Francisco to give us our first detailed taste of its upcoming Zen CPU architecture.

Zen is a make-or-break moment for AMD. The last high-performance, clean-sheet x86 CPU design from the company was the troubled Bulldozer “module” in the FX-8150, followed up by the Piledriver refinement of that design in 2012’s FX-8350. Those chips trailed their Intel contemporaries when they were new, and they’ve soldiered on in AMD’s model lineup for an eternity while Intel has delivered continuous (if slowing) performance improvements and process advancements with its Haswell, Broadwell, and Skylake CPUs. AMD has been continuously refining its APUs in the intervening time, to be sure, but those products have never captured the enthusiast PC builder’s imagination in the same way that a Core i7-4790K or Core i7-6700K does.

AMD CEO Lisa Su says the company wants to make high-performance CPUs as much as we want to see them, and the first Zen consumer part, Summit Ridge, may be just the thing to quench our thirst. Summit Ridge is an unabashedly high-end desktop chip fabricated on GlobalFoundries’ 14-nm FinFET process, the same process as the recently released Polaris graphics card family. It’ll have eight cores and sixteen threads, courtesy of simultaneous multi-threading (better known as Hyper-Threading in Intel CPUs). Unlike Bulldozer and Piledriver, architectures that friend-of-TR David Kanter characterizes as favoring throughput at the expense of single-threaded performance, Zen puts the focus squarely back on strong cores with high single-threaded performance.

To make that happen, AMD CTO Mark Papermaster says each Zen core gets a better branch predictor, its own micro-op cache, wider instruction scheduling, and a doubling of floating-point execution resources.

Simultaneous multi-threading also helps “keep the beast fed,” as Papermaster puts it. To that end, each Zen CPU will have 8MB of shared L3 cache, along with 512KB of L2 cache, 64KB of instruction cache, and 32KB of data cache per core.

This admittedly squishy graph gives a sense of the efficiency progression from Bulldozer to Excavator to Zen. It may be folly to try to draw firm conclusions from it, but in the context of desktop parts, Excavator only saw a release in the form of the Athlon X4 845, a 65W, quad-core CPU. If AMD really has delivered 40% more IPC than Excavator with Summit Ridge, we could be looking at a significantly cooler-running and power-sipping chip, even in the form of a high-end desktop part.
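For a rough sense of why an IPC gain can translate into a cooler-running part, consider that performance scales with IPC times clock, while dynamic power climbs superlinearly with clock once voltage has to rise along with frequency. The sketch below is back-of-the-envelope arithmetic only; the 3.5GHz baseline clock and the cubic power model are illustrative assumptions, not AMD figures.

```python
# Hypothetical sketch: how much clock (and therefore dynamic power) a
# 40% IPC gain could give back at equal performance. Illustrative
# numbers only, not AMD specifications.

def clock_for_same_perf(base_clock_ghz, ipc_gain):
    """Clock needed to match baseline performance after an IPC gain."""
    return base_clock_ghz / (1.0 + ipc_gain)

def relative_dynamic_power(clock_ratio, exponent=3.0):
    """Crude dynamic-power model: P ~ f^exponent when voltage tracks f."""
    return clock_ratio ** exponent

base = 3.5                                 # GHz, hypothetical baseline clock
needed = clock_for_same_perf(base, 0.40)   # 2.5 GHz for the same performance
power = relative_dynamic_power(needed / base)
print(f"Same performance at {needed:.2f} GHz, "
      f"~{power:.0%} of baseline dynamic power")
```

Under these assumptions, the same performance arrives at roughly 71% of the clock and around a third of the dynamic power, which is the flavor of headroom a 40% IPC gain can buy.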

Summit Ridge will ride in on the AM4 platform that we first learned about at CES this year. AM4 will offer a number of modern features that are missing from the grizzled 990FX platform and friends, like DDR4 RAM support, PCIe 3.0 connectivity, USB 3.1 Gen2 support, and compatibility with NVMe and SATA Express storage.

I would love to dive deeper into Zen, but these are early days, we don’t have a lot of details yet, and time is short as I write this. To my admittedly green eyes, AMD has made sensible design decisions to produce the kind of high-performance core that our CPU testing tends to favor in both gaming and traditional workloads.

Nitty-gritty details aside, the real question on everybody’s mind is whether AMD met the 40% IPC improvement goal that it has publicly committed to over the past few months. To make the point that it has, AMD put a Summit Ridge engineering sample running at 3GHz up against an eight-core, sixteen-thread Core i7-6900K artificially limited to the same 3GHz speed. AMD ran the same Blender 3D rendering workload on both chips at the same time. Watch the video above for a sense of how Summit Ridge stacks up to Broadwell-E.

While it’s worth remembering that this is only one data point, the Zen chip kept pace with or slightly beat the Broadwell-E CPU in that test. If that performance level holds across a range of workloads, AMD appears to have made some of the large strides it needs to make toward closing the performance gap with Intel CPUs. Tantalizingly, AMD says the 3GHz figure isn’t the final clock speed it expects production Zen chips to top out at, either. Final clock speeds for Zen, along with TDP figures and pricing, are still under wraps, but we should learn more as we draw closer to the Summit Ridge launch in the first quarter of 2017.
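The logic behind clock-matched demos like this one is simple: with frequency and thread counts equalized, the ratio of completion times approximates the ratio of per-clock throughput. The render times in this sketch are invented for illustration; AMD did not publish exact figures for the demo.

```python
# Back-of-the-envelope only: at equal clocks and thread counts, the
# ratio of render times approximates relative per-clock throughput.
# The timings below are hypothetical placeholders.

def relative_throughput(time_a_s, time_b_s):
    """Throughput of A relative to B at equal clocks (>1.0 means A is faster)."""
    return time_b_s / time_a_s

# Hypothetical: Zen finishes the render in 59 s vs. 60 s for Broadwell-E.
ratio = relative_throughput(59.0, 60.0)
print(f"Zen vs. Broadwell-E: {ratio:.3f}x")
```

Even a dead heat in such a demo is significant: it would imply per-clock throughput roughly on par with Broadwell-E in that one workload.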

Zen isn’t just coming to enthusiast desktops. AMD wants to regain lost ground in the data center, as well, and its Naples SoC will spearhead that effort. Naples is a 32-core, 64-thread server SoC, and AMD demonstrated a dual-socket server platform with a pair of these chips running Windows Server at the event. AMD expects that Naples will begin showing up in servers in the second quarter of 2017. AMD is also confident that Zen can scale to mobile and embedded devices, all on the same 14-nm GloFo process. We’ll begin learning more about Zen-powered APUs and embedded parts in the second half of 2017.

Even with this brief glimpse of Zen, it seems like AMD is starting to turn a corner. The Polaris graphics card lineup may not offer world-beating performance, but it does offer compelling values for the PC gamer, and sales of those products appear to be strong. If the company can deliver a similarly “good enough” high-end desktop CPU family with Zen and Bristol Ridge, it may be well on its way back to health. We’ll have to see what the next six months bring, but I see plenty of reason to be cautiously optimistic about AMD’s future, and that’s a welcome change of pace.

Comments closed
    • ronch
    • 3 years ago

    Something I remembered just now, again:

    A Zen engineer once said that they were given full freedom to do their best with Zen. Given how AMD has been having so much trouble with the past two generations of CPUs (K10 and BD), it just begs the question:

    [b<]WHAT THE HELL WAS AMD MANAGEMENT DOING SINCE BARCELONA DEVELOPMENT??!!? TELLING THEIR ENGINEERS TO SLACK OFF?!?!?[/b<]

      • tipoo
      • 3 years ago

      Telling them to aim for a high clocking racehorse design for marketing. Apparently they saw the Pentium 4 Heatburst and wanted a slice of that.

      Now, funny enough, I hear they’re following the 1:2 ratio Intel set themselves for, that a feature has to increase performance by twice as much as it increases power draw for it to be added. We’ll see if it nets them similar results.

        • Meadows
        • 3 years ago

        Doubt that was the reason. They’ve known the Pentium 4 longer than that, in fact they did their own counter-marketing back in the day whereby AMD processors were labeled with “intel equivalent” MHz numbers such as “2800+” or “3000+” or what have you.

        • ronch
        • 3 years ago

        Apparently AMD has been listening to their marketing eggheads a bit too much. What does marketing say they need to sell stuff? [b<]Clock speed and core count![/b<] And aren't those the main design targets for Bulldozer? Good grief, AMD. Stop listening to Roy Taylor and start listening to your engineers! See what your competition is doing! You CANNOT change the rules! You play in Intel's market SO you play by THEIR rules! When that last article on Zen came out here on TR a few days ago it did give the impression that AMD is now also following Intel's design philosophy, that 2:1 thing. I don't know if they've been following it for a while now though, but if this is the first time they're doing it, just 3 words: IT'S ABOUT TIME!!! (OK, that's like 4 words.)

      • derFunkenstein
      • 3 years ago

      Most of what they [url=https://techreport.com/news/23753/amd-posts-q3-loss-announces-15-layoffs<]told their engineers[/url<] was to [url=https://techreport.com/news/29133/more-cuts-amd-to-reduce-global-workforce-by-5<]clean their desks[/url<]. Remember that Jim Keller went to AMD in 2012, and they've spent eons working on this thing. They've had freedom, but basically ever since Bulldozer, Piledriver, and Excavator, they've been doing this. (and I know Barcelona is way older, and I have no idea what they were doing from 2008 when the Phenom II came out and 2011 when Bulldozer shipped. My guess is they were doing lots and lots of drugs).

    • rutra80
    • 3 years ago

    But:

    CAN

    SHE

    COOK

      • Unknown-Error
      • 3 years ago

      SEXIST!!!

        • rutra80
        • 3 years ago

        Yes! Thank you.

    • seeker010
    • 3 years ago

    that FPU looks kind of weak against Intel chips if it’s really only 1 256bit FMAC…; I guess unless each subunit is itself ported so it can drive up to 4x128bit instructions. I don’t think I ever remember AMD having a better chip than Intel without also having a comparable or better FPU….

      • tipoo
      • 3 years ago

      Only certain top bin Xeons (and Xeon Phi) have AVX-512 so it’s more than moot on the enthusiast desktop side. Zen will have at minimum twice the theoretical FPU output as Bulldozer clock for clock.

        • seeker010
        • 3 years ago

        AVX and AVX2 are both 256bits. Intel can execute up to 2 256bit FMAs per clock right now, without AVX512 (which is a different set of extensions). If AMD has only 1 256bit FMAC that means it’s still half of Intel with AVX instructions.

        Now if you’re saying FMA isn’t exactly a must use for everything, that is fair. I’m just pointing out AMD has never been considered to have a faster processor without also having a FPU that was at least as good as Intel on the basic FP instructions (ADD, MUL now FMA)
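The peak-rate arithmetic behind this comparison can be sketched as follows. The unit counts (two 256-bit FMA pipes for Haswell/Broadwell, one assumed for Zen) come from this thread's speculation, not confirmed specifications.

```python
# Peak double-precision FLOPs per cycle per core. An FMA counts as
# 2 FLOPs per vector lane. Unit counts are the thread's assumptions,
# not confirmed specs.

def peak_flops_per_cycle(fma_units, vector_bits, element_bits=64):
    """Peak DP FLOPs/cycle/core for a given number of FMA pipes."""
    lanes = vector_bits // element_bits
    return fma_units * lanes * 2

intel_core = peak_flops_per_cycle(fma_units=2, vector_bits=256)  # 16
zen_core   = peak_flops_per_cycle(fma_units=1, vector_bits=256)  # 8

# A later reply notes that twice the cores would even out chip-level
# peak rates: 8 Zen cores vs. 4 Intel cores.
print(8 * zen_core, 4 * intel_core)
```

Per core, one 256-bit FMAC would indeed be half of Intel's AVX2 peak; at the chip level, eight such cores would match a quad-core's aggregate peak, which is the point raised below.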

          • Rza79
          • 3 years ago

          True but AMD has double the cores. So the theoretical AVX throughput will be the same as Intel’s quad cores.

      • ronch
      • 3 years ago

      The original K7 beat the Pentium III back then in terms of Integer and FPU.

        • Krogoth
        • 3 years ago

        K7 was a better architecture hands down. It was held back by shoddy Via chipset platform and motherboard vendors fearing the wrath of Intel (shipping Slot A motherboards with blank cardboard boxes).

        K7 really came together by the time Thunderbirds came around and Intel only had the lackluster P4 Willamette. P3 Tualatin did manage to compete but was a limited release, being a beta test for the 130nm process.

          • ronch
          • 3 years ago

          Thunderbird.

    • yd1
    • 3 years ago

    me & amd are done.
    they give misleading information (CFX) to followers of nvidia tech leadership,
    a cheap knockoff that isn’t worth its cheaper price.

    all my GPU’s beside 1 (GeForce3 Ti200) were ATI/amd (i forget the old one but 4870 & 7970)
    current is 7970.

    AMD can go whack themselves for all i care.
    that doesn’t mean nvidia are perfect but blatant misleading lies they don’t spread.
    and they don’t lurk the forums with hidden AMD PR users trying to mess your decision up so you buy their inferior product.

      • ronch
      • 3 years ago

      You should build a bomb shelter too, ya know. And always wear a tinfoil hat.

        • Redocbew
        • 3 years ago

        Don’t forget to give the shelter a periscope so you can watch for contrails.

    • Welch
    • 3 years ago

    I’m hoping for at least a 3.4GHz base and 3.8-4.0GHz turbo. Of course if they want to push it a bit further I’m OK with that too :). Considering these are 8 core parts, I don’t expect them to out-clock Intel’s existing 4 core parts AND beat/match IPC. I’m trying to stay realistic.

    This being such a fresh and immature process, I’m expecting that they will pretty much push their highest end parts to their clock limits and leave little room for OCing. It may be easier to pick up a chip one step down that is capable of clocking like their higher end part to “save money” for those wanting the gamble. Still… can you imagine the odds of getting all 8 cores to play nice at a specific voltage/clock!? Unless someone builds some really intelligent per-core OCing features into their motherboards I’m thinking these are not going to be as simple of an OC as previous 2 and 4 core processors.

    If their performance in IPC and with final clock really takes on Intel’s current offerings, I’m wondering if there will be any real “value proposition” to these chips like everyone hoped, short of a price war.

    Here is to hoping :)!

    • ronch
    • 3 years ago

    Crazy how quickly 5 years can just [url=https://techreport.com/review/21813/amd-fx-8150-bulldozer-processor<]zip by[/url<].

      • tipoo
      • 3 years ago

      “An all-new microarchitecture initiates a new era for AMD”

      *shudder*. Well, I hope the door on that era is very nearly firmly shut.

    • TheJack
    • 3 years ago

    What is interesting in this thread is that chuckula has been under heavy attacks and he is not even fighting back. You guys have caught him by the balls.

    • Klimax
    • 3 years ago

    There’s interesting discussion on RealWorldTech:
    [url<]http://www.realworldtech.com/forum/?threadid=160066&curpostid=160066[/url<]

    • Klimax
    • 3 years ago

    There is one strange thing. AMD still uses 64kb L1i 4-way, while Intel still uses “standard” 32kb L1i 8?-way. AMD’s large L1i dates way back, while Intel’s smaller L1i started with Core. I don’t remember AMD ever benefiting from such massive L1i, so why do they keep it?
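One textbook consideration relevant to this question (general cache arithmetic, not a claim about AMD's actual implementation): in a virtually-indexed, physically-tagged L1, keeping each way no larger than the page size lets the set index fit entirely inside the page offset, which sidesteps virtual-address aliasing. A larger, lower-associativity L1i like 64KB/4-way spills index bits past a 4KB page boundary.

```python
# Sketch of the way-size arithmetic for the two L1i designs mentioned
# above. General cache math only; says nothing about how AMD actually
# handles (or avoids) aliasing.

def way_size_bytes(size_kb, ways):
    """Bytes per way of a set-associative cache."""
    return size_kb * 1024 // ways

PAGE = 4096  # 4 KB pages
amd_way   = way_size_bytes(64, 4)  # 16384 B: index spills past the page offset
intel_way = way_size_bytes(32, 8)  # 4096 B: index fits within the page offset
print(amd_way > PAGE, intel_way > PAGE)
```

A larger L1i can help instruction-fetch-heavy workloads, but as the comment notes, the win has historically been hard to demonstrate.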

    • anotherengineer
    • 3 years ago

    So 40% over bulldozer, I guess is about 40% over PII. So a 40% IPC increase over my quad, and an additional 4 cores and 16 threads over my CPU sounds nice.

    Just wonder how the motherboards/bios and chipsets are going to be?

      • BaronMatrix
      • 3 years ago

      Bristol Ridge not PileDriver… It will end up 100%+ faster than my 8370… It’s almost impossible it WON’T hit 4GHz+…

    • revcrisis
    • 3 years ago

    Here’s a good chart showing increase over Sandy for each Intel generation: [url<]http://images.anandtech.com/doci/9483/Generational%20CPU%20IPC.png[/url<] Skylake didn't entice me to upgrade from Sandy. I'll be on the fence about Kaby Lake. If not Kaby, then Icelake for sure.

      • PBCrunch
      • 3 years ago

      The real upgrades come from decreased power consumption, improved integrated graphics, and features like improved USB3.0 / NVMe / M.2 support (chipset stuff).

      I know “integrated graphics” is like a dirty word in these parts, but it is important to a lot of people.

        • tipoo
        • 3 years ago

        Even just base user experience. OS’s rightly push more to the GPU these days and I think people have warmed over memories of what graphics heavy OSs past XP were like on the horror days of Intel IGPs. Now even the lowest end is fairly smooth on them.

        • Ifalna
        • 3 years ago

        Maybe, but having a 3570K (ivy), nothing of that would make me want to spend 500€+ on new core components.

      • tipoo
      • 3 years ago

      Ivy to Haswell was more impressive than initial reviews gave it credit for. Like that chart shows, if you used Haswell’s enhanced SIMD instructions, it could really trounce older gens in certain workloads.

      That’s not showing increase over Sandy each generation though, that’s showing increase per generational jump. It would be cumulative, Sandy to Ivy to Haswell to Broadwell to Skylake.
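The distinction matters because per-jump gains compound: the total over Sandy Bridge is the product of the per-generation factors, not their sum. The percentages in this sketch are illustrative placeholders, not the chart's actual values.

```python
# Compounding per-generation gains into a cumulative figure.
# Placeholder percentages, not the linked chart's numbers.

def cumulative_gain(per_gen_gains):
    """Compound successive relative gains into one total gain."""
    total = 1.0
    for g in per_gen_gains:
        total *= 1.0 + g
    return total - 1.0

# Hypothetical per-generation IPC gains across four jumps
# (Sandy -> Ivy -> Haswell -> Broadwell -> Skylake):
gains = [0.06, 0.08, 0.05, 0.07]
print(f"Cumulative over the baseline: {cumulative_gain(gains):.1%}")
```

Four modest single-digit jumps compound to noticeably more than their simple sum, which is why "Skylake vs. Sandy" looks better than any individual generation does.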

      • travbrad
      • 3 years ago

      I just hope Zen makes Intel feel enough pressure that they release some Kaby Lake desktop parts with eDRAM/L4. The benefits of that on a part that is already Skylake+ performance would be enough to finally make me upgrade from my 5 year old Sandy Bridge CPU. Or better yet Zen beats Kaby Lake/Skylake outright (I can dream anyway)

        • tipoo
        • 3 years ago

        What work do you do that would benefit from the eDRAM? From what I saw it was mainly only things like fluid body simulation that benefitted. Other things saw marginal increase.

          • travbrad
          • 3 years ago

          Games that don’t make good use of multiple cores/threads mostly. Many tests have shown the 5775C matching or even exceeding the performance of a 6700K in games despite the 5775C being clocked 700mhz lower (and turbo boost that is 500mhz lower). If Kaby Lake has some architectural improvements over Skylake, can reach similar clock speeds, AND had eDRAM it would theoretically be the king of single-threaded performance. Such a CPU would only be maybe 15-20% faster than Skylake, but that would push it over the “threshold” in my head where upgrading from Sandy Bridge is finally worth it.

          In particular ARMA 3, Kerbal Space Program, and Planetside 2. Kerbal Space Program especially can drop to below 20FPS on my OCed Sandy Bridge when building really complex spacecraft, and is 100% CPU limited.

          The 5775C was also similarly impressive compared to higher clocked Skylake/Haswell in video encoding which I do a fair bit of, but not enough to get a “-E” 6+ core CPU and give up some single-threaded performance.

    • smilingcrow
    • 3 years ago

    Considering that AMD are releasing a 32 core server CPU using 4 Zeppelin 8 core modules it seems likely that the design of the 8 core module and the GloFo process is more for low frequency/power than high frequency/power.
    But if you look at the Xeons 8 Cores the nearest competitors are:

    Haswell:
    E5-2628 V3 85W 2.5 – 3GHz, Max Turbo @8 Core = 2.8GHz ~$700 (OEM only)
    E5-2667 V3 135W 3.2 – 3.6GHz, Max Turbo @8 Core = 3.4GHz, $2,057
    Broadwell:
    E5-2620 V4 85W 2.1 – 3GHz, Max Turbo @8 Core = 2.3GHz $417
    E5-2667 V4 135W 3.2 – 3.6GHz, Max Turbo @8 Core = 3.5GHz, $2,057

    Note: Broadwell is underwhelming because the focus is more on 10+ cores where they impress much more.
    So a Zen with a max boost of 3GHz @8 Core at 95W with a seemingly decent IPC would be an amazing comeback. Even at 125W it would be hard to complain if the price is right and turbo for 2 or 4 cores was around 3.5 or more.
    One difference may be that Xeons are quad-channel DDR4, whereas Zen for desktop will presumably be dual-channel, which will help keep power down.
    Those expecting 3.5GHz with all 16 threads under full load are being very optimistic even at 125W.
    Looking at pricing versus a Xeon a Zen 8 core @ 3GHz is probably close to an E5-2630 V4 85W (10 core @ 2.4 Max) which is $667.
    Of course the AMD motherboards should be a lot cheaper than the X99 V3 board.

      • Srsly_Bro
      • 3 years ago

      Nobody buys retail unless they have a business. Compare the price of es E5 V4 xeons. 28 threads @ 2.8ghz for a few hundred makes quick work of the 6950x. I’m watching for Zen es on flea bay in the coming months.

        • smilingcrow
        • 3 years ago

        The amount of Engineering Samples (ES for those not clear what he meant) available is minuscule and is completely irrelevant to the market price for a processor.
        I have seen some great prices on Xeon ES but you need to make sure you pick a stepping that your board supports as some are quite fussy and only officially work with retail steppings whereas many ES are earlier revisions.
        So a bit of a risk if you buy a dual socket board and two ES chips and find out they don’t work.
        There’s also a risk of bugs with early steppings I presume!
        So you are really talking out of your ass bro, seriously.

          • Srsly_Bro
          • 3 years ago

          There is risk. I know plenty of people running DP Xeon processors es and haven’t reported any issues. The risk needs to be acknowledged.

            • smilingcrow
            • 3 years ago

            What are they using them for?

    • albundy
    • 3 years ago

    wait, i thought zen was to be released in october? now they are pushing it half a year forward? i was really hoping to build a new rig this year but was holding off for new releases from both sides.

    • DragonDaddyBear
    • 3 years ago

    The real question is how does this perform against Sandy Bridge? I’m guessing a lot of people are running that generation and could use an upgrade.

      • Firestarter
      • 3 years ago

      if it’s a worthwhile upgrade versus sandy bridge, it’s going to be competing head-on with the best that Intel has to offer.

      • Srsly_Bro
      • 3 years ago

      That includes me. 2700k currently.

    • Srsly_Bro
    • 3 years ago

    Is Chuckula on Intel’s PR Payroll?

      • chuckula
      • 3 years ago

      No, Intel’s PR representatives are almost as desperate for Zen to be good as the usual suspects around here who think Zen has been proven to be a Skylake-E killer:

      [url<]https://twitter.com/FPiednoel/status/766674138236723200[/url<] You see, if there's a viable competitor then Intel keeps any potential anti-trust monkeys off its back.

        • raddude9
        • 3 years ago

        That’s exactly what an intel-paid shill would say!

          • chuckula
          • 3 years ago

          Yeah, that was kind of the point of linking to what a paid Intel shill said on twitter.
          To show what a paid Intel shill says.

            • Meadows
            • 3 years ago

            Cool your tits, he’s just taking the piss. Kind of ironic how you missed that, considering whenever you say something forced and “funny” we’re supposed to get it right away.

            • Srsly_Bro
            • 3 years ago

            Poor Chuck lol

    • ronch
    • 3 years ago

    Just a thought: if Zen doesn’t offer markedly better performance than Skylake or, more realistically, just matches it in some cases but generally trails it by a few points, what would compel most folks to upgrade unless AMD decides to price them a lot lower?

      • Beelzebubba9
      • 3 years ago

      One would hope Zen’s performance is good enough that it can match or beat Skylake in heavily threaded workloads and provide the kind of value proposition Bulldozer tried, and failed, to do.

      That said, it’s really impossible to predict the value proposition this early on. If AMD were able to get Summit Ridge within 20% of Skylake in single-threaded performance and power consumption, but offered twice the cores for about the same price, I can see it being a strong enthusiast option.

        • ronch
        • 3 years ago

          I mean, my point is, after all this time Intel has failed to compel people to upgrade, and if AMD will merely match Intel with Zen, what makes us think these same people would suddenly be compelled to upgrade UNLESS Zen is significantly cheaper or they SPECIFICALLY are waiting for Zen because they’re AMD fans (admittedly I fall in this category)? And in my case, even if I want to get Zen I’m not gonna buy until I really need it, which is the case with most folks: they don’t feel the need to upgrade. And fans who NEEDED better performance from AMD probably already went with Intel anyway.

          • Redocbew
          • 3 years ago

          They’ve failed to convince most of us to upgrade, because we’re aware of how easily a CPU from yesteryear can handle today’s applications. The average person is not.

          AMD doesn’t need to compete against Skylake as much as they need Zen to be good enough to get a few big OEM wins. Even if it’s just in mid-range beige boxes with ok-but-not-great performance that’ll still do them more good than upgrades from enthusiasts.

            • Geonerd
            • 3 years ago

            An OEM like…. Apple?

            • Redocbew
            • 3 years ago

            I was thinking more like Dell or HP, but a deal with Apple would do the job also.

            • Kretschmer
            • 3 years ago

            Given Apple’s obsession with minimal form factors and AMD’s history of excessive TDPs, Apple is a very unlikely OEM.

          • MOSFET
          • 3 years ago

          ronch, you’re thinking for people again. I think a lot of people are clamoring for an AMD upgrade path – because I am also clamoring (trembling even). Oh gosh, now I’m thinking for people.

      • BaronMatrix
      • 3 years ago

      Was there not a test against Broadwell-E in this article…?

        • Klimax
        • 3 years ago

        AMD’s PR. Numbers not relevant till independently verified. Also they limited Intel chip to same frequency as their chip.

        • ronch
        • 3 years ago

        Which showed them matching Intel. And then?

        People have been ignoring Intel then they suddenly wanna buy AMD? Will AMD really price significantly lower? Because if they’re not, why would people suddenly buy Zen unless they prefer to wait for AMD? And just how many people fall into this category?

          • Beelzebubba9
          • 3 years ago

          Well AMD was comparing Summit Ridge to Broadwell-E 8C, which sells for $1100 IIRC, so depending on final clock speeds and performance, they have a lot of margin in the market.

          For the sake of this discussion, let's assume AMD doesn't have a history of failed execution, and can get Summit Ridge out the door at reasonable clock speeds (3.2 GHz base / 3.6 GHz turbo or so) and can deliver -25%/+5% of Broadwell's IPC in real world workloads. At that performance level AMD can credibly claim that Summit Ridge provides value closer to Broadwell-E than Kaby Lake or Skylake, and make a strong argument that it is worth a substantial price premium over AMD's current lineup if not over Intel's. I'm not saying AMD can reasonably charge $1000+ for Summit Ridge, but $400 doesn't seem out of the question considering what Zen [i<]could[/i<] be. Remember AMD, like Intel, is really competing with CPUs from five years ago and mostly needs to justify an upgrade from Sandy Bridge/Bulldozer-era hardware rather than supplant Skylake and its ilk. AMD's ability to execute fills me with deep pessimism, but maybe, just maybe, they'll prove us wrong and get at least one launch right.

            • ronch
            • 3 years ago

            Even if we can get Broadwell-E performance for $400 would most people buy? Many say they just don’t need anything faster. Or are they just being a bunch of sour grapes?

            • Beelzebubba9
            • 3 years ago

            Tech Report is an enthusiast site; I’m pretty sure anyone reading this thread does not represent a mainstream user. 🙂

            If Summit Ridge is as good as it [i<]could be[/i<] and lines up with the scenario I outlined, I think a good portion of these forums - myself included - would strongly consider replacing their 3+ year old CPUs with one. To be quite clear, the chances of AMD hitting those performance targets are pretty low, but pessimism is boring so I choose hope.

            • ronch
            • 3 years ago

            Um, I’ve been reading those “I don’t need anything faster” comments HERE on TR. 🙂 Even enthusiasts, apparently, think their 2600K or 2500K is good enough for a while. Like I said, if they suddenly jump the gun on a $400 Broadwell-E equivalent, what does it suggest when they say they don’t need anything faster? Is it really that or they just don’t want to admit they don’t want to pay money for more performance, even if they need it? Sounds like sour grapes to me, if that’s the case.

            • Beelzebubba9
            • 3 years ago

            There are people on these forums who brag about how their Core 2s are good enough, and yet I feel like they’re the minority among enthusiasts considering the relatively brisk sales of enthusiast parts. The QQ’ing of that portion of the market certainly didn’t stop both Skylake-K and the 14nm GPUs from being in short supply long after their launches.

            Plus it’s easy to fall into the trap of over-emphasizing the opinions of a very vocal minority. Like how Chuckula tilts at AMD fanboy windmills constantly because of a handful of users who have too many feelings about PC hardware brands. No one wants to be like that part of Chucky.

            • CaptTomato
            • 3 years ago

            I urled a video last week showing the dramatic gains the right CPU/RAM combo can have even at 1080p, so the idea that old CPU’s are good enough is only valid if you’re still using a weak GPU, but 1070/1080/1080ti/titan2016 will be bottlenecked without the right CPU/ram combo, but one doesn’t need 6/8 cores, it’s cores+HT.

            • synthtel2
            • 3 years ago

            That’s not what we arrived at. If you think I’m wrong, you should give some logic to back it up, not just keep insisting you’re right.

            Here’s a [url=https://techreport.com/news/30514/rumor-amd-zen-engineering-samples-leaked-and-benchmarked?post=995902#995902<]link[/url<] for the record. tl;dr: [quote<]the idea that old CPU's are good enough is only valid if you're still using a weak GPU[/quote<] That's often true, but there are plenty of people for whom 2500k + 1080 is fine. Your generalization is a generalization, but you're treating it like pure truth, which it isn't. Just add a fudge word or two and you'd be fine.

            • CaptTomato
            • 3 years ago

            Did you ignore the minimum FPS?…..the right CPU/RAM combo can have a huge effect, as such, it behooves one to look carefully and consider spending the extra dosh on i7 CPU and fast ram combo.

            I’ll be doing this on my next build and will know that my GPU won’t be bottlenecked by my system.

            • synthtel2
            • 3 years ago

            In my last post, I forgot to warn you that you used up all your credibility last time. Now, if something looks like trolling, I’m going to call it trolling. If that’s how things are going to be, I’ll enjoy picking apart your techniques. 😉 If you bring anything of actual substance, I’ll give it the consideration it deserves, I’m just not going to interpret things charitably anymore.

            Yes, I remembered the “minimum FPS”, and your use of the term marks you as not actually being familiar with TR.

            [quote<]the right CPU/RAM combo can have a huge effect, as such, it behooves one to look carefully and consider spending the extra dosh on i7 CPU and fast ram combo.[/quote<] We both know I've already agreed with this stuff numerous times, but you're acting like I haven't. IOW, your trollface needs work. [quote<]I'll be doing this on my next build and will know that my GPU won't be bottlenecked by my system.[/quote<] Aside from the logical problems with this statement (obvious if you've been reading what I've been writing), it's an unsubtle attempt to make more GPU-heavy/CPU-light systems sound inferior and indirectly put down their proponents. IOW, your trollface needs work.

            • CaptTomato
            • 3 years ago

            You’re not listening dummy, it’s been proven that non HT CPU can bottleneck GPU’s, so if one is going to buy a powerful GPU{1070 and higher} it’s important to have the right CPU/ram combo to ensure maximum FPS.

            Of course people can still buy i5’s and 460/470/1060 GPU’s, but one must be aware that to maximize the full power of 1070/1080/Titan and presumably strong XF/SLI systems, you must have the right i7/ram combo.

            • synthtel2
            • 3 years ago

            I’m listening, but I’m only going to give your words as much consideration as they deserve, and that’s at an all-time low as of this post – you’re practically in spambot territory now.

            • CaptTomato
            • 3 years ago

            Your retardo is mildly amusing, but you’re poor value at a tech site.

            • synthtel2
            • 3 years ago

            Good job describing yourself. [+11 irony]

            Also, you’re still trolling, and you’re still bad at it.

            • CaptTomato
            • 3 years ago

            You can’t troll if you’re telling the TRUTH…and I’m simply informing people that GPU bottlenecking is a factor in certain circumstances.

            • synthtel2
            • 3 years ago

            It’s entirely possible to troll with truth, and truth isn’t all you’re telling. You’re taking a few kernels of truth, extrapolating well beyond what your knowledge will actually support, and ending up with a fair bit of wrong and/or misleading stuff.

            Whether the excessive extrapolation was originally purposeful or accidental, I don’t know. If purposeful, you’re either 100% troll or worse and should leave. If accidental, you’re unfamiliar with the limits of your own knowledge and acting in a way that won’t improve your own knowledge, yet take it upon yourself to spread what bits of it you have found. If you really want to help people via this sort of thing, you should first become more familiar with your own mind and adjust your habits for better learning (and then work on charisma/politeness/positivity a bit).

            Have some [url=https://nkanaev.github.io/zen101/en/001/<]Zen[/url<].

            • CaptTomato
            • 3 years ago

            Look, shitspeaker, YOU made a goose of yourself by ASS-uming I didn’t know this or that. However, what was important to me and anyone interested was the fact that CPU/RAM combos do matter, and perhaps have always mattered, but it’s more prominent now due to the existence of the big high-Hz LCDs….

            It’s also possible and probable that XF/SLI configs can be easily bottlenecked, especially the MINIMUM frames, so that’s 2 counts where this info is valuable. Yet you seem to be some hero for 30-40fps “gamers” and their shitty gaming experience and want me to acknowledge them…..but to hell with them, LOL.

            As I said before, the very best LCDs are almost always the high-Hz variety, and given their expense, it’s not unreasonable to assume that a high-Hz gamer will want strong component matching to maximize his gaming experience, even with the extra bill.

            • synthtel2
            • 3 years ago

            100% troll it is then. If you actually cared, you’d not be so dismissive of people who enjoy their games differently from yourself as to make recommendations that are actively bad for them without specifying who those recommendations are for.

            Well then, have you noticed what else has been going on in this convo? I know of at least two other things.

            • Redocbew
            • 3 years ago

            Someone who has chosen to call themselves captain tomato is calling someone else “dummy”. Sometimes the internet is just awesome.

            • CaptTomato
            • 3 years ago

            Don’t underestimate this Tomato

      • Chrispy_
      • 3 years ago

      Are you forgetting that the Broadwell-E 10-core comes in at an eye-watering $1700, and the 8-core 6900K is almost $1100?

      [quote<]unless AMD decides to price them a lot lower.[/quote<]

      Yes. Intel's prices on the S2011-3 platform are effectively an insult to consumers, one they can get away with only because there's no competition from AMD to police them.

      [b<]SEVENTEEN. HUNDRED. DOLLARS.[/b<]

        • Klimax
        • 3 years ago

        Depending on performance, I don’t expect AMD to go much lower either.

          • Firestarter
          • 3 years ago

          they’ve shown before that they’ll charge a premium if they have the best product

            • Klimax
            • 3 years ago

            I remember that time too well.

          • ronch
          • 3 years ago

          Yup. How sure are we that AMD will price much lower this time? Lisa did say that they no longer want to be seen as the cheaper alternative.

            • Spunjji
            • 3 years ago

            The flip side of that is that – for the time being at least – they will need to buy market share.

        • chuckula
        • 3 years ago

        Yeah, that’s also not the part that AMD used in its comparison.

        Instead they selected the 6900K that’s priced at about the same level (or less considering 10+ years of inflation, and massively less on a $/core basis) as the FX-62 when it was new.

        [url<]https://techreport.com/review/10073/amd-socket-am2-processors/2[/url<]
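        The inflation comparison above is simple arithmetic; here is a minimal sketch of it. The FX-62 and 6900K prices come from this thread, but the ~19% cumulative 2006-to-2016 inflation factor is an assumption for illustration, not an official CPI figure.

```python
# Sketch: compare the FX-62's 2006 launch price with the i7-6900K's
# 2016 price in inflation-adjusted dollars and on a $/core basis.
# The inflation factor below is an illustrative assumption.

FX62_PRICE_2006 = 1032.0   # 2-core FX-62 launch price (USD, 2006), per thread
I7_6900K_PRICE = 1089.0    # 8-core i7-6900K price (USD, 2016)
CPI_2006_TO_2016 = 1.19    # assumed ~19% cumulative inflation, 2006 -> 2016

fx62_in_2016_dollars = FX62_PRICE_2006 * CPI_2006_TO_2016

print(f"FX-62 in 2016 dollars: ${fx62_in_2016_dollars:.0f}")
print(f"$/core: FX-62 ${FX62_PRICE_2006 / 2:.0f} "
      f"vs 6900K ${I7_6900K_PRICE / 8:.0f}")
```

        Under these assumed numbers the FX-62 comes out more expensive in real terms, and several times more expensive per core, which is the point chuckula is making.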

          • BurntMyBacon
          • 3 years ago

          [quote<]they selected the 6900K that's priced at about the same level (or less considering 10+ years of inflation, and massively less on a $/core basis) as the FX-62 when it was new.[/quote<]

          Your $/core comparison is meaningless, as you are comparing processors with over a decade of age difference and exactly zero concurrent time on the market. The FX-62 had just as many cores as its competition. Large numbers of cores did not exist in the x86 consumer market back then, as process technology had not advanced sufficiently to provide the necessary transistors. Your comment had a lot more merit without such a baseless comparison. I'm going to read it as:

          [quote<]they selected the 6900K that's priced at about the same level (or less considering 10+ years of inflation) as the FX-62 when it was new.[/quote<]

          That's a good point, but all is not equal in this comparison. The FX-62 was the top consumer processor in the AMD stack when it was new. Intel's top processor at the time (Pentium XE 965) was priced at a very similar $999. Side note: it couldn't keep up with the X2-5000+ ($696), much less the FX-62 ($1032).

          The 6900K is not Intel's top consumer processor; that honor goes to the 6950X. As such, it would be more appropriate to compare the 6950X to the FX-62 and the 6900K to the X2-5000+. Of course, Athlon 64 prices at the time were suffering a bit from lack of competition, but that is even more true of the current Core i7 lineup. Furthermore, the 6950X does provide 25% more cores than its competition (Haswell-E), which can arguably justify some of the price premium. Of course, an argument can also be made that once a process matures past the early stages, the cost to fab a chip more or less follows die size, so consumers are still getting a raw deal.

          Thanks for the link. I used it to formulate my opinion:

          [quote<]https://techreport.com/review/10073/amd-socket-am2-processors/2[/quote<]

        • Mr Bill
        • 3 years ago

        Very succinct argument +3.

      • smilingcrow
      • 3 years ago

      1. A cheap octo-core would be a big selling point and it would be on a mainstream platform so cheaper mobos. But it would have to hit a reasonable price point otherwise it would be a low volume halo product.
      2. If the fastest Zen quad core matches a Skylake i7-6700K in all respects it still lacks a GPU so would have to sell at a lower price.
      3. AMD have to balance the price differential between the quads and the octos. The octos should be small chips, so there's no reason except for supply issues why they couldn't price the octos to match the i7-6700K and then the smaller quad-core dies against the i3 and i5. If they have the production volume and the product, they could really go out and attack Intel aggressively.

      • Kretschmer
      • 3 years ago

      There would be no intrinsic reason to upgrade. The question answers itself.

      • tipoo
      • 3 years ago

      There’s a lot of space between Intel margins and AMD margins. I think if it comes out under Skylake (which I think it probably will), they could still charge better than their current margins but significantly lower than Intel. Could be a win-win if, say, they match Haswell IPC.

      • srg86
      • 3 years ago

    For me personally, nothing. AMD would have to make something substantially faster than what I can buy from Intel. The situations where that happened were:

      * Original PIII vs original Athlon
      * Athlon 64 vs Prescott.

      The reason for this is mainly the chipsets. I’ve always had issues with chipsets for AMD based systems where Intel chipsets, though far from perfect, are generally reliable. I’ve also had an AMD memory controller have faults in the past (AMD 760 chipset for Athlon).

      It would take an Intel reliability level chipset and/or much higher performance to make me want to switch back to AMD (and I used AMD CPUs for the majority of my time owning and building PCs).

      I will balance that though with being very impressed with the look of Zen, it feels to me like they are heading in the correct direction.

      • OneShotOneKill
      • 3 years ago

      Lower margins = death. They can’t price them too low.

    They will try the over-hype strategy they pulled off with the RX 480. Some fools will buy; others, not such fools, will sit on the side, admire the fallout on prices, and reap the fruits of their patience.

        • tipoo
        • 3 years ago

        There’s a whole lot of space between their current margins and Intel margins. Higher than AMD but lower than Intel margins could be a win win.

    • OneShotOneKill
    • 3 years ago

    And the HYPE begins… Would love to believe they will compete with Intel, but will wait for an actual release.

    If they don’t solve the efficiency problem, they might be able to compete for a shrinking desktop market, but they will be left out of the bigger server sphere.

      • Khali
      • 3 years ago

      Yup, when it comes to AMD, I advise waiting until the product is released and third-party reviews are available. AMD overdoes the marketing hype and tends to put out test results that are skewed in their favor, with oddball settings to get the results they want. I don’t know why they do it, because it turns around and bites them on the butt every time.

      • BaronMatrix
      • 3 years ago

      Actual test results aren’t hype…

    • chuckula
    • 3 years ago

    Anandtech has an interesting article showing some server motherboards here: [url<]http://www.anandtech.com/show/10578/amd-zen-microarchitecture-dual-schedulers-micro-op-cache-memory-hierarchy-revealed[/url<]

    The layout of the motherboards strongly implies that the 32-core server chips are actually 4-chip packages of 8-core Zeppelin parts. While unconfirmed, the rumors are that AMD has codenamed each die Page, Plant, Jones, and Bonham, with special handling instructions to make sure the Bonham die is not placed face-down.

    The other interesting -- although maybe premature -- note is that there does not appear to be a traditional southbridge anywhere on the server motherboard. Of course, this is a preproduction board, but there is speculation that the southbridge is actually on-package. What makes that interesting is that this is a two-socket server motherboard, which implies that there would be two southbridges present on the board (even if one of them might not actually be enabled).

      • Prion
      • 3 years ago

      Are they still using HyperTransport for chip-to-chip communication on these multi-socket boards? Or, if the 32-core chips are MCM, for communicating between the modules on package?

        • chuckula
        • 3 years ago

        I don’t know if Hypertransport is still being used in any meaningful way.
        For communication between chips on an MCM, you can implement communication using hypertransport or another proprietary communication scheme since the signals never need to leave the package. It will be interesting to see if AMD goes for the expense of a silicon interposer or if they just mount the chips to a more traditional PCB.

        • BaronMatrix
        • 3 years ago

        No, they have moved to GMI (Global Memory Interconnect), because they need cache coherency between CPUs and GPUs…

      • End User
      • 3 years ago

      [quote<]with special handling instructions to make sure the Bonham die is not placed face-down.[/quote<] I'm ashamed to say I laughed (briefly).

        • TheJack
        • 3 years ago

        Oh, dear

        • ronch
        • 3 years ago

        Perhaps it should face north?

    • joselillo_25
    • 3 years ago

    Up 5% in the markets now because of the Zen news.

    • mkk
    • 3 years ago

    Just north of 3GHz would be a fine clock for the 8-core 95W part. I just hope that their quest for power efficiency doesn’t end up putting a steep limit on overclocking as well, or that they’re too scared to get a 125W power edition out later if that’s what it takes.

    • techguy
    • 3 years ago

    This is encouraging. I’ll be happy to buy my first AMD processor in over a decade when Zen comes out. IF it’s close in performance to an Intel 8-core at a significantly lower price. I don’t feel right buying $1000 processors for desktop use. Final clocks matter as well of course. If 8-core parts “only” clock to 3.2GHz or so @ stock and can’t be overclocked to 4GHz+ then I’ll keep using my 5820k @ 4.4GHz until someone releases a reasonably-priced replacement.

    • TheJack
    • 3 years ago

    Suckers leave, others answer the question in post number 185.

    • TheJack
    • 3 years ago

    IF these dramatic improvements hold up, then a good question would be: how come?

      • deruberhanyok
      • 3 years ago

      Er… because AMD engineered a new processor architecture that has a much more efficient, capable design than their last one?

      Is this a serious question?

        • TheJack
        • 3 years ago

        YES, it is!
        How come they tried in vain for so long without success, and now all of a sudden there’s a 40% bump in efficiency?

          • NTMBK
          • 3 years ago

          - Because after Bulldozer, they had to go back to the drawing board and come up with a completely new design, and CPU design takes a [i<]long time[/i<].
          - Because they haven't had access to improved transistors since 2011. (28nm was only an improvement in cost, not performance or efficiency.)
          - Because AMD is strapped for cash right now, so it takes them a long time to make this sort of big change.

            • TheJack
            • 3 years ago

            I can live with that answer.

          • Meadows
          • 3 years ago

          Far from it, my friend. You don’t develop a processor “all of a sudden”. These things take several years from planning to actual sales (a rule of thumb I heard once is “about 5 years”), which means most likely AMD started planning this CPU very shortly after having released Bulldozer. Even so, they’re slightly late with delivery but there was probably no other way of doing a complete re-design any faster.

          You’re thinking of the incremental updates and improvements to Bulldozer – now Excavator – in the intervening years. I’d wager those happened in parallel with Zen development, since incremental updates don’t need years of prerequisite work anymore.

          • muxr
          • 3 years ago

          It takes a long time to design a new architecture. Intel’s current CPUs are iterations on an architecture which started in the mid-90s (Pentium Pro).

          AMD failed with Bulldozer and for the last 5 years they’ve been stuck on an inferior architecture. They’ve been working on Zen for the last 4 years.

            • jackbomb
            • 3 years ago

            Wasn’t the last P6-based design Core Duo?
            I always thought Core 2 Duo was a completely new architecture. It was 64-bit, had much faster FP/SSE, a wider design, etc.

            Other than x86 and OoOE, what does any Sandy Bridge or newer processor have in common with Pentium Pro? Just curious.

            • smilingcrow
            • 3 years ago

            I thought Core 2 Duo was a relatively mild reworking of Core Duo with x64 being maybe the most significant addition?
            That’s why they could release it so soon as it was an evolution of the Core Duo mobile chip that the Israeli team developed.

            • PBCrunch
            • 3 years ago

            It isn’t like Intel sat on its ass that whole time. They came out with a completely new architecture as well: Netburst.

            Netburst was terrible, and it took Intel a while to come out with something new. Their new idea was to dust off Pentium Pro, put two cores on die, give it lots of cache, and come up with extremely efficient memory controllers in the system chipset. The first Netburst chips were released in November 2000, and the first Core 2 chips were released in July 2006. It took Intel over five years to come out with an architecture to replace Netburst, and it was something they basically had in storage the whole time.

            Pentium Pro was never really gone; the mobile division had been selling it as Pentium M and Core Solo/Duo for a while.

            Actually, Intel had another crappy architecture called Itanium during the Netburst era. Intel tried and tried to get datacenter customers on board, but it never worked out. I’m not terribly well versed on what was good or bad about Itanium, but Intel spent a lot of money on it and it never caught on.

            • Master Kenobi
            • 3 years ago

            Itanium was an EPIC (VLIW-style) processor, not x86. As a result, Intel was competing in the same market with the well-known and established brands of Sun SPARC (Solaris) and IBM POWER, which more or less ruled supreme during that timeframe in the datacenter. It’s been a long, slow migration from RISC to x86-64 processors in servers, but we are finally there, hence why Xeon is so popular in racks these days.

    • travbrad
    • 3 years ago

    It really seems like it will all come down to what clock speeds it can hit. It looks like a nice boost to IPC, but if it can’t clock as high as Bulldozer or Intel’s stuff, then they will lose some of that extra performance.

    Power consumption is important for laptops and OEM desktops too if they want to get their CPUs into a lot of PCs. Less important for enthusiasts, though.
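    The clocks-vs-IPC tradeoff travbrad describes is just multiplication; here is a minimal sketch of it. The +40% IPC figure is the claim discussed elsewhere in this thread, while the clock speeds below are purely hypothetical placeholders.

```python
# Sketch: delivered single-thread performance scales roughly as IPC x clock.
# Numbers are illustrative: an assumed +40% IPC over the old core, with
# hypothetical clock speeds for each part.

def relative_perf(ipc, clock_ghz):
    """Unitless single-thread performance estimate (IPC x frequency)."""
    return ipc * clock_ghz

old_core = relative_perf(ipc=1.00, clock_ghz=4.2)  # baseline IPC, high clock
zen_low  = relative_perf(ipc=1.40, clock_ghz=3.0)  # +40% IPC, low clock
zen_high = relative_perf(ipc=1.40, clock_ghz=3.5)  # +40% IPC, mid clock

# At 3.0 GHz the IPC gain is fully eaten by the clock deficit...
print(zen_low / old_core)    # ~1.0, no net gain
# ...but a few hundred MHz changes the picture.
print(zen_high / old_core)   # ~1.17
```

    The point of the sketch: with these assumed numbers, a big IPC gain can be entirely cancelled by a clock deficit, which is exactly why final clocks matter so much.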

      • f0d
      • 3 years ago

      If it doesn’t have decent clocks at release, I hope it can at least overclock well; otherwise, as you said, its performance will be low at low clocks.

    • Fonbu
    • 3 years ago

    I think the AMD engineers practice ZEN. Group Hug 🙂

    • maroon1
    • 3 years ago

    I do not trust any test by AMD

    I’ll wait for independent, unbiased reviews. People who blindly trust AMD will always get disappointed (and history proves it)

      • Srsly_Bro
      • 3 years ago

      Yes, it is true things have happened in the past and because of that, things will always be like that and there is zero chance things could be any different because the past always and forever dictates everything that happens in the future.

      u srs bro?

      p.s. i hope youre not

      I have to give an example to further lol at your lunacy.

      The last three coins i flipped were heads, therefore the next one is going to be heads! HISTORY PROVES IT!!@!#@#$#$@#%$@#^*^

    • RdVi
    • 3 years ago

    If they can clock it reasonably high (3.5GHz boost at least) and performance is this good across the board, then I think the prices a lot of people are hoping for will not happen.

    Hopefully pricing is at least much more reasonable than Intel, but AMD’s 8 cores will likely cost more than a 4 core i7. That would still be a great deal, but I think a lot of people are expecting i5 prices for 8 cores like the FX lineup.

      • Theolendras
      • 3 years ago

      I would expect around i7-family pricing, if it’s somewhat in the same ballpark in efficiency. They have to regain mindshare, marketshare, mobo partner trust, and enterprise influence that is completely gone, and their roadmap indicates a challenge for upgradability, which has been an ace card for AMD in the past.

    • wingless
    • 3 years ago

    David Kanter must be feeling tingly in his nether regions. I can’t wait to read his assessment. Zen doesn’t seem like it’ll be a steaming pile of poo like the Phenom was 9 years ago. I suppose I’ll wait to refresh my aging i7-2600K system and see what Summit Ridge has to offer.

    Today is a good day for PC enthusiasts!

      • tipoo
      • 3 years ago

      The podcast with him should be called Kanter’s Nethers

    • USAFTW
    • 3 years ago

    I’m looking forward to Zen and though the hype train is afoot, I’m also cautiously optimistic.
    Since the only CPU-limited workload for me is gaming, I want Zen in either 4c/8t or 8c/16t SKUs to be:
    1. ~Haswell-level performance, Skylake/Kabylake i7s are nice but not a necessity, at least for me.
    2. Relatively power-efficient, around the same as high-ish end Phenom IIs in their day, say 95-125W.
    3. Decent chipsets with no major faults in their USB implementation, etc.
    If AMD delivers on these,
    [url<]https://www.youtube.com/watch?v=Bj0ui5ESTGA[/url<]

    • cldmstrsn
    • 3 years ago

    I really hope Zen works out. Would love to buy a Zen processor for $249-299 on the new AM4 platform when I upgrade next.

    • TheMonkeyKing
    • 3 years ago

    While I am optimistic and hopeful that AMD can produce a real challenge again in the CPU market…

    I will reserve this spot for an eventual disappointment.

    • puppetworx
    • 3 years ago

    This is encouraging, now for the love of all that is competition, please deliver.

    • Unknown-Error
    • 3 years ago

    So they have reached their IPC target. Well, for once they seem to have done what they promised, which is admittedly excellent news. But the IPC gains are only part of the story. I am hearing that the max boost clock will not reach 3.5 GHz. Regardless of 4C/8T or 8C/16T, the best-case scenario is going to be 3.0 GHz base, 3.4 GHz boost, a limitation of the GlobalFoundries density-optimized process.

    I am wondering, about the Zen vs. BDW-E demo: while down-clocking the BDW-E to 3.0 GHz IS clearly mentioned, did they OVERCLOCK the Zen sample from 2.8 GHz to 3.0 GHz, or is it a base clock of 3.0 GHz?

      • mesyn191
      • 3 years ago

      The best info we have suggests ~4GHz peak clocks: [url<]http://dresdenboy.blogspot.com/[/url<]

      Where are you seeing 3.5GHz? Given that AMD was able to get over 4GHz with TSMC's 32nm, it'd be very strange if they weren't able to get close to 4GHz with GF's 14nm. As near as we can tell so far, the issue with GF's 14nm process is that it has high leakage, not that it can't reach high clocks.

      FWIW, the 3GHz 8C/16T Zen that AMD demo'd was rumored to be a 95W TDP part.

        • Unknown-Error
        • 3 years ago

        TSMC’s 32nm???? I assume that was a typo?

        Bulldozer/Piledriver could clock so high because of the 32 nm SOI process at GF. The same doesn’t apply to the new 14 nm FinFET. In fact, clock speed has been going down ever since GF dumped SOI. The only time they came close was the 28 nm A10-7890K, but that 28 nm process is very mature, unlike GF’s 14 nm process. Maybe in mid-2017 AMD might be able to release 4.0+ GHz Zen SKUs.

          • mesyn191
          • 3 years ago

          Whoops my bad! Yes you’re right it was GF’s 32nm and not TSMC.

          Since GF’s 14nm process is supposedly a near exact copy of Samsung’s I don’t think we’ll have long to wait for process maturity.

            • smilingcrow
            • 3 years ago

            Does Samsung fab any high power parts at 14nm?

            • mesyn191
            • 3 years ago

            Not that I know of. They tend to focus on stuff like smartphone SoCs, so most of their stuff is low-power oriented and tends to have sub-3GHz clock speeds.

            I don’t think there is any reason to believe AMD would be power limited here, though, given how much power an RX 480 can pull. The real issue will be what the design can do and at what TDP it’ll do it.

            • smilingcrow
            • 3 years ago

            Power Limited! Not sure what that means in this context!

            • mesyn191
            • 3 years ago

            I thought you were talking about electricity use when you were talking about ‘high power’.

            If you mean something else by ‘high power’ you should probably be more clear instead of using euphemisms here.

            • smilingcrow
            • 3 years ago

            Okay, let me try again.
            So are you saying that Samsung has a (relatively) high power process partly suitable for desktop/server CPUs but that it might be power limited in the sense that it is too limited in the number of watts it can handle? I.e. it can’t handle as high as the 125 – 190W+ that Intel and IBM are using.

            GloFo has taken over IBM’s chip manufacturing facilities and has an exclusive 10-year production contract for IBM chips. Considering that POWER8 CPUs have a TDP of 190W and possibly higher, it’s safe to say they have experience with high-TDP chips, although on a different process node.

            • mesyn191
            • 3 years ago

            I’m saying there are no indications that power is a problem for either GF’s or Samsung’s 14nm process.

            If GF/AMD can put out 230mm² RX 480 dies that use 150W without issue, I have no clue how you can even believe this is a possible issue at all. If it was, they couldn’t do that. They just flat out couldn’t. You certainly wouldn’t be able to get them to tolerate the 200W+ that they’ll use when overclocked, either. RX 480s would be dying left and right, regardless of the degree of overvolting, if that was true.

            Now, Samsung doesn’t seem to try to compete in any markets where high-power parts are needed. They’re focused on producing phone SoCs and memory on their cutting-edge processes. That still isn’t an indication that high-power-usage designs are an issue or impossible for their 14nm process, though. That is just an indication of what market they’re targeting.

            Basically, you have to show some evidence that somehow proves GF’s or Samsung’s 14nm process has some sort of problem with high-power-usage designs. Given that there are no logical reasons to believe this is a problem, and the fact that AMD is using GF to produce high-power-usage parts on this process, I don’t see how you can do that.

            Real world buyable parts as evidence beats unfounded conjecture every time.

            • smilingcrow
            • 3 years ago

            I never suggested it would be limited by wattage! You brought up the topic for some reason which seemed odd so I was trying to understand why in case I’d missed something. Anyway, we both agree that it’s seemingly a non-issue.

            “Since GF’s 14nm process is supposedly a near exact copy of Samsung’s I don’t think we’ll have long to wait for process maturity.”

            Now that seems like pure speculation, as Samsung don’t appear to be fabricating high-power chips, so there’s no reason to assume the process is close to maturity.

            • mesyn191
            • 3 years ago

            Process maturity has nothing to do with power usage of a given design.

            Process maturity comes from tuning the tooling to get better yields at better clock speeds.

            By all accounts GF was able to get a near exact copy of Samsung’s 14nm process which had already been in production in Samsung’s fabs for a while before AMD got any parts out. Volume production for Samsung’s 14nm process began in 1H 2015 at their fabs according to them.

            • smilingcrow
            • 3 years ago

            You are still going on about power usage for some reason!

            If Samsung have no high power parts to fab what are they using to fine tune the high power process to ramp yields and clock speeds? Surely yields/clocks can only really be tested on a real world part?
            That’s why I was wondering what you are saying is possible.

            Volume production in 1H/15 was for mobile SoCs on a different process surely?
            I’ve been led to believe that you shouldn’t extrapolate too much between a low power mobile SoC process and a high power one for CPUs just because it’s the same node.

            • mesyn191
            • 3 years ago

            Because you clearly still think it’s an issue somehow, despite there being no indication that it is one.

            You don’t need a shippable part to check for viability of a given design, be it high power usage, high temps, or large die size, or some other feature that you suspect might make it difficult to fab. You run test wafers through the process and then check them every step of the way to see the results and determine what needs to change in the design or fab process to produce the part at a given yield.

            The exact details of how and what they’re testing aren’t public from any fab but test wafers of some sort are typical.

            The process itself doesn’t “care” about the power usage of a given design that is being fabbed. There will be limits of course and there are specialty processes (which don’t really factor in here at all, AMD isn’t doing crazy stuff like on die optic buses here) too but there is no reason to believe power is one of them for Samsung’s 14nm process.

            Again we already have high power usage shipping products from GF on this process!! What more proof could you want?! You simply can’t claim there is an issue here with no evidence when that is true.

            • smilingcrow
            • 3 years ago

            Try reading back this thread and telling me where I said there is a power issue? You seem to be confusing me with someone else! I was asking why ‘you’ brought up the power issue as there is zero info on Zen in that regard AFAIK so it’s all idle speculation.

            Some might extrapolate from the RX 4xx series that the GloFo process is not great for CPUs but that’s a stretch as no data to back that up.
            One thing we can say for sure is that AMD have high enough yields to release the RX 4xx series.
            The power efficiency is poor versus Nvidia but that seems to be more of a design issue.
            Clock speeds are low versus Nvidia but that seems also to be more of a design issue.
            The process may be limiting the GPUs as well but that would be speculation.

            When it comes to yields and clock speeds though the actual design of the CPU will certainly have an influence. So I don’t think you can just say that because a generic test wafer is looking good that a production run of a given design will automatically go well.

            • synthtel2
            • 3 years ago

            I think y’all are confusing “high-power” as in chips that draw a lot of power with “high-power” as in a process with coarser metal layers. GPUs can draw tons of power as in watts, but don’t have much need for HP processes.

            • smilingcrow
            • 3 years ago

            You aren’t the first to make erroneous deductions on this thread, it seems to be full of them.

            • BaronMatrix
            • 3 years ago

            High power is really only for server and workstation… Intel calls its processes GP and LP… GPUs are much more power hungry than CPUs, and Polaris shows the process is good for high-PERFORMANCE chips…

            • Unknown-Error
            • 3 years ago

            Again, that “14 nm” is just a label; this is not like Intel’s 14-nm process in many respects. A 14-nm high-performance node can easily reach 4.0 GHz; it can handle the currents. Intel’s 14-nm process for low-power products is different from the one for high-performance products like the i7-6900K. Samsung’s “14-nm” is mainly used for SoCs like Exynos, and the electrical specs are not the same. What is the highest clock speed Samsung/GF have achieved, even on a test chip?

            The last major high-performance node from GF was 32-nm SOI. One piece of good news for GF is the acquisition of IBM Microelectronics: IBM makes very high-performance SOI nodes, and its CPUs can reach insane speeds. I am not sure about the exact figures, but the POWER series can reach 5.0 GHz, and the z Series can reach 5.2 GHz.

            But there is a reason why “Global Foundries” is referred to as “Global [b<]Floundries[/b<]". So, despite the acquisition of IBM Microelectronics, I am quite skeptical about them turning that tech into something useful.

            • mesyn191
            • 3 years ago

            You’re being a tad pedantic here about an issue I didn’t bring up and that doesn’t even really matter. Everyone, including Intel, has been playing marketing games with their process names since at least the 32nm era.

            GF, TSMC, or Intel could call their respective processes 60000000nm and so long as they hit their target clocks at target yields no one will really care that much.

            I have no idea where all the worry is coming from about GF’s 14nm process not being able to handle high currents. The RX 480 shows this isn’t an issue at all. Heck, the RX 480 at stock clocks will use considerably more power than almost any Zen chip, and will do it with consistently high temps. That is a real-world shipping part, too, that you can buy in stores right now.

            Now it doesn’t overclock that well but GCN never overclocked well no matter the iteration.

            • synthtel2
            • 3 years ago

            The potential issue is more with current density than absolute current. GPUs draw a lot of power, but it’s nicely spread out across the chip. Modern CPU cores are pretty tiny, and the bits that need lots of power really need tons of the stuff. Despite the relatively low TDPs, it’s a lot easier to run into clock walls that way.

            • mesyn191
            • 3 years ago

            Modern CPU chips generally aren’t all that small though.

            The cores are small but not the chips, due to the on-die memory controllers, multiple cores, large caches, and buses.

            You’re still talking about a chip of at least around 200 mm² for an 8C/16T Zen, and it’ll be using much less power too. TDP for Zen is still rumored to be around 95W for the desktop chips. The RX 480 has a TDP of 150W and a die size of 230 mm². There is no factual reason at all to believe power density will be an issue for Zen.

            Now if Zen were a ~50 mm² die product with 100W+ TDPs, then yes, power density would be a problem. But even the dual- or quad-core Zen variants will probably all have die sizes double that or more, and probably lower TDPs (i.e. 35-65W) too.
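            To put rough numbers on that chip-wide comparison, here’s the back-of-the-envelope math (a quick Python sketch; the ~200 mm² die size and 95W TDP for Zen are rumors from this thread, not confirmed figures):

```python
# Chip-wide power density in W/mm^2, using figures quoted in this thread.
# The Zen die size and TDP are rumored/assumed, not official.
def power_density(tdp_watts, die_mm2):
    return tdp_watts / die_mm2

rx480 = power_density(150, 230)   # shipping RX 480: 150 W TDP, 230 mm^2 die
zen8c = power_density(95, 200)    # rumored 8C Zen: ~95 W TDP, ~200 mm^2 die

# RX 480 lands around 0.65 W/mm^2; the rumored Zen figures come in under 0.5
print(f"RX 480: {rx480:.2f} W/mm^2")
print(f"8C Zen: {zen8c:.2f} W/mm^2")
```

            Chip-wide, the rumored Zen numbers are less dense than a part already shipping on the same process, which is the point being argued here.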

            • Unknown-Error
            • 3 years ago

            Well, we will know for sure by the beginning of next year (maybe Nov/Dec 2016) whether the 4C/8T (let alone the 8C/16T) Zen can go past a boost clock of 3.70 GHz, or even 3.50 GHz. I am happy to be proven wrong, but looking at AMD’s past marketing and the way they carried out the comparison with BDW-E makes me very skeptical about clocks. That said, I am now quite confident that they have at least in some ways reached the 40% IPC increase (maybe even exceeded it on certain workloads).

            • mesyn191
            • 3 years ago

            Probably by year end, yes, but maybe AMD’s upcoming Hot Chips presentation will give that information too.

            It’s fair to be skeptical of AMD’s marketing, but they have pretty much always kept a lot of details quiet about future products, unlike Intel, who you can usually count on to leak almost everything 6-12 months ahead of time.

            • synthtel2
            • 3 years ago

            Most of the chip’s power draw is concentrated at the cores though, and the cores really are small.

            • mesyn191
            • 3 years ago

            You could say the same about the ALUs in Polaris 10. No hotspotting issues there at all though.

            • synthtel2
            • 3 years ago

            How do you know there aren’t hotspotting issues? It does seem to have trouble clocking much higher. 😉

            Even there, that’s 2304 ALUs (actually more) in P10 versus no more than 64 in 8C Zen.

            • mesyn191
            • 3 years ago

            Because there has been no evidence of them having hotspotting issues which would’ve been apparent by now.

            And it doesn’t clock all that high because it’s a GCN-based design. GCN has never overclocked well, not without LN2. Even water did little to help increase clock speeds.

            ALUs are much more numerous in a GPU than in a CPU, but they are still a relatively small portion of the chip, they are where most of the work is done, and GPUs do a TON of work in parallel. They get HOT and use quite a lot of power for a reason. If RX 480s aren’t dying left and right while using 150W+, and run just fine using over 200W with 230 mm² dies, then there is no possible reason why anyone should believe there are any heat or power density issues with GF’s 14nm.

            Any problems like the ones you’re mentioning would’ve already been apparent in Polaris 10, and since they’re not, you’re at best being stubborn with a baseless WAG and at worst pumping out FUD.

            • synthtel2
            • 3 years ago

            I don’t mean literal hotspots, I mean one piece of the chip being driven hard enough that it becomes the limiting factor for clocks (which is normal, chips aren’t homogeneous). This shows up as a bit of a kink in a power/clock curve, better known as a clock wall. GCN doesn’t OC well in large part because AMD likes to run stuff all the way up at the base of that clock wall from the factory.

            When I say power density, I mean local, not chip-wide. Skylake, for instance, has <10 mm[super<]2[/super<] cores IIRC, but they're still responsible for the majority of the power draw of the chip. That's ridiculous local power density.[super<]1[/super<] Compared to that, GCN doesn't even know what local power density is.

            Local power density matters because it hints at how much stress individual parts of the chip might be under, which can be mitigated for other purposes but is very important for clock walls (and to tie it back to the main story, the clocks Zen will be able to achieve). If chip power density is even close, then local power density is going to be higher for CPUs than GPUs, both because CPUs at a macro level are a lot less homogeneous and because cores are internally a lot less homogeneous than CUs.

            [quote<]Any problems like what you're mentioning would've already been apparent in Polaris 10 and since they're not you're at best being stubborn with a baseless WAG and at worst pumping out FUD.[/quote<]

            You forgot the scenario where humans are imperfect at communicating. Accounting for such things before dropping to insults is polite. 😉

            [super<]1[/super<] Intel can avoid issues with this because their process has better metal layers for this than most.
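            The local-versus-chip-wide distinction can be illustrated with made-up but plausible numbers (a sketch only; the die area and the share of package power drawn by the cores are assumptions for illustration, not measurements):

```python
# Local vs. chip-wide power density (illustrative numbers only, not measured).
# Assume a Skylake-like quad core: ~91 W TDP, ~122 mm^2 die, with four
# ~10 mm^2 cores drawing, say, two-thirds of the package power.
chip_power, die_area = 91.0, 122.0
core_power, core_area = 91.0 * (2 / 3), 4 * 10.0

chip_density = chip_power / die_area    # roughly 0.75 W/mm^2 over the whole die
core_density = core_power / core_area   # roughly 1.5 W/mm^2 inside the cores

print(f"chip-wide:  {chip_density:.2f} W/mm^2")
print(f"core-local: {core_density:.2f} W/mm^2")
```

            Even with a modest assumed split, the core-local density comes out about double the chip-wide figure, which is the sense in which a small-core CPU can be under more local stress than a big, evenly-loaded GPU.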

            • mesyn191
            • 3 years ago

            That still doesn’t make sense, and quite frankly it comes off as goal-post shifting by an attempt to redefine terms. You can’t talk about ‘local’ parts of the chip being heat-limited somehow and not be talking about hotspotting.

            You also can’t tell me that a chip that can tolerate 200W of heat with a die size of 230 mm² is somehow an indication of a hotspotting issue related to the process. Until you produce some hard evidence, we don’t have anything to talk about.

            And there is no communication issue here at all. You and others are inventing this issue whole cloth with zero evidence.

            • synthtel2
            • 3 years ago

            Other than the term “hotspot”, which was in response to your use of it, where did I ever mention heat? I haven’t been talking about heat at any point here. With that in mind, do you still say there’s no communication issue? With that in mind, you might also re-read what I’ve already written and see if it makes any more sense.

            You can make a small chip produce pretty near arbitrary amounts of heat, if you have good enough cooling to keep it from melting down. The question is whether you get appropriate performance gains for your trouble. With GCN, you generally don’t.

            Why would I invent an issue like this? I give it a 90% chance I’ll be a buyer of Zen – they’d pretty much have to fumble it like Bulldozer for me not to. I hope this is good, I’d just rather be realistic about the issues AMD is probably facing right now.

            For a good jumping-off point if we still have anything to talk about: where do you think clock walls come from? If you’re instead going to continue to be this automatically dismissive, we don’t have anything to talk about.

      • puppetworx
      • 3 years ago

      Given that the 6900K has a boost speed of 4GHz, it does make you wonder why they didn’t compare the processors at that speed; either Zen wasn’t capable of matching it, or AMD didn’t want to disclose that it can just yet.

      Other posters tell me that engineering samples are often clocked lower than production samples and AMD is signalling that also. So maybe we can’t read too much into it yet.

        • terranup16
        • 3 years ago

        Blender running on all cores is something where the 6900K is never going to hit its boost speed. If this were a pure single-thread test, then 4GHz would be in play (and I’d guess they’d want to bench @4GHz, but the engineering sample may not be there just yet). The all-cores-saturated comparison at clock parity is the better real-world example, though.

        • mesyn191
        • 3 years ago

        Early BD engineering samples were around 2 GHz. I think they didn’t get close to 3 GHz until the B0 stepping, and the shipping BD chips were all initially B2 or B3 stepping.

        The silicon is still all rumored to be A0 or A01 revisions at this point for Zen, so it’s reasonable to expect clocks to be low. It’s hard to say what the peak clocks will be, though, which is probably why AMD is being cagey with those numbers. They probably won’t be sure for another few months what clocks/TDPs they can launch with while still being able to supply parts that hit those numbers.

        • Srsly_Bro
        • 3 years ago

        It’s an ES CPU. They provided a clock-for-clock comparison. If both CPUs are at the same clockspeed it shouldn’t matter, provided other variables are held relatively constant.

      • Unknown-Error
      • 3 years ago

      I went through dresdenboy’s blog about the leaked ES specs: [url<]http://dresdenboy.blogspot.com/2016/08/some-zen-leaks-es-clocks-pci-info.html[/url<]

      Base clock: 2.80 GHz
      All-core boost: 3.05 GHz
      Maximum boost: 3.20 GHz

      The all-core boost being 3.05 GHz is maybe why AMD fixed the speed at 3.0 GHz for the Zen vs. BDW-E comparison.

      Dual-socket boards running 2x 32C/64T Zen "Naples" would give a whopping 128 threads. For high-margin enterprise products, it looks like Zen will finally put AMD back into a somewhat competitive position. Remember, the high margins are in the enterprise sector, where AMD's market share has hit low single digits. After the colossal Bulldozer blunder and the ensuing market loss, any improvement is welcome.

      • BaronMatrix
      • 3 years ago

      Why do people pretend the laws of physics don’t work the same for AMD…? They got an octo-core at 32nm into 125W… Why couldn’t they get a power-optimized octo-core into 95W at 4GHz…?

      We saw they raised efficiency from Llano to Carrizo by 5X… Now they have 14nm FinFet…

        • smilingcrow
        • 3 years ago

        Because it would be staggering if they could.
        Intel has a mature 14nm process and architecture, and they manage a 3.5GHz Xeon @135W with 8 cores at full load.
        So, assuming a similar IPC (as per the Blender demo) and that it scales the same, a 4GHz Zen @95W would offer ~160% of the performance per watt of that Xeon, which is asking for a miracle.
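        Checking that arithmetic (a sketch assuming equal IPC, so performance scales linearly with clock at the same core count):

```python
# Perf/W ratio of a hypothetical 8-core Zen vs. an 8-core Xeon, assuming
# equal IPC so performance is proportional to clock speed.
xeon_clock, xeon_tdp = 3.5, 135   # 8-core Xeon: full-load clock (GHz), TDP (W)
zen_clock, zen_tdp = 4.0, 95      # hypothetical Zen: rumored clock and TDP

ratio = (zen_clock / zen_tdp) / (xeon_clock / xeon_tdp)
print(f"Zen perf/W vs Xeon: {ratio:.0%}")  # prints "Zen perf/W vs Xeon: 162%"
```

        So the ~160% figure checks out as straight clock-per-watt arithmetic; the skepticism is about whether the rumored 4GHz @95W is achievable at all, not about the math.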

    • Voldenuit
    • 3 years ago

    Don’t forget the chipset for Zen is being designed by ULi/ALi.

    Sigh. There is no way in heck they won’t **** it up badly. Why oh why?

      • Meadows
      • 3 years ago

      As long as it’s not worse than their previous chipset(s), people will still buy it and not mind much.

      I mean, I’m on a chipset with PCIe 2.0 and borderline AHCI drivers and this is 2016 we’re talking about.

      • RAGEPRO
      • 3 years ago

      Huh, are you sure ASMedia staff came from ALi/ULi? NVIDIA bought them a while back.

      It would be kinda awesome if true. My first custom build used an ALi Aladdin V chipset, the M1541 I believe. Why do I remember that?!

    • Mat3
    • 3 years ago

    Why does the block diagram show the FP unit has 2 MULs and 2 ADDs? Shouldn’t it be two FMACs?

      • terranup16
      • 3 years ago

      I was trying to figure that out too. I am wondering if the staggering of them is supposed to imply they are FMACs and they were just trying to make the diagram more casual-friendly.

        • Mat3
        • 3 years ago

        Maybe each FMAC unit is now capable of separate add and multiply operations simultaneously per cycle?

          • terranup16
          • 3 years ago

          [url=http://www.anandtech.com/show/10578/amd-zen-microarchitecture-dual-schedulers-micro-op-cache-memory-hierarchy-revealed<]Anandtech has the answer[/url<]: [quote<]The FP side of the core will afford two multiply ports and two ADD ports, which should allow for two joined FMAC operations or one 256-bit AVX per cycle. The combination of the INT and FP segments means that AMD is going for a wide core and looking to exploit a significant amount of instruction level parallelism.[/quote<]

            • Mat3
            • 3 years ago

            [quote<]...which should allow for two joined FMAC operations or one 256-bit AVX per cycle...[/quote<] How is that different than BD?

            • Voldenuit
            • 3 years ago

            Twice as many FP units per core?

            • tipoo
            • 3 years ago

            Bulldozer was one 256 bit unit per 2 cores…If nothing else at all was improved you’d still be looking at twice the theoretical AVX rate.

            • Mat3
            • 3 years ago

            Right, now that there is one FP unit per core, it will have twice the throughput per core as BD.

            But that doesn’t explain the block diagram. In BD there were two FMAC pipes and the block diagrams always showed them as such. This looks more like the Phenom/Athlon block diagrams now (no FMAC, just separate ADD and MUL pipes) but doubled-up.

            The picture suggests that Zen can potentially do up to 4 FP math ops per cycle (2 ADDs and 2 MULs). A BD module could only do 2 (each BD FMAC was capable of either an ADD, a MUL, or a fused multiply-add). That would mean Zen actually has up to 4X the max throughput per core compared to BD.

        • ronch
        • 3 years ago

        They could very well be two 256-bit FMAC units. Remember, FMA stands for Fused Multiply-Add. Historically, AMD has doubled the FP datapath every generation: K8 had a 64-bit FPU, K10 had 2 x 64-bit FPUs, and BD had 2 x 128-bit FPUs that it could fuse together into one 256-bit unit capable of either one 256-bit AVX or one 256-bit FMA operation. So with Zen, those two ADD and two MUL units are presumably 128-bit each, which means they could possibly be tied together to do 2 x 256-bit AVX or 2 x 256-bit FMA operations per cycle. This would bring AMD up to par with Intel’s FPUs, which have been capable of two 256-bit AVX ops for a while now, at least on paper, although I reckon Intel has already gone wider with Skylake.
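        Under that (speculative) reading of the block diagram, peak double-precision throughput works out like this; the pairing of 128-bit pipes into two 256-bit FMAs is an assumption from this thread, not something AMD has confirmed:

```python
# Peak double-precision FLOPs per cycle, counting an FMA as 2 FLOPs.
# Assumes Zen's 2 MUL + 2 ADD pipes pair up into two 256-bit FMA ops/cycle,
# versus one fused 256-bit FMA per Bulldozer module (shared FPU).
def dp_flops_per_cycle(fma_units, vector_bits):
    lanes = vector_bits // 64          # 64-bit doubles per vector
    return fma_units * lanes * 2       # each FMA lane does a multiply and an add

zen_core = dp_flops_per_cycle(2, 256)   # 2 x 256-bit FMA per core
bd_module = dp_flops_per_cycle(1, 256)  # 1 x 256-bit fused FMA per module

print(zen_core, bd_module)  # prints "16 8"
```

        That would be 16 DP FLOPs per cycle per Zen core versus 8 per Bulldozer module, consistent with the "doubling of floating-point execution resources" claim in the article.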

      • BaronMatrix
      • 3 years ago

      Check out Dresdenboy. They do it to save power… They can fuse the operations and make a 128-bit FMAC or a 256-bit FMAC…

    • djayjp
    • 3 years ago

    I think Intel could just choose to release an 8 core i7 at a reasonable price at pretty much anytime it wants to. Even if Zen’s only real contribution is to make 8 cores the new standard (and I mean real cores, not shared execution nonsense like Bullcrap), then I’ll be thankful. I’ve forgotten what Moore’s law looked like in the CPU space for quite some time (aside from efficiency increases of course– the 10W core Ms are pretty amazing). Anyway, I think AMD chose the correct competitor in their little rendering demo. Heck, I’d be truly happy with 6700k performance for $199.

      • terranup16
      • 3 years ago

      I think it may actually give Intel fits to do that right now. I think we’d have octo-core Xeon E3s if they could manage it. The only thing that suggests to me they might be able to do it without too much pain is the Xeon D line, as those were based on the latest mainstream rather than HEDT/E5 architecture when they were released, and scaled core counts above what mainstream offers.

      But it would definitely be interesting to see if it caused Intel to change course post-Cannonlake. I’m not sure we’ll see a shift before then.

    • smilingcrow
    • 3 years ago

    So is Blender using SSE or AVX in CPU mode?
    I ask because AVX tends to be more power hungry, which is why Intel reduces both the base and turbo-boost clock speed ratings with AVX code by 2 to 4 bins (200-400 MHz).
    This might feed into the clock-speed conspiracy theorists’ interpretation of the low clock speed.
    With only a 95W TDP I wonder how much AVX performance we can expect from Zen?
    I don’t know how widespread its usage is!

      • willmore
      • 3 years ago

      Don’t the Xeon chips have a ‘lower base’ speed they use for AVX2 execution? I.e., don’t even think of looking at boost clocks for AVX2, as it can’t even guarantee the normal base clock for those instructions.

        • smilingcrow
        • 3 years ago

        That’s what I was saying although I didn’t know it was limited to Xeon chips!

        • the
        • 3 years ago

        AVX2 code would slow down the entire Haswell-EP/EX chip, but Broadwell-EP/EX does that on a per-core basis. The result is vastly improved AVX2 performance in mixed workloads, even though the base clock does decrease for the AVX2 code.

        The consumer Haswell-E and Broadwell-E chips didn’t have a separate AVX clock due to higher thermal headroom (140W). Most Xeons of these generations have lower TDP than the consumer parts.

      • chuckula
      • 3 years ago

      We know for a fact that Lisa Su showed an 8 core Zen running at 3 GHz in that demo.

      The 95 watt TDP part is still very much a rumor though.

        • raddude9
        • 3 years ago

        What’s not a rumor is that these were engineering samples, as you well know.

    • DragonDaddyBear
    • 3 years ago

    Wasn’t there supposed to be an APU based on current stuff for the AM4 platform? Is AM4 still coming out this year? It would be really awesome to build a system for my TV with that and upgrade to Zen and Vega later on.

      • mesyn191
      • 3 years ago

      Yea and AM4 mobos were supposed to be out by May or June.

      The mobos got delayed, and so the AM4 APU has either been delayed too or has been cancelled. Dunno which yet.

    • ronch
    • 3 years ago

    DIE SHOTS DIE SHOTS DIE SHOTS!!!!

      • Meadows
      • 3 years ago

      </Reaper>

      • PrincipalSkinner
      • 3 years ago

      You really want those shots dead.

    • ronch
    • 3 years ago

    With the way it’s becoming more and more difficult to make CPUs faster, AMD was bound to eventually catch up. It’s been that way for cars, TVs, lawnmowers, etc. Of course not all cars are created equal but a top dog BMW is pretty much on par with a top cat Mercedes.

      • maxxcool
      • 3 years ago

      ^

    • ronch
    • 3 years ago

    By the time Zen is out, I hope my finances are much better. Right now it’s a good thing I’m still fully satisfied and happy with my Piledriver.

    • Chrispy_
    • 3 years ago

    Looks promising but there are two caveats:

    1. Efficiency is important these days, and GloFo isn’t helping AMD, with the GTX 1080 offering 60-80% higher performance per watt.

    2. Clockspeeds: 3GHz is acceptable for an engineering sample, but end-user performance is IPC x clockspeed. Intel is shipping 3.6GHz in its lowly i3 processors, and the i7 boosts to 4.2GHz before overclocking is even considered.

      • Voldenuit
      • 3 years ago

      [quote<]Efficiency is important these days. GloFo aren't helping AMD, with the GTX1080 being 60-80% higher performance/Watt.[/quote<]

      GloFo's process is probably a contributor, but architecture has a lot to do with it too. Since Maxwell (and to a lesser degree Kepler), nvidia has been chasing a 'mobile first' paradigm where power efficiency is king: power gating and very granular power states (toms showed this in Maxwell with plots of aggressive, microseconds-long power-draw drops), as well as presumably efficient scheduling and rendering (see: realtime tiled renderer). This has allowed them not only to make efficient mobile parts (their power efficiency on laptop GPUs is *much* better than intel GPUs when gaming), but also to raise performance on desktop by boosting clocks at any given average TDP.

      • mesyn191
      • 3 years ago

      Just because their demo chip is at 3 GHz doesn’t mean that is Zen’s top clock speed, you know.

      Generally it’s still expected Zen will top out around 4 GHz at default clocks. What is totally unknown is the core configuration and power draw at that speed.

      If they can do an 8C/16T ~4 GHz Zen that pulls ~100W and competes with Broadwell-E for several hundred less, they have a winner on their hands that will still fare well against Intel’s Skylake or Purlylake. If they need to drop down to a 2C/4T Zen in order to get ~4 GHz with 100W+ power draws, then yes, they’re in some trouble and will have to compete HARD on price.

        • faramir
        • 3 years ago

        AMD doesn’t need to beat Broadwell-E in both performance per watt (which your 100W target implies) and price if they can match its performance (IPC and frequencies).

        One of the two is enough to ensure sales over Broadwell-E; doing both would be a poor business decision on their part.

          • mesyn191
          • 3 years ago

          For the server markets, perf/watt is a big deal, so if they want those high-margin products to sell at decent prices, then yes, AMD does need to compete fairly well with Broadwell-E in that metric.

          For the desktop/enthusiast market you’re right that perf/watt doesn’t really matter much.

      • terranup16
      • 3 years ago

      1. I’m not sure how much we can draw from Pascal v Polaris here. Maxwell and Pascal have both been tuned to amp up clock speed, to the point where I recall nVidia stating that they were okay with dropping a little IPC going from Maxwell to Pascal because the clocks on Pascal made up for it. Given nVidia’s history with TSMC, and TSMC generally being more geared than GloFo for mass production, we don’t even know whether GloFo 14nm had a viable shot at the Pascal production contract, or what difference, if any, that would have made.

      2. This is for the 8-core/16-thread part. Intel’s $1K comparable in this arena is a 3.2GHz-base, 4.0GHz-boost CPU. I’m assuming the Blender test used all 8 cores and 16 threads, so boost clock likely didn’t matter at all; to reach clock parity with Intel, AMD is only 200MHz behind right now.

      AMD seems to be pushing the 8-core part hardest despite putting a lot of emphasis on Zen improving single-threaded performance (most of us probably would have been ecstatic to see quad-core parts clocked at or above Intel’s offerings with Broadwell+ IPC, and I feel like AMD knows that). Given that, I would be really surprised if AMD doesn’t manage to crank a 3.6GHz base / 4.0GHz boost out of its top-end octo-core offering at somewhere around 95W TDP. Which would be a real winner if they could do that.

      But in any case, I’d probably look at the quad core and lower and see what clocks they can hit. If it is just the emphasis on eight cores that is holding Zen back, that seems to be addressable.

        • Chrispy_
        • 3 years ago

        Yeah, this is what I’m hoping too. Prosumers and multitasking enthusiasts have been stuck with 4C/8T for so long it’s silly. The S2011 stuff is just rebadged Xeon kit and lags behind in features, clockspeed, and architecture. S2011 is a band-aid to soothe that market demographic, but the fact that a 6700K is so often a better product than a 6800K or 6900K for the majority of tasks speaks volumes about the brick wall we’ve hit with Intel’s lack of competition. It’s affecting not just hardware progress but also software development progress.

        Quad [s<]core[/s<] [i<]thread[/i<] is where it's at for the bulk of consumer and OEM purchases though, so I'm just hoping that AMD can manage to beat i3/i5 performance at low TDP and low cost.

        • smilingcrow
        • 3 years ago

        “2. So to reach clock parity with Intel, AMD is only 200MHz behind right now.”

        We have no idea how close they are. The Broadwell-E i7-6900K seems to max out around 4.2GHz when overclocked, which is quite a bit less than the Haswell-E octo-core managed.
        AMD could match Intel, or only have a few binned chips that go beyond 3GHz; it’s a pure guess.

      • BaronMatrix
      • 3 years ago

      The issue is that nVidia went with a tiled render stage that cuts power and helps framerates… Polaris was built to finish the path to Fusion, where the GPU has C++ pointers, pre-emption, cache coherency, and shared global memory… That uses some power… And AMD also stuck with GDDR5 and not GDDR5X, which is more efficient… But it’s obvious that Pascal isn’t really brand new, as it didn’t take long to ramp…

      Vega will go more for graphics perf… We can assume that they can get Vega in at 200W, which may be a bit more than nVidia, but it should beat it, like they usually flip-flop…

      I do find it upsetting that no one tests the games for the 460 and no one has done a 480 XFire test…

    • DPete27
    • 3 years ago

    I wonder what kind of power the i7-6900K pulls at 200MHz below its base clock. Sure, it’s a 140W TDP chip, but that’s what it needs to hit 4GHz. Just wondering about the perf/watt comparison.

    • ronch
    • 3 years ago

    What I find interesting about that architectural diagram is how Zen seems to be a 10-issue design with presumably all those ALUs and AGUs equally capable of executing any ALU instruction and any AGU instruction, respectively. That also probably allows the schedulers to be a bit less sophisticated. Also, it’s curious how the Integer side has all those schedulers. Isn’t that kinda similar to K7/K8/K10 where each ALU lane had its own scheduler and instruction window?

    While Zen may seem to be throwing more hardware at achieving the same performance as Intel (AMD always seems to need more transistors to achieve the same performance), it may prove to be more flexible and ‘evolvable’ (?) in the long run. Remember, Intel has been refining their microarchitecture for a long time while this is just the first iteration of Zen. Skylake is bumping up against its architectural limits, while Zen likely has more headroom for advancement. So while Intel may squeeze the last drop of performance from their current cores, Zen might take it from there. Interesting speculation: could Intel also be working on a completely new architecture in light of Zen?

    All this is interesting but for most of us, it doesn’t matter as long as AMD can match Intel’s performance and efficiency at competitive prices, with a reliable and efficient chipset to pair with it. And of course, be able to continue evolving the design in significant ways, not how they’ve evolved Bulldozer.

    Good grief. I haven’t felt this way since the original K7 came out 17 years ago! I might be compelled to upgrade even if my FX-5350 is perfectly fine for the things I do.

      • tipoo
      • 3 years ago

        1.5x the issue width of Bulldozer, they said, and Bulldozer was 4-wide. Looks like a 6-wide design to me?

        • ronch
        • 3 years ago

        Depends on how you look at Bulldozer. Each Integer Cluster is 4-issue but the FPU’s issue width is kinda vague. If you take the whole module, that’s 8-issue for Integer plus the FPU.

          • tipoo
          • 3 years ago

          Then 1.5x that would be 12 issue. Guess we don’t know how or what AMD was measuring with that multiplier.

          But, it would be odd for them to start comparing to modules instead of cores.

            • ronch
            • 3 years ago

            Well, admittedly a lot of the things in those slides are vague. I don’t think it’s really 1.5 or 1.75.

            • Redocbew
            • 3 years ago

            Yeah, it’s hard to tell at this point, but the micro-op cache and the fact that the I-cache is bigger than the D-cache do seem to indicate some attention paid to single-thread performance, like they said. It’ll be interesting to see exactly how they implemented SMT and how many components are shared when using it.

            • ronch
            • 3 years ago

            Well, in the past 10 years or so it was common for cores to have bigger instruction caches than data caches. The last core to have same-sized caches was K10, IIRC.

            • Redocbew
            • 3 years ago

            Intel’s been using a 4x32K cache for both the i-cache and d-cache for a while now.

            • ronch
            • 3 years ago

            Oops. OK, it wasn’t K10, but having instruction caches bigger than the data caches has also been quite common.

            • Redocbew
            • 3 years ago

            Yeah, it’s been done before. AMD must think the extra space is worth it in Zen to avoid time spent decoding instructions, which is probably a decision made to help improve IPC.

            Didn’t the caches in Bulldozer also have really bad latency? Hopefully they’ve worked on that too.

        • BaronMatrix
        • 3 years ago

        No, it’s 1.5x the issue width and execution resources… That means more instructions queued up… It should improve the SMT functionality and the single-threaded functionality… It uses a write-back L1 to limit AGU pressure…

      • rechicero
      • 3 years ago

      I think it’s impossible for AMD to match Intel’s performance and efficiency; Intel’s fabs are just so superior. Actually, what AMD does with who knows how many times less R&D budget and subpar GloFo fabs is amazing, even if it’s not enough. If it is enough, then it’s nothing short of a miracle.

        • ronch
        • 3 years ago

          For a company that has practically no money left, it’s amazing how AMD can still put together something as sophisticated as Zen that comes this close to Intel, a company with all the money in the world. It’s nothing short of a miracle, and demonstrates the quality of their (AMD’s) engineers. Never mind their marketing dept.

          • tipoo
          • 3 years ago

            It boggles my mind. They may not have been on top of their game for… oh, 11 years, but come on, CPUs are still some of the most complicated designs we fleshbags make. It’s mind-boggling that a company as tiny as AMD makes them and was even remotely competitive with giant Intel.

            • ronch
            • 3 years ago

            Ditto.

            • muxr
            • 3 years ago

            This is why my money for the next CPU goes to AMD. Those engineers need more work.

          • smilingcrow
          • 3 years ago

            They are rather a bipolar company, as they seem to swing from excellent to rubbish with their CPUs, albeit over a long time scale.
            Maybe they should strain their silicon with lithium!

            • ronch
            • 3 years ago

            There were rumors that Barcelona was a quickly hacked replacement for the original (and cancelled) K9 or K10. And when Bulldozer was conceived it was back in 2005, a year when AMD and Intel were very excited about the prospects of multi-core computing. So if you think about it, Phenom and FX were both done under Hector’s watch. The guy doesn’t know how to beat Intel.

        • BaronMatrix
        • 3 years ago

        And there was no original Opteron designed by the guy who made Zen…

        • flip-mode
        • 3 years ago

        Agreed. If people have that expectation then they will be disappointed. If AMD puts out a processor that is even able to keep it interesting people should applaud.

        AMD has made huge mistakes over the last decade and it hurt them very, very badly. It is miraculous they are still in business. Hopefully they make much better decisions over the next decade and manage to significantly strengthen the company over that time.

      • BaronMatrix
      • 3 years ago

      They added the additional units for SMT… They may get closer to a 50% increase from it… We’ll see…

    • Tristan
    • 3 years ago

    Just a few lies from AMD, as usual.
    Blender uses both the CPU and GPU for rendering. They intentionally didn’t provide full specs of the systems, just to hide the existence of high-end graphics cards. High-end PCs have high-end graphics, for sure. So a strong GPU is the reason the perf results are the same, and Zen is slow crap, as the leaked AotS benchmarks revealed.

      • ronch
      • 3 years ago

      Grumpy old Tristan.

    • DPete27
    • 3 years ago

    The first slide, “Quantum Leap in Core Execution Capability,” is amusing. I’m sure some marketing intern put “Quantum” in there as a hype word, but given AMD’s past of over-promising and under-delivering, I prefer to use it in the Schrödinger’s Cat sense: is it a leap or is it not?

      • cobalt
      • 3 years ago

      Well, if they mean in the quantum physics sense, perhaps they should have read the definition first:

      “In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction.” (from [url<]https://en.wikipedia.org/wiki/Quantum[/url<]) So it’s the smallest physically possible leap they could have made!

        • christos_thski
        • 3 years ago

        Well, the Merriam-Webster dictionary defines “quantum leap” as “a sudden large change, development, or improvement” (it was even used that way as the name of Clive Sinclair’s successor to the Spectrum home computer).

        I still upvoted you, though, because that was funny 🙂

          • cobalt
          • 3 years ago

          Oh, of course! It’s quite a common phrase, but it does always give me a chuckle when someone uses it. (Not entirely sure of the origin of the phrase, though: is it because quantum physics is a huge leap from classical physics, or is it a reference to quantum tunnelling, or is it just a weird name for discontinuous electron ground states? Or did the 1980s Scott Bakula show use it first? Wikipedia says electron ground states, but that seems like the least sensical one.)

          (Edit: and it appears someone on Ars made the same joke as me. Guess I’m not the only one who giggles whenever someone uses it.)

            • Wonders
            • 3 years ago

            In my other life as a marketer, I’m going to start slipping “Newtonian leap” into PowerPoint slides wherever possible.

      • ronch
      • 3 years ago

      Well, technically the Cat is dead. Been that way since AMD retired the Cat cores.

        • BurntMyBacon
        • 3 years ago

        Jaguar is still around … If you count consoles that is.

      • Voldenuit
      • 3 years ago

      The more we know about Zen’s performance, the less certain the release date becomes.

        • tipoo
        • 3 years ago

        We can know either its release date or its speed, but not both.

          • djayjp
          • 3 years ago

          QM hilarity lol

      • Redocbew
      • 3 years ago

      What they really wanted was a Tachyonic Leap. The ability to go back and fix what they screwed up the first time.

        • tipoo
        • 3 years ago

        The tachyon orders a beer. A tachyon walks into a bar.

    • f0d
    • 3 years ago

    I’m seemingly a bit more optimistic about Zen now.
    If it really is around the same performance as Broadwell-E and I can get some decent overclocks out of it (4.5GHz+ on custom watercooling), I might finally have something to replace my Sandy-E.

    The two things I’m worried about now are overclocking and price.
    In my experience AMD GPUs and CPUs don’t overclock as well as their competition, and the last time AMD had a superior CPU they priced it really high (for example, look at the chart here: [url<]https://techreport.com/review/8295/amd-athlon-64-x2-processors[/url<])

      • ronch
      • 3 years ago

      If Zen is as fast as Broadwell then you’ve had something to replace your Sandy for a while now. It just won’t be from AMD.

        • Voldenuit
        • 3 years ago

        *As fast as Broadwell-E.

        That’s no slouch. For demanding computing tasks, this is the benchmark, and for gamers, that should be enough that the CPU is no longer the bottleneck (unlike with ‘dozer).

          • ronch
          • 3 years ago

          If he meant having a CPU that’s faster than Sandy that could be a good upgrade, we’ve had that for a while now. If he meant having a good AMD CPU to upgrade from Sandy, then yeah, Zen could be it.

            • f0d
            • 3 years ago

            For me it’s a mix of price/performance and overclockability – I don’t care who actually makes it.

            We have had 8-core performance CPUs for a while, but they are mega expensive – especially here in Australia: [url<]http://www.scorptec.com.au/product/CPU/Intel_Socket_2011-3/63616-BX80671I76900K[/url<]

            I’m cautiously optimistic Zen will be much cheaper than the 6900K at similar performance and will overclock really well – all three are what I’m after.

        • f0d
        • 3 years ago

        The problem with Broadwell-E is the price; hopefully Zen is cheaper.

        • Intel999
        • 3 years ago

      Maybe he has been waiting for something realistically priced. By January 1st, both the 6900K and whatever Zen’s street name becomes will be well below the current $1,000.

    • Kretschmer
    • 3 years ago

    AMD always releases stupidly optimistic figures and comparisons before each CPU launch. Remember when many people expected Bulldozer to thrash Sandy Bridge?

    Let’s refrain from hype until the final product is benchmarked.

    As an aside, the “3GHz is not my final form” bit really worries me, as AMD has a long history of ramping up clocks to compete at the expense of power efficiency (290X, FX-8150, FX-9590, et cetera). If they can only be competitive with a TDP of 150W, that spells doom for mobile sales.

      • RAGEPRO
      • 3 years ago

      Other sites are reporting credible info that the 8-core variant will have a TDP of 95W, which is alarming in the other direction.

        • tipoo
        • 3 years ago

        Wait, why is it alarming in the other direction? Why would lower wattage at a given clock and core count be alarming?

          • drfish
          • 3 years ago

          The 6900K is a 140w part…

            • chuckula
            • 3 years ago

            A 6900K running full-out is a 140W part.

            A 6900K that’s artificially limited to a noticeably lower clockspeed? That would require some testing to give a better estimated TDP.

            • drfish
            • 3 years ago

            I understand, I was just pointing it out because it seemed like he had forgotten we weren’t talking about a typical ~90w desktop CPU.

            • Meadows
            • 3 years ago

            Noticeably lower? The base clock speed of that processor is 3.2 GHz and it sure as hell wasn’t going to turbo at all under an all-around load such as rendering. So you’re talking about a 6.25% loss there. A common rule of thumb for noticeability is 10%.

            Assuming AMD haven’t undervolted the CPU, which I’m damn sure they haven’t, this makes the processor a 131 W part on the back of an envelope.
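A quick sketch of that back-of-envelope estimate (this assumes, as the comment does, that power scales linearly with clock at fixed voltage; real dynamic power scales roughly with f*V^2, so the actual draw at the lower clock would likely be a bit lower still):

```python
# Back-of-envelope from the comment above: scale the 6900K's rated
# 140 W TDP linearly with the clock reduction from 3.2 GHz to 3.0 GHz.
# (Simplification: dynamic power actually scales roughly with f * V^2,
# so a fixed-voltage linear estimate is on the pessimistic side.)

RATED_TDP_W = 140.0
RATED_CLOCK_GHZ = 3.2
DEMO_CLOCK_GHZ = 3.0

clock_loss_pct = (1 - DEMO_CLOCK_GHZ / RATED_CLOCK_GHZ) * 100
estimated_draw_w = RATED_TDP_W * (DEMO_CLOCK_GHZ / RATED_CLOCK_GHZ)

print(f"clock reduction: {clock_loss_pct:.2f}%")   # 6.25%
print(f"estimated draw: {estimated_draw_w:.0f} W")  # ~131 W
```

Running it reproduces the 6.25% clock reduction and the roughly 131 W figure quoted above.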

            • chuckula
            • 3 years ago

            [quote<]The base clock speed of that processor is 3.2 GHz and it sure as hell wasn't going to turbo at all under an all-around load such as rendering. [/quote<] Well it sure as hell wasn't going above 3.0 GHz because as AMD said at the demo they artificially limited its clockspeed to be slower than the base speed. As for a multi-core Intel chip never turbo boosting under multi-core loads, I have plenty of counterexamples to that line.

            • Meadows
            • 3 years ago

            In my experience, graphics rendering is a special type of multi-core load in that it generally keeps very few CPU resources free, or none at all. There is a user-detectable difference between 100% CPU usage and “100%” as an OS might report it. As such, I’m very skeptical that a processor like this would have much thermal headroom with its default settings and default UEFI constraints anyway.

            Even if what you propose is true for this type of workload, what are we looking at? An extra 100 MHz? 200 MHz? If I’m being awfully generous and assume a 3.5 GHz speed across all cores during times of full utilisation, then we’re still looking at a delta of ~14%, ergo around 120 W during AMD’s demo.

            Don’t get me wrong, I’ll be cautious in my optimism until I actually hold one of those things in my hand, but based on what little information we got here, the above numbers look promising. There hasn’t been a workload in several years where AMD provided more performance/watt than Intel, no matter how niche the workload, so the existence of just such an example should be good news.

            • BaronMatrix
            • 3 years ago

            Especially when it’s the most popular open source 3D renderer and game engine… It is NOT a benchmark… It’s an application used by 1000s of content creators…

            • raddude9
            • 3 years ago

            Just be honest and say that the more multi-core the load is, the lower the turbo. The original poster was not correct in suggesting that multi-core loads don’t benefit from turbo clocks, but it’s also not correct to suggest that multi-core loads will turbo to the same degree that single-core loads will.

            • derFunkenstein
            • 3 years ago

            We have no idea of this 3GHz engineering sample’s power consumption, but if the final silicon is supposed to be rated 95W, it’s fairly safe to say 3GHz probably also comes in under 95W.

            • rechicero
            • 3 years ago

            Not at 3GHz

          • RAGEPRO
          • 3 years ago

          We don’t have a ‘given’ clock, though. 3GHz seems pretty close to the limits for an 8C at 95W, unless Zen is amazingly power-efficient or something. Given that it apparently has similar per-core and per-clock performance to Broadwell, either AMD has achieved a master stroke, or it’s going to come out at like 3.2 GHz or something.

          I mean, yeah, there’s always overclocking, but on these tiny FinFET processes that seems pretty… well, I’ll say hit-or-miss.

            • tipoo
            • 3 years ago

            Ah, I see what you meant now. That’s only one SKU though. We’ll have to wait and see.

            • AnotherReader
            • 3 years ago

            CPUs also have a wider range of bins than GPUs.

        • Gadoran
        • 3 years ago

        None of us knows what this part is; it could be a server part, clocked to stay within the 95W power budget. After all, looking at Intel’s server CPUs, the lower-power SKUs are clocked a lot slower than the performance ones.
        Anyway, forget performance without heat: Samsung’s process is not famous for being the best around. TSMC, for example, is clearly better on power consumption, and Samsung was dropped by Apple for 10nm A11 development.

        A process licensee like GloFo can’t do miracles; it has neither the track record nor the experience in process development.

      • bfar
      • 3 years ago

      Well, this one was run in real time in the presence of journos alongside a good Intel chip. They would want to have confidence in the product to do that.

        • Kretschmer
        • 3 years ago

        Yes, one not-quantified example of performance was given…with the Intel part artificially constrained in frequency. This could be a good chip, but never, ever trust AMD marketing benchmarks.

        Off the top of my head, there were several GPU launches in the past year where AMD showed themselves beating Nvidia in every game, only for TechReport to report an Nvidia lead.

      • DPete27
      • 3 years ago

      [url=http://wccftech.com/amd-zen-8-core-4-core-cpus-leaked/?utm_source=wccftechtwitterfeed&utm_medium=wccftechtwitter&utm_campaign=Feed%3A+Wccftechcom+%28WCCFtech.com%29<]Rumor has it[/url<] the 8C/16T part will be 95W TDP with a 2.8GHz base/3.2GHz boost. Assuming that's correct, they're not going to push it much further than 3GHz. That number may match up well against Intel's 8C/16T CPUs with their 140W TDP (though the 6900K obviously wouldn't pull 140W running at its base clock of 3.2GHz to match Zen's purported boost), but I'm skeptical that Zen will be able to keep pace with the much more common consumer i5s and i7s running at nearly 4GHz boost clocks and 90W TDP.

        • RAGEPRO
        • 3 years ago

        To be fair, you would want to compare the 65W 4C/8T chip to those. It will doubtless have higher clocks. Hopefully it keeps the 8MB L3 cache of the 8C chip.

          • DPete27
          • 3 years ago

          True. Fingers crossed.

        • djayjp
        • 3 years ago

        The 4 core version should be able to compete I’d imagine.

        • Gadoran
        • 3 years ago

        Rumors :) … likely that was the TDP of the early 2.5GHz part.

        AMD never mentioned TDP, which says a lot. Anyway, an 8-core CPU cannot run at a competitive speed within a 95W envelope; the GloFo process doesn’t allow it, being hotter than 16nm TSMC and surely worse than, or at best on par with, Intel’s 14nm.

        As usual, AMD will stay around the 125W figure (or more).

    • derFunkenstein
    • 3 years ago

    The most frustrating thing is we still have 5-8 months of waiting ahead of us. It seems like it’s been “next year” for [url=https://techreport.com/review/28228/amd-zen-chips-headed-to-desktops-servers-in-2016<]well over a year[/url<] now.

    • ultima_trev
    • 3 years ago

    Surely no one will take their claims seriously unless there is a Geekbench score? That is the only CPU test that matters these days!

    • maxxcool
    • 3 years ago

    I am going to reserve judgement, but if others are capable of running Ashes and a handful of other benchmarks, I find it DEEPLY suspicious that the ONLY benchmark they opted to run here to “convince” people is a single render test.

    This seems like a last-minute gig that was not really well planned.

      • tipoo
      • 3 years ago

      OTOH, Ashes is a very niche game that’s pretty much an agenda benchmark, while Blender is a long-standing tool used by professionals every day. It has fairly good core scaling (not as good as Cinema 4D, but good), so it should be using all 8C/16T on both processors.

      I don’t see a way to spoof that one, unless the i7 was given the lowest tier memory or something.

        • maxxcool
        • 3 years ago

        Never mind, not enough coffee… meant to say that a CPU can be great at rendering but awful at other things.

        Doubling the FPU is what’s driving this result… now show me everything else I do that is NOT FPU-driven.

        Tired… sick 4-year-old… cheers.

          • Andrew Lauritzen
          • 3 years ago

          Blender seems like a pretty impressive result to me. Particularly if it’s using the ray tracing path which IIRC uses Embree internally (which has been obviously fairly optimized for Intel chips).

          Even more so in that Zen would seem to have only ~1/2 the raw float throughput of HSW+ per core; obviously I don’t know how well blender makes use of AVX2 instructions, but it’s still pretty impressive I think.

          Much more interesting than the comparisons to client i7s in Ashes, etc.

            • chuckula
            • 3 years ago

            I’d be very interested to see the Blender configuration and version that was used for that particular demonstration. Something tells me that information is staying confidential though.

            • Andrew Lauritzen
            • 3 years ago

            Yeah of course it’s easy to pick favorable results in this sort of comparison, but I’m willing to extend the benefit of the doubt. They could have picked an easier target than a 6900k if they really wanted to swing things, so good on them!

            • terranup16
            • 3 years ago

            Quite agreed. And, likely, an easier target than Blender rendering.

            I find it interesting that they specifically used open source software for this- I am wondering if that was done for transparency, done so the demo could be more easily replicated down the line (due to the software being free), or so that AMD could patch or source compile the software to better-support Zen (whereas more proprietary applications may be tweaked/tuned more specifically towards Intel’s latest architectures).

            • maxxcool
            • 3 years ago

            ^ This is my guess. Which is not wrong or bad to present yourself with. However, using it as the *only* demo seems a bit like old-school AMD marketing shenanigans, especially if you’re trying to re-brand yourself as open, transparent, etc.

            But again .. holding judgement until ANAND, HARDOCP or TR gets a sample.

            • terranup16
            • 3 years ago

            Yeah, there is an argument that we’re very far away from release still so picking one benchmark is more “teaser” than “shady”.

            • smilingcrow
            • 3 years ago

            If they are due early Q1/17 then they will have to commit to clock speeds fairly soon as it takes a while to ramp up production to be ready for a full launch.
            If they release any time in Q1 then AMD still don’t have that long to finish preparing to start manufacturing.

            • caconym
            • 3 years ago

            Yeah, this test and result is just what I was hoping for. Even if it doesn’t stomp Intel the way the K7’s FPU did (and I wasn’t expecting it to), it still implies that these CPUs aren’t just going to be useful for gaming and web browsing, but content creation and light CAD stuff, which is a market AMD has had no clout in in over a decade.

            Bottom-feeding in the gaming PC market is probably not going to keep AMD’s CPU business alive, IMO.

        • the
        • 3 years ago

        AMD’s main purpose of the test was to do a comparison of IPC by normalizing core count and clock speeds. In this regard, AMD has caught up by that one metric but it is also an indirect admission that Intel has faster systems on the market right now.

        A good first sign for Zen but some independent testing is needed to verify.

      • Flapdrol
      • 3 years ago

      In the Ashes benchmark the Zen CPU is beaten by a “regular” high-clocked quad-core i7. In your presentation you want to at least win the benchmark.

      It beats the i5 in Ashes, though, so priced right this could be a nice CPU. Especially if there’s room for higher clock speeds.

    • chuckula
    • 3 years ago

    Incidentally, this article has another Blender benchmark run during a review of the 6950X to give you an idea about core scaling in Blender:
    [url<]http://www.pcworld.com/article/3075433/hardware/intel-broadwell-e-core-i7-6950x-review-the-first-10-core-enthusiast-cpu-is-a-monster.html?page=2[/url<]

      • smilingcrow
      • 3 years ago

      Here’s a much wider range of Blender benchmarks and it seems to scale very well:

      [url<]http://blog.render.st/benchmarking-blender-on-renderstreet-dual-cpu-and-quad-gpu/[/url<]

    • chuckula
    • 3 years ago

    Hrmm….

    — Enhanced Branch Prediction

    — Micro Op Cache

    — 1.75x instruction scheduler window

    — 1.5x execution width and execution resources

    — Result: Instruction level parallelism designed for dramatic gains in single-thread performance.

    Ok! Which one of those jokers in marketing stuck that Haswell slide into the deck!

      • tipoo
      • 3 years ago

      Apart from uop cache, widening the thing and enhancing the thing isn’t new 😛

        • chuckula
        • 3 years ago

        ???

        uOp cache: Sandy Bridge (actually Pentium IV did it first and it was called the trace cache)

        Larger scheduler windows: Been happening since the Core 2.

        Wider execution resources: Ditto.

        ILP with “dramatic gains in single-thread performance”: Yeah, that too.

          • tipoo
          • 3 years ago

            Yes, and the uOp cache is the one thing there that’s novel to Intel (and it exists because x86 needs one in a way that ISAs like ARMv8 don’t)… You don’t agree that processors outside of Intel have been widening schedulers and execution resources to reach the end goal of higher IPC?

            • chuckula
            • 3 years ago

            [quote<]You don't agree processors outside of Intel have been widening schedulers and execution resources to reach the end goal of higher IPC?[/quote<] I never said that, we aren't talking about POWER here. I am saying it's certainly "new" to AMD in as much as Lisa Su is up there making a big deal about it on stage and we have a decade of AMD not doing that.

            • tipoo
            • 3 years ago

            Well, all right, but then it could well have been “Did AMD steal the slide deck from every sane processor development in the last decade”? 😛

            The ARM boom in the last decade is a good example of it, or Power, etc.

            • chuckula
            • 3 years ago

            ARM is another example of copying x86, at least for high-end ARM parts.

            • JumpingJack
            • 3 years ago

            Interesting….

            AMD simply extended the integer registers from 32-bit to 64-bit for the Intel-owned x86 ISA, and all the fanboys claimed it was new, novel, brilliant innovation, completely all AMD, while Intel was this dirty rotten company that copies AMD… yet AMD copies Intel by widening the core and implementing SMT and it’s no big deal, Intel wasn’t the innovator… kind of ironic.

            • terranup16
            • 3 years ago

            So if a dirty, rotten copier is a company that works with SMT, then IBM is…?

          • ronch
          • 3 years ago

          The original Bulldozer core had a 40-entry integer scheduler + 60-entry FP scheduler. Not sure how those grew in later iterations of Bulldozer, but if they held on to those figures, does Zen have a ~175-entry scheduler (aggregate)? If we assume Excavator had bigger schedulers, I think Zen will end up having many more entries in its schedulers than current Intel cores. Haswell had a 60-entry UNIFIED scheduler; Skylake probably has a bit more. It’s interesting to note, though, that AMD still sticks with separate schedulers for integer and FP instructions. I kinda expected them to have a unified scheduler before those purported Zen slides came out back in 2014-15.
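That ~175-entry guess is just AMD’s claimed 1.75x scheduler-window increase applied to the Bulldozer figures cited in the comment; a small sketch (the entry counts are the commenter’s numbers, not confirmed Zen specs):

```python
# Speculative scheduler-entry arithmetic from the comment above.
# The Bulldozer entry counts and the 1.75x factor come from the thread
# and AMD's slide; none of this is a confirmed Zen specification.

bulldozer_int_scheduler = 40   # entries, integer scheduler
bulldozer_fp_scheduler = 60    # entries, FP scheduler
window_scale = 1.75            # AMD's claimed scheduler-window increase

aggregate = bulldozer_int_scheduler + bulldozer_fp_scheduler  # 100
zen_estimate = aggregate * window_scale                       # 175

print(f"estimated aggregate Zen scheduler: ~{zen_estimate:.0f} entries")
```

If the 1.75x factor were instead measured against Excavator’s (presumably larger) schedulers, the aggregate would come out higher, which is the commenter’s point.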

        • the
        • 3 years ago

        The micro-op cache isn’t new either; Intel has been doing that since Sandy Bridge. It is new for AMD, though. I see that feature as something the Bulldozer family should have had to reduce pressure on its limited decoders.

          • AnotherReader
          • 3 years ago

          The concept of a [url=http://www.researchgate.net/publication/234820789_Run-time_generation_of_HPS_microinstructions_from_a_VAX_instruction_stream<]micro-op cache has been known since 1986[/url<], at the latest. Other papers from the HPS group are listed at [url<]https://people.cs.clemson.edu/~mark/hps.html[/url<]. If DEC hadn’t ditched the VAX for the Alpha, they would probably have implemented it a decade before Intel.

      • Unknown-Error
      • 3 years ago

      Next, copy/paste a Power9 slide

      :O :O :O

      • raddude9
      • 3 years ago

      All of the processor makers borrow from each other. Intel has “borrowed” many ideas from plenty of other chip makers; please don’t try to imply that people only borrow from them.

    • 1sh
    • 3 years ago

    This might be a good time to buy stock in AMD.

    Update: I kid, I kid.

      • tipoo
      • 3 years ago

      Up to 6.87 as of right now. When it was 2 dollars would have been a great time 😛

        • jihadjoe
        • 3 years ago

        Plot twist: he bought at 2 and is riding the hype train in order to unload just before Zen actually launches (in case it turns out to be a huge letdown as Bulldozer and Phenom were)

          • 1sh
          • 3 years ago

          [url<]http://treasure.diylol.com/uploads/post/image/658330/resized_the-most-interesting-man-in-the-world-meme-generator-most-can-t-figure-me-out-and-the-ones-who-do-i-cut-their-brake-lines-5f77cc.jpg[/url<]

    • weaktoss
    • 3 years ago

    [quote<]admittedly green eyes[/quote<] That's some masterful flamebait in an AMD article 😀

      • Wirko
      • 3 years ago

      Why? AMD’s best years were lit by a green AMD logo.

      • Platedslicer
      • 3 years ago

      To be fair, AMD used to be the “green team” in the CPU world.

    • tipoo
    • 3 years ago

    [quote<]Tantalizingly, AMD says the 3GHz figure isn't the final clock speed it expects production Zen chips to top out at, either.[/quote<] There you go, to anyone who was worried about the low engineering-sample clock speed. As I mentioned, Bulldozer’s engineering samples were around 2.5GHz, and that chip shipped on the desktop stupidly high-clocked; sample clocks don’t indicate production clocks.

      • the
      • 3 years ago

      It may be representative of what the server parts get, though. The base clock for the first iteration of Bulldozer in servers wasn’t that high, whereas desktops went all the way to a 5GHz turbo. Of course, that’s due to pushing 16 cores into a smaller 115W power budget vs. 8 cores with 200W to spare.

      I would agree that doing 3 Ghz demos right now is a good sign though.

    • chuckula
    • 3 years ago

    That’s a nice Blender benchmark run, and it’s nice to see Zen accomplishing a real workload.

    But take the performance conclusion with a blood-pressure raising amount of salt.

    There’s history.
    [url<]https://www.techpowerup.com/152569/amd-fx-8150-looks-core-i7-980x-and-core-i7-2600k-in-the-eye-amd-benchmarks[/url<]

    [Wow, the douche level of the AMD fancrowd is quite strong. I post a benchmark that shows Broadwell-E in a rather unfavorable light and all of a sudden it’s an attack on AMD. I’d just love to see the same people who insulted Haswell when it launched explain how Haswell is such a failure while Zen is a miracle because it copied Haswell’s design philosophy and launched 4 years later.]

      • raddude9
      • 3 years ago

      Classic chuckula. If AMD brings up a benchmark you can argue with, you point to a 5-year-old article to create an air of FUD. But if Intel is caught comparing their new chip to a FOUR-year-old Nvidia chip, then that’s fair game:
      [url<]https://techreport.com/news/30539/intel-announces-next-gen-knights-mill-xeon-phi-accelerator?post=996782#996782[/url<]

      • Rza79
      • 3 years ago

      In all fairness (even though those benchmarks were handpicked), Bulldozer is really competitive on those tests, and it still scores well on them to this day. So it’s not like AMD lied or anything. They just chose tests that favor throughput.

        • travbrad
        • 3 years ago

        Yep, those are all programs that benefit from 8 threads/cores, and Bulldozer is actually pretty similar to Sandy Bridge in those applications. It’s all the programs/games that use fewer cores where Bulldozer falters, plus Sandy Bridge was/is amazing at overclocking, further widening the per-core performance gap.

        This Zen test is more impressive (if it’s real) because they are comparing it to an Intel CPU with the same number of cores/threads at the same clock speed. Of course it still remains to be seen how much higher Zen can clock when it is actually released.

          • Srsly_Bro
          • 3 years ago

          Zen is real. They just did a demonstration. Do you think they were running two 6900k CPUs against each other?

            • travbrad
            • 3 years ago

            Of course Zen itself is real but I always take demonstrations/tests performed by the company selling the product with a grain of salt. They will often find a task or tweak some settings that favors their particular product such as:

            -Fury X being faster than GTX 980ti in every single game (by using settings no one would ever use)
            -GTX1080 being “twice as fast” as a Maxwell Titan X (but only in VR)

            I do hope it’s actually representative though and they really have managed to pretty much match the IPC of a 6900K.

      • Meadows
      • 3 years ago

      I’ve been meaning to mention this for a while but raddude9 put it nicer than I would have.

      Every time there’s a CPU related piece of news, you go into comment diarrhea mode and when you do, it’s usually “yae intel, nay amd”. It’s nice that you caution people and provide insight on occasion, but you let your bias show too often.

      As for the “comment diarrhea”, you literally started 4 separate comment threads under this news posting within a span of about 30 minutes. I assume it’s because you’re the excitable sort and started commenting before you’d finished reading the piece, however it makes you look rather fanatical, and, as a result, it hurts your credibility even when you’d otherwise be right.

        • flip-mode
        • 3 years ago

        Yep, the dude’s comment spam is a little out of control. I think he sometimes makes legit points, but often not, and he frequently makes comments along the lines of “oh now people owe me something for disagreeing with me”, which I can see he has done in one of his comments here.

        chuck needs to take TR’s “quality over quantity” approach to heart.

        • cegras
        • 3 years ago

        Someone usually calls him out when he acts like this, he ignores it, reverts to good behaviour for a while, then resumes once he thinks we’ve forgotten.

          • Meadows
          • 3 years ago

          Just as well, let’s keep a log then for giggles.

            • bjm
            • 3 years ago

            Hah, yeah. I did that at one point [url=https://techreport.com/news/30394/3dmark-time-spy-benchmark-puts-directx-12-to-the-test?post=990429<]here[/url<]. Alas, it was to no avail. He went silent for a bit and popped out from under his bridge [url=https://techreport.com/news/30424/sapphire-nitro-radeon-rx-480-hot-rods-polaris-10?post=992468<]shortly after[/url<]. I wonder what motivates the guy… he’s either a really well-paid shill or has a true hatred for AMD. He’s literally in every AMD thread with the same zeal. It’s almost as sad as it is irritating.

            • Meadows
            • 3 years ago

            I do intermittently wonder if he’s on anyone’s payroll but then I remember there are some truly amazing people on the internet sometimes.

            • cegras
            • 3 years ago

            Keep those links handy – trials need evidence!

            [url<]https://techreport.com/discussion/30037/amd-radeon-pro-duo-bridges-the-professional-consumer-divide?post=975946[/url<] Googling "techreport cegras chuckula" brings up a very interesting history where I have tried to get him to stop intermittently for about four years. Oh well.

            • rechicero
            • 3 years ago

            I don’t think Intel PR would pay somebody like that. They are way more subtle and professional. On the other hand… yes, he seems to have some kind of obsession with AMD. I actually hope he is paid for this; if not, he could be in need of some sort of counseling.

          • raddude9
          • 3 years ago

          That’s a very good description of his behaviour pattern. Anybody know why he’s such an AMD troll, though?

            • cegras
            • 3 years ago

            Conjecture: Owns INTC and NVDA. Happy with NVDA going to the moon, frustrated at INTC’s inability to succeed in mobile (and he shilled for intel’s mobile platform for years on TR – we all saw it).

            • Waco
            • 3 years ago

            Being a shill implies that he was an employee…he was not.

        • srg86
        • 3 years ago

        He may be quantity over quality, but so far I’ve not seen anything that actually proves him wrong.

        As mentioned in my other post, Zen could be the first; it is impressive.

      • albundy
      • 3 years ago

      There’s still an AMD fan crowd? Is it the moms, grannies, and grandpappies? Old equipment usually gets passed down in families. It’s fairly sad that they haven’t moved on, though. In any case, let’s add fuel to the fire.

      Is it really worth waiting half a year or more for this? Intel will be releasing Kaby Lake, and later 10-nm Cannon Lake, by the time Zen arrives. How will Zen even compete at that performance price point? Better yet, how will AMD win back the customers it abandoned for years, and when will they be abandoned again?

        • joselillo_25
        • 3 years ago

        There are a lot of people who just want the job done when they use a computer.

        That means they want to game, play videos, encode, etc., and they don’t care whether their computer is 10 fps faster or encodes in 4 minutes less.

        AMD is giving these people products at a nice price, first in the GPU market and now in the CPU market. The 470 and 480 give you the latest technology and can play virtually any game. With Zen you’ll get top CPU power at reduced prices, and 5-10W at idle will be amazing for people who browse the internet a lot and leave the computer idle most of the time.

        You can also spend $800 on a CPU if you want, but that approach looks ridiculous to me for a domestic PC user.

      • Redocbew
      • 3 years ago

      Surprise! AMD does still have a marketing department, and it seems they are capable of putting together a decent product demo. Who would have thought that? I must agree though that they clearly have some agenda and are not to be trusted. It’s not like they just spent the last five years and more cash than most people ever see in their lifetime developing it, right?

      If AMD played it safe and “copied” Haswell, so what? Who cares?

      • BaronMatrix
      • 3 years ago

      Blender is NOT a benchmark. It’s an actual 3D renderer and game engine…

        • Beelzebubba9
        • 3 years ago

        Do you think Blender results are likely valid or does the nature of the workload make it very open to the type of benchmarketing optimizations tech companies love so much?

      • Unknown-Error
      • 3 years ago

      I’ve been going through AT forum posts on the Blender test. Some serious questions have been raised, especially about how it relates to the IPC increase. How does the Blender test demonstrate IPC gains? Was this more a comparison against Intel’s equivalent 8C/16T chip in overall multithreaded performance than a clean single-thread test? If it was an IPC test, then the number of cores/threads the CPU has is irrelevant. If it was an overall performance test, then why down-clock the Intel system? For an overall performance metric, clock speeds absolutely matter, and the competition has at least a 500MHz advantage from the get-go. There is some serious jugglery being done on AMD’s side here. I’m beginning to back off the initial optimism I had about Zen a week ago.
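
      For what it’s worth, the clock-matched setup is exactly what lets you back relative IPC out of a throughput test like Blender: at equal clocks and equal core/thread counts, the ratio of render times reduces to the inverse ratio of IPC. A minimal sketch of that arithmetic, with made-up render times (AMD published no actual figures):

```python
# Sketch: inferring relative IPC from a clock-matched, core-matched run.
# All numbers below are hypothetical, for illustration only.
def relative_ipc(time_a, time_b, clock_a_ghz, clock_b_ghz):
    """IPC ratio of chip A vs. chip B for the same workload.

    time * clock is proportional to cycles consumed, so at equal
    core/thread counts the IPC ratio is the inverse ratio of cycles.
    """
    cycles_a = time_a * clock_a_ghz
    cycles_b = time_b * clock_b_ghz
    return cycles_b / cycles_a

# Hypothetical: both chips at 3 GHz, 8C/16T; one finishes in 59 s, the other in 60 s.
print(relative_ipc(59.0, 60.0, 3.0, 3.0))  # ~1.017, i.e. roughly equal IPC
```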

        • chuckula
        • 3 years ago

        I’m sure I’ll get downthumbed (again) for failing to toe the AMD line but just as we saw with the Rx 480 “comparison” tests only a couple of months ago there’s one easy way for AMD to avoid these nagging questions: Stop acting like a second-tier company whose only validation in life is the existence of your larger competitor as the standard by which you measure yourself and start acting like a first tier company that can market its own products based on their own merits.

        That basically means: Show off a demo of Zen, talk about all the nice features in Zen, and S.T.F.U. about anything and everything that Intel does.

          • smilingcrow
          • 3 years ago

          The only thing that really matters for a desktop CPU is performance so a technical talk is irrelevant to most.
          Showing a real world test using non-benchmark software gives a real guide at least to one or even a few aspects of the chip. But for that to make sense it needs a reference so they either use their own current but ancient CPU or a more recent one from Intel.
          I think they chose the right chip to compare it to even if the test asks more questions than it answers.

          • cegras
          • 3 years ago

          [quote<]Stop acting like a second-tier company whose only validation in life is the existence of your larger competitor as the standard by which you measure yourself and start acting like a first tier company that can market its own products based on their own merits.[/quote<] [url<]https://techreport.com/news/30539/intel-announces-next-gen-knights-mill-xeon-phi-accelerator?post=996782[/url<]

            • chuckula
            • 3 years ago

            Fascinating, where did Intel intentionally change the configuration of Nvidia’s hardware to a non-standard setup?

            Where did Intel justify KNL’s performance with no real numbers for a product it won’t sell you? (KNL is very much on sale; ASRock was selling 4-way 2U server boxes full of them at IDF, and you can order a fully operational KNL workstation for 1/25th the price of the P100 server systems.)

            What did Intel do to make it impossible to actually compare its products to the competition, the way AMD did in that demo? If anything, KNL’s wide availability compared to the P100, and Nvidia’s official level of fear over a competitor that is by all accounts selling well, show that Nvidia is in the same boat as AMD.

            • cegras
            • 3 years ago

            [quote<]Stop acting like a second-tier company whose only validation in life is the existence of your larger competitor as the standard by which you measure yourself and start acting like a first tier company that can market its own products based on their own merits.[/quote<]

      • pneujet
      • 3 years ago

      Copied whose design philosophy? 64-bit? Multi-CPU?

      • Unknown-Error
      • 3 years ago

      Ian Cutress @ AnandTech pointed out some of AMD's trickery: [url<]http://www.anandtech.com/show/10585/unpacking-amds-zen-benchmark-is-zen-actually-2-faster-than-broadwell[/url<] Simply put, take AMD's presentation with an Everest-sized mountain of salt. From this post onwards I'll reserve judgement until the independent reviews are out.

      • JumpingJack
      • 3 years ago

      Chuckula: it is hard to tell. The link you provided, and AMD’s history of pumping and disappointing since Conroe, lead me to believe they will want to avoid having egg on their face this round. Who knows… maybe they did catch up in one generation after falling so far behind. AMD certainly has the engineering chops to pull off such a feat.

      • BurntMyBacon
      • 3 years ago

      [quote<]But take the performance conclusion with a blood-pressure raising amount of salt.[/quote<] Yes, but how much are we raising the blood pressure by? Are we talking enough to go hypertensive, enough to cause an aneurysm, or enough to match yours when you read this article?

    • tipoo
    • 3 years ago

    [quote<]Nitty-gritty details aside, the real question on everybody's mind is whether AMD met the 40% IPC improvement goal that it's publicly committed to over the past few months. To make the point that it has, AMD put a Summit Ridge engineering sample running at 3GHz up against an eight-core, sixteen-thread Core i7-6900K artificially limited to the same 3GHz speed. AMD ran the same Blender 3D rendering workload on both chips at the same time. Watch the video above for a sense of how Summit Ridge stacks up to Broadwell-E.[/quote<] Blender easily scales past 10 cores at least, though. Isn't this comparing 8 Zen cores to 4 Intel cores, and the same ratio of threads?

    Edit: No, I'm only a few sips into my morning coffee; it's an 8-core Intel.

      • jihadjoe
      • 3 years ago

      The 6900K is the new Broadwell-E chip: 8C/16T.
      [url<]http://ark.intel.com/products/94196/Intel-Core-i7-6900K-Processor-20M-Cache-up-to-3_70-GHz[/url<]

        • tipoo
        • 3 years ago

        Oh shoot! Colour me cautiously optimistic then.

      • Srsly_Bro
      • 3 years ago

      Start drinking coffee earlier in the morning!

    • drfish
    • 3 years ago

    I just want to take a sec to thank Intel for making Sandy Bridge so awesome that I could hold on to my CPU long enough to find out if AMD will be a viable option for my next rig. Here’s hoping…

      • djayjp
      • 3 years ago

      Is it that awesome or is it that there was little competition? We’ll never know I suppose.

        • RAGEPRO
        • 3 years ago

        Well, the way you’d measure that is by comparing it to its contemporaries. If you look at Sandy Bridge compared to the later Core 2 processors, or even the first-generation Core i-series processors, it really is pretty awesome. Nehalem-family chips were a huge step forward over Core 2s, but Sandy Bridge was nearly as large of a leap over Nehalem.

          • djayjp
          • 3 years ago

          Maybe, though still not at the level of the good old days of ~50% annual performance increases. The Pentium 4 to dual-core transition was huge.

            • derFunkenstein
            • 3 years ago

            Physics made sure that was never going to continue even before AMD slipped up.

            Remember this? [url<]http://www.anandtech.com/show/680/6[/url<]

          • Krogoth
          • 3 years ago

          Sandy Bridge was marginally faster than the Bloomfield/Lynnfield chips it replaced. The primary difference was that Sandy Bridge consumed a lot less power at load and overclocked like a dream.

          Bloomfield/Lynnfield also had a decent amount of overclocking headroom, but they became blast furnaces that required high-end cooling to stay stable.

            • the
            • 3 years ago

            Sandy Bridge was the last really big IPC gain. The better power consumption and overclocking were just icing on the cake.

            • Krogoth
            • 3 years ago

            Nah, Nehalem was the last big bump in IPC as far as Intel CPUs are concerned.

            Sandy Bridge offered a minor bump in IPC over Nehalem, and the same trend kept going with Sandy Bridge’s successors. People who look at those Sandy Bridge reviews quickly overlook that Sandy Bridge chips operated at higher clock speeds and higher Turbo Boost bins than the Lynnfield/Bloomfield chips of the day. That, more than architectural improvement, accounts for the “perceived” performance boost.

            • the
            • 3 years ago

            Nehalem didn’t actually change the execution backend at all. Rather, it focused on the on-die memory controller, changes to the cache topology (hello, L3), and re-introducing Hyper-Threading.

            Sandy Bridge was big, as it added the AVX instructions and widened the design even further. The memory controller and Hyper-Threading also received a few tweaks to improve performance.

            [url<]https://us.hardware.info/reviews/6215/2/intel-five-generation-ipc-test-broadwell-haswell-ivy-bridge-sandy-bridge-and-nehalem-results[/url<] Note that the testing in the link above was done at a fixed 3.0GHz for all chips. As you point out, Sandy Bridge did clock higher than Nehalem at stock and overclocked like a dream, which widens the Nehalem -> Sandy Bridge performance gap even further.

            • Krogoth
            • 3 years ago

            Nehalem introduced QuickPath on the x86 front, which made a massive difference in the multi-socket world.

            Sandy Bridge at stock was a minor performance bump over Nehalem at best. It was a massive architectural overhaul, but one that mostly translated into making it easier to add more core logic to the silicon die.

            • tsk
            • 3 years ago

            No, stop, you’re wrong. He just provided data to prove you wrong.

            • Krogoth
            • 3 years ago

            Nah, it is just Intel fanboys grossly exaggerating the jump from Nehalem to Sandy Bridge from a pure architecture standpoint. The “perceived” performance bump came more from the higher stock and Turbo clock speeds. The jump was similar to the later one from Ivy Bridge to Haswell.

            At the time, Sandy Bridge was underwhelming enough from a performance standpoint that Bloomfield/Lynnfield users had no compelling reason to upgrade aside from power efficiency. The Sandy Bridge platform only made sense for those who had skipped Lynnfield/Bloomfield and were still on Core 2/Phenom II or older. The lack of consumer-tier Westmere quad-core options (they existed only as binned Socket 1366 Xeons) made Sandy Bridge even more attractive.

            Nehalem brought a lot more to the table at its debut. Moving the memory controller onto the CPU removed the FSB bottleneck that hindered Core 2 when it was pushed hard, yielding significant gains in most real-world applications, just as we had seen with the jump from K7 to K8 in the AMD camp. The introduction of QuickPath was a game-changer for Intel in the multi-socket world, and Hyper-Threading, a.k.a. SMT, was back to help with those hilariously parallel workloads.

            Nehalem was a huge deal for workstation and prosumer types when it launched, and it cemented Intel’s growing dominance over AMD in the post-Core 2 world. AMD only managed a somewhat viable answer in some workloads almost three years later with Bulldozer, and the marginally faster but far more efficient Sandy Bridge/Ivy Bridge blew Bulldozer completely out of the water.

            • jihadjoe
            • 3 years ago

            Nice link, but it also shows Haswell being as big a bump over Sandy/Ivy as Sandy was over Nehalem (especially in x264), making your previous post about ‘Sandy Bridge being the last big bump in IPC’ wrong lol.

            • the
            • 3 years ago

            No, the link shows the bump from Nehalem -> Sandy Bridge to be larger than Ivy Bridge -> Haswell. Other factors, like clock speed, make the Nehalem -> Sandy Bridge architectural change feel larger still. That is why Sandy Bridge is remembered so fondly: it brought both IPC and clock-speed gains, whereas now Intel mainly provides smaller IPC gains.

            • jihadjoe
            • 3 years ago

            Not for x264 it doesn’t.

            x264 ST: Nehalem -> Sandy : +11.5%
            x264 ST: Ivy -> Haswell : +12.8%
            x264 MT: Nehalem -> Sandy : +9.3%
            x264 MT: Ivy -> Haswell : +13.7%

            Again, all this data is from your own link.
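
            Those deltas are plain relative gains; a tiny sketch of the arithmetic, with hypothetical fps values (not taken from the linked review):

```python
# Generation-over-generation gain as a percentage.
# The fps values below are hypothetical, for illustration only.
def pct_gain(old_fps, new_fps):
    return (new_fps - old_fps) / old_fps * 100

print(round(pct_gain(26.0, 29.0), 1))  # a 26 -> 29 fps bump is a +11.5% gain
```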

            • terranup16
            • 3 years ago

            The per-clock performance difference is actually obscene. I did a meta-analysis of reviews and benchmarks (clock-for-clock only) as part of some work to determine theoretical peak performance for many processor models in our DC, and as I recall, Nehalem -> Sandy Bridge is the biggest jump by far since Pentium 4 -> Conroe. The multipliers I came up with are baselined against the Pentium 4: Nehalem lands around 1.66x, but Sandy Bridge is over 2x.
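
            The normalization described above amounts to score-per-GHz relative to a baseline chip. A minimal sketch; every score and clock below is a hypothetical placeholder, chosen only to land near the ~1.66x and ~2x multipliers mentioned:

```python
# Per-clock performance index relative to a baseline chip.
# All scores/clocks here are hypothetical placeholders, not measured data.
def per_clock_index(scores, clocks_ghz, baseline):
    """score/GHz for each chip, normalized so the baseline chip equals 1.0."""
    base = scores[baseline] / clocks_ghz[baseline]
    return {chip: (score / clocks_ghz[chip]) / base
            for chip, score in scores.items()}

scores = {"Pentium 4": 100.0, "Nehalem": 166.0, "Sandy Bridge": 227.0}
clocks = {"Pentium 4": 3.0, "Nehalem": 3.0, "Sandy Bridge": 3.4}
index = per_clock_index(scores, clocks, "Pentium 4")
# index["Nehalem"] is exactly 1.66 here; index["Sandy Bridge"] is ~2.0
```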

            • Krogoth
            • 3 years ago

            No, it was pretty darn close in most applications. Sandy Bridge barely squeezed out a ~10% gain in the best-case scenario. The jump from Ivy Bridge => Haswell was greater when you factor in HT performance.

            The main difference was that Sandy Bridge consumed nearly [b<]half the power[/b<] of a Nehalem-era counterpart when fully loaded. In terms of power efficiency, Sandy Bridge was a massive leap. It was icing on the cake that Sandy Bridge chips overclocked effortlessly, barely threw out any heat, and didn't need a ton of volts to stay stable.

            • Waco
            • 3 years ago

            Note that he said data center, so I imagine he’s talking about EP chips, not regular ones.

            • Krogoth
            • 3 years ago

            Sandy Bridge-EP/Sandy Bridge-E chips had a massive bump in cache and enjoyed the full benefits of Socket 2011, so you would see a bigger gain over their Nehalem/Westmere equivalents in enterprise-tier applications.

            Skylake is going to be a bigger bump in this area than the jump from Nehalem/Westmere to Sandy Bridge was. Skylake was designed to handle hilariously parallel workloads and to be infused with a ton of cores on the die.

            • Waco
            • 3 years ago

            We’ll see what Skylake-EP delivers soon. Memory bandwidth is still an issue for HPC workloads though…

            • Beelzebubba9
            • 3 years ago

            I ran a Core i7 920 at 4Ghz on air for years and you’re not exaggerating. The power delta between stock and 4Ghz was something like 120w at the wall for my system under load and you could just feel the heat pouring from the back of the case.

            • mganai
            • 3 years ago

            Lynnfield was a bit cooler than Bloomfield at least.

            • Krogoth
            • 3 years ago

            Nah, they ran at about the same TDP at their given clock speeds.

            The only differences between Bloomfield and Lynnfield were that Lynnfield had an integrated PCIe 2.0 controller and a dual-channel DDR3 memory controller, while Bloomfield lacked an integrated PCIe 2.0 controller but had an extra QPI link for an external one (a.k.a. the X58 northbridge) and a triple-channel DDR3 memory controller.

        • Firestarter
        • 3 years ago

        [quote<]I'm running out of ways to say "continued dominance,"[/quote<] That's what the i5-2500K/i7-2600K review said back in the day. The Sandy Bridge quad-cores were fast enough that they often beat previous-gen six-core CPUs in heavily multithreaded applications and outright embarrassed them in single-threaded benchmarks. Sandy Bridge was that awesome, and that's the reason AMD couldn't compete.

          • djayjp
          • 3 years ago

          Outright embarrassing previous gen 6 cores in single threaded? Nonsense. Nehalem was much closer in that regard than you recall. The only test I remember it being much faster in was Cinebench.

            • terranup16
            • 3 years ago

            Most-likely depends on the specific processors being compared, but if you gave me an option between a six-core Nehalem Xeon and a Xeon E3-1270 v1, I’d take the latter every time.

            • MOSFET
            • 3 years ago

            [quote<]if you gave me an option between a six-core Nehalem Xeon and a [b<]Xeon E3-1270 v1, I'd take the latter every time.[/b<][/quote<] That's worth repeating, even 5 years later.

            • Krogoth
            • 3 years ago

            For workloads that favor clock speed, don’t need memory bandwidth, and aren’t hilariously parallel, the Sandy Bridge Xeon would be the better choice. Otherwise, the Westmere Xeon is the better choice if you need bandwidth (triple-channel DDR3, thanks to Socket 1366) and your workload loves having as many threads as you can throw at it.

            • Firestarter
            • 3 years ago

            and [url=https://techreport.com/r.x/sandy-bridge/pfactory.gif<]Panorama Factory[/url<], [url=https://techreport.com/r.x/sandy-bridge/picc-overall.gif<]picCOLOR[/url<] ([url=https://techreport.com/r.x/sandy-bridge/picc-synth.gif<]twice[/url<]), [url=https://techreport.com/r.x/sandy-bridge/x264-1.gif<]x264 encoding[/url<], [url=https://techreport.com/r.x/sandy-bridge/pov-chess.gif<]POV-Ray[/url<], [url=https://techreport.com/r.x/sandy-bridge/valve-vrad.gif<]Valve VRAD[/url<] performance per core is not easy to filter out of those results because they're all multithreaded benchmarks, but it's clear that the i7-2600K beats the comparable previous gen i7-875K by a wide margin in many workloads, and the only reason the i7-970 can keep up is because it has 50% more cores and a way higher power budget

            • RAGEPRO
            • 3 years ago

            Dunno boss. [url=https://techreport.com/review/20188/intel-sandy-bridge-core-processors/6<]Blast from the past says he's right.[/url<] That's more or less what I recall. [url=http://www.anandtech.com/show/7003/the-haswell-review-intel-core-i74770k-i54560k-tested/6<]Anandtech agrees.[/url<]

            • Krogoth
            • 3 years ago

            Gulftowns were faster at anything that utilized more than four threads. The Sandy Bridge chips were only marginally faster in applications that were single- or dual-threaded, and the two were dead even in applications that used four threads at most.

            The main difference is that Gulftowns were toasty SOBs at load, while Sandy Bridges barely threw out any heat.

            • RAGEPRO
            • 3 years ago

            My links in the post above yours disagree with your statements.

            • Krogoth
            • 3 years ago

            Actually, they supported it. Look at the entire contents of the TR article; don't just cherry-pick a game (StarCraft II) that is known to be [b<]dual-threaded only[/b<]. The AnandTech review doesn't even have a Gulftown chip in the comparison; the i7-965 is a Bloomfield chip, not a Gulftown.

            The applications that utilize more than four threads show the Gulftown chips outpacing the Sandy Bridge and Nehalem chips by the decent margin you would expect from a six-core chip with HT on a similar architecture. It is a very tight race in applications using four threads. The Sandy Bridge chips are only faster in dual- and single-threaded applications, and that advantage mainly comes from the higher clock and Turbo speeds that Sandy Bridge chips run at compared to their Westmere and Nehalem predecessors. You were lucky to get similar results with a Bloomfield/Lynnfield chip, and Gulftown chips needed exotic cooling to keep those six cores from burning themselves up.

            I'm not saying Sandy Bridges were terrible, but fans are putting too positive a spin on them. Their main attraction was not performance; it was [b<]power efficiency[/b<], plus the fact that the bloody things effortlessly overclocked beyond 4GHz without becoming blast furnaces.

            • RAGEPRO
            • 3 years ago

            Heh, did you look at the links? [url=http://images.anandtech.com/graphs/graph7003/55318.png<]Here[/url<] are some [url=https://techreport.com/r.x/sandy-bridge/wlmm.gif<]more direct[/url<] links. Admittedly the difference is smaller than I thought (because I fell victim to Anandtech's mistake of labeling the 965 as "965X," making me think it was Gulftown), but even still, Gulftown really is slower than Sandy Bridge-DT in the majority of multithreaded tasks.

            You're not wrong that the difference comes mostly from clock rate, but does that really matter? The point was whether or not Sandy Bridge was a big improvement, and if the improvement comes at higher clock rates, it's still a big improvement. That's along the same lines as folks who say Pascal isn't an improvement over Maxwell because the majority of its performance gain comes from high clock rates. It doesn't really make any sense in context.

            I don't really know why you're so adamant about defending the first-generation stuff. It was great in its time, but Sandy Bridge grossly outstripped it, and obviously newer hardware is better still.

            [i<]edit[/i<]: And besides, this comment thread we're both posting in is a reply to djayjp's remark that the six-core Gulftown parts were "much closer" to Sandy Bridge in terms of single-threaded performance. Which is obviously untrue; they're quite far apart.

            • Krogoth
            • 3 years ago

            Sandy Bridge didn’t grossly outstrip Gulftown in single-threaded performance, and the tables turned when you dealt with hilariously parallel stuff. Sandy Bridge was faster at single- and dual-threaded work, but it was nothing like the jump from Penryn to Nehalem.

            The advantages Sandy Bridge had over the Gulftown chips were that SB was fairly affordable (~$200 for the cheapest SB chip, with the top-of-the-line model going for a little over $300, while the cheapest Gulftowns hovered around $500 and the top model went north of $1,000), you weren’t forced onto an expensive X58 board versus a more modest 6-series board, and SB ran much cooler and overclocked like a dream.

            Gulftowns were a workstation platform at heart. They made as much sense for mainstream users and gamers as the current Broadwell-E chips do. You only got a Gulftown if you did real work at the time.

            • RAGEPRO
            • 3 years ago

            Heh. I mean, I hear you bub, but the benchmarks tell a different story.

            • Krogoth
            • 3 years ago

            Not really; the benchmarks make it clear that Gulftowns are better at workloads and applications that require a ton of threads.

            Sandy Bridge is only faster at mainstream stuff and gaming, where clock speed is still king rather than having a ton of cores. The difference isn’t that drastic, either.

            • Srsly_Bro
            • 3 years ago

            True

          • terranup16
          • 3 years ago

          More or less. You sort of had a back-to-back situation where Intel got Core 2 without needing to spend as much time on R&D for it, and that upended AMD’s dominance over the Pentium 4. Intel used most of the Core 2 era to develop Nehalem, which nowadays feels like the Pentium 4 all over again: big, hot, and power-hungry. But using a combination of better fabs and a stupidly high TDP, Intel made it an effective halo product that AMD didn’t have much chance of touching. Then Intel managed to get Sandy Bridge out as a supremely refined successor to Nehalem, as AMD meanwhile careened way off course with the late-to-the-party Bulldozer.

          And that’s what gets us here. Sandy Bridge made for an extremely effective SMT architecture that thoroughly exploited Intel’s fab advantages. AMD made a couple of poor architectural decisions with Bulldozer that couldn’t be fixed without a complete re-architecture. And until recently, Intel had an insane fab advantage over the other players, so trying to go toe-to-toe with an architecture like Sandy Bridge (and Zen really does; it walks right onto Intel’s turf and challenges it to a duel) would have just blown up in AMD’s face. I mean, look at GPUs, where you had the highly efficient Maxwell architecture, yet Pascal trashes Maxwell, and the process-node disparity between Maxwell and Pascal is the same as that between current-generation AMD CPUs and what Zen will be on.

          Even now, Intel’s 14-nm process is considered better than GloFo’s 14-nm and TSMC’s 16-nm, so AMD still needs to make up the gap in other ways.

        • NovusBogus
        • 3 years ago

        Awesome, as evidenced by the lack of anyone on the software side demanding moar megahurtz. Hell, even an i3 is capable of running just about anything.

      • jihadjoe
      • 3 years ago

      My last AMD rig was an A64 X2 on an NForce board. Shit was awesome when AMD and Nvidia played nice with each other.

      • wingless
      • 3 years ago

      My Core i7-2600K is happy to keep me going until Summit Ridge too. Thanks Intel!

        • Srsly_Bro
        • 3 years ago

        I have a 2700K and it has aged gracefully.

      • chuckula
      • 3 years ago

      You might want to check out the AM4 motherboard reviews first.
      AMD might want to figure out how to get Zen working in a Z170 motherboard if they really want sales.

        • Krogoth
        • 3 years ago

        The first revision of Intel’s 6-series chipsets was completely flawless. Intel USB 3 controllers are completely trouble-free.

          • chuckula
          • 3 years ago

          Yeah, and if Intel had never made any improvements to its chipsets in the last five years, you might have a point.

          Literally every AMD chipset sold today is basically from the Sandy Bridge era (that includes rebrands of the same southbridge sold across multiple generations of incompatible APU motherboards; imagine the butthurt if Intel did that!), and believe me, there are plenty of bugs in AMD’s USB implementation that get a free pass because it’s AMD.

            • Krogoth
            • 3 years ago

            The point is that Intel isn’t flawless, and it took them several revisions to fix silly issues on their chipsets, namely the USB 3 controller.

            Pretending that Intel is completely flawless is disingenuous at best.

        • raddude9
        • 3 years ago

        Classic Chuckula, always here to spread FUD about products that have not even been released yet… only if those products are non-Intel of course.

      • xand
      • 3 years ago

      Sandy Bridge (the i7-2600K) launched in Q1 2011. Assuming it was a pre-emptive response to Bulldozer (October 2011), maybe Kaby Lake will be an amazing pre-emptive response to what has been released about Zen, which would mean it might finally be time for me to upgrade too!

    • chuckula
    • 3 years ago

    [quote<]Summit Ridge is an unabashedly high-end desktop chip fabricated on GlobalFoundries' 14-nm FinFET process, the same as the recently-released Polaris graphics card family.[/quote<] [quote<]AMD is also confident that Zen can scale to mobile and embedded devices, all on the same 14-nm GloFo process.[/quote<] Yeah, I'm sure I'll hear lots of mea culpas from the mob who claimed that TSMC was the only possible candidate to fab these chips just a few days ago.

      • tipoo
      • 3 years ago

      Le sigh. I expected this, with Zen already headed to GloFo because of the APUs, but I hoped for TSMC.

      Is it at least a refinement of the process that brought us Samsung iPhonegate and the 480’s non-win in efficiency?

      • rechicero
      • 3 years ago

      “Mea culpa” doesn’t mean “I was wrong”, JFYI.

      PS: Actually, depending on the language it could mean “he pisses guilt” or “piss guilt”, but I guess you wanted the Latin meaning.

        • davidbowser
        • 3 years ago

        In US English, the most common meaning is admission of guilt or fault. And yes, the Latin meaning is where we get it.

        • Jason181
        • 3 years ago

        Actually, recent archeological evidence proves that it directly translates to “My bad.”

        Of course I could be wrong. If so, mea culpa. 🙂

      • BaronMatrix
      • 3 years ago

      GloFo is the EXCLUSIVE manufacturer of Opteron chips… Even Samsung will probably be relegated to Raven Ridge, not FX… But that’s why there was pressure to license 14LPP… Samsung’s ramp was MUCH SWIFTER than Intel’s…

      Now GloFo can work on IBM’s 7-nm process, which it needs for future POWER chips and can use for Opteron…

    • tipoo
    • 3 years ago

    But what about the First moments of Zen?

    [url<]https://www.youtube.com/watch?v=vvnor9S3b0c[/url<]
