Rumor: AMD Zen engineering samples leaked and benchmarked

With the recent graphics card releases more or less out of the way, it's now time to turn our collective attention to everything Zen. The new AMD CPUs are set to be released sometime in 2017, and it's only fitting that the rumor mill has already started churning. A little while back, Guru3D got a lead on the specs for four variants of Zen CPUs.

The leak includes information on two Socket AM4-based units and two SP3-based beasts. As always, this is a rumor, so take this information with at least a couple barrels of sea salt. Here's a quick summary:

Socket | Cores/threads | Max. TDP | L2 cache | L3 cache | Base clock | Boost clock | Idle clock | Idle power
AM4    | 4/8           | 65W      | 2MB      | 8MB      | 2.8 GHz    | 3.2 GHz     | 550MHz     | 2.5W
AM4    | 8/16          | 95W      | 4MB      | 16MB     | 2.8 GHz    | 3.2 GHz     | 550MHz     | 5W
SP3    | 24/?          | 150W     | 12MB     | 64MB     | 2.75 GHz   | ?           | 400MHz     | ?
SP3    | 32/?          | 180W     | 16MB     | 64MB     | 2.9 GHz    | ?           | 400MHz     | ?

The rumors get a little more interesting than that, though. WCCFtech claims to have figures from an Ashes of the Singularity benchmark run on two particular engineering samples, which the site dubbed 1D and 2D, and which the game detected as 8-core, 16-thread CPUs. The samples apparently differ only in their revision number.

The reported benchmarks were conducted with a Radeon RX 480 graphics card. Using the "High" preset in Ashes at 1920×1080, the 1D unit turned out a "CPU frame rate" of 58 FPS. The second engineering sample didn't fare so well, though. WCCFtech then compared the leaked results from 1D to three other CPUs—an FX-8350, a Core i5-4670K, and an i7-4790. The results came out as follows:

CPU                          | Average CPU frame rate (FPS)
Core i7-4790 (3.6/4.0 GHz)   | 65.4
Zen ES 1D (2.8/3.2 GHz)      | 58
Core i5-4670K (3.4/3.8 GHz)  | 52.6
FX-8350 (4.0/4.2 GHz)        | 42

Despite the Zen CPU's apparently low clock speeds, the 1D sample acquitted itself pretty well, coming out #2 in the ranking—in other words, about 11% slower than the i7-4790 but roughly 10% faster than the i5-4670K. The FX-8350, rather predictably, was left in the dust.
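For the curious, those relative-performance figures fall straight out of the table; here's a quick sketch of the arithmetic (the FPS values are the leaked numbers, the rest is plain division):

```python
# Leaked average "CPU frame rates" (FPS) from the table above.
results = {
    "Core i7-4790":  65.4,
    "Zen ES 1D":     58.0,
    "Core i5-4670K": 52.6,
    "FX-8350":       42.0,
}

zen = results["Zen ES 1D"]

# Percentage gap to the i7-4790 and lead over the i5-4670K.
behind_i7 = (results["Core i7-4790"] - zen) / results["Core i7-4790"] * 100
ahead_i5  = (zen - results["Core i5-4670K"]) / results["Core i5-4670K"] * 100

print(f"Zen ES vs. i7-4790:  {behind_i7:.1f}% slower")   # ~11.3%
print(f"Zen ES vs. i5-4670K: {ahead_i5:.1f}% faster")    # ~10.3%
```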

Assuming any of the results check out, this bodes well enough for Zen. If the chipmaker can get the clock speeds up a little, there's a chance that meatier Zen CPUs might meet Intel's high-end Skylake CPUs in the ring for a showdown.

Comments closed
    • wingless
    • 3 years ago

    AMD clock speeds ≠ Intel clock speeds.

    • ThatStupidCat
    • 3 years ago

    I wonder if AMD’s use of more and more cores is finally having an effect. I hear Intel will bring six cores to mainstream processors starting with Cannon Lake.

    For what it’s worth I’m all for more competition.

    If I was AMD I’d be doing a lot of research into compilers and how to make them more efficient at utilizing all cores and help update those compilers especially for AMD chips. Ditto for Intel.

    • End User
    • 3 years ago

    I pine for the heady days of my beloved Athlon XP 4800+.

    (still in use by my brother-in-law)

      • Pancake
      • 3 years ago

      Surely, that’s an X2 4800+? IIRC the single core Athlon XP went up to an “Intel equivalent” 3200+. Thunderbirds are go.

        • End User
        • 3 years ago

        My obsessive hatred for Windows XP has done me wrong.

          • Krogoth
          • 3 years ago

          Why do you loathe XP so much?

          It is arguably Microsoft’s best attempt at making a decent consumer-tier OS that wasn’t plagued by stupid issues and/or questionable UI choices. Its only real faults are the lack of support for x64 and DX10+, and the lack of hardware support for anything post-Sandy Bridge.

          It still works well for legacy hardware platforms that have no need for DirectX 10+ or 64-bit applications.

            • tipoo
            • 3 years ago

            “It still works well for legacy hardware platforms that have no need for DirectX 10+ or 64-bit applications”

            Or no need for security updates

            • CaptTomato
            • 3 years ago

            FFS, XP sux balls compared to W7, and by all accounts W10 is decent.

            • Krogoth
            • 3 years ago

            Windows 7 does not have driver support for legacy platforms and has compatibility issues with legacy software.

            There’s a reason why XP persisted so dang long, and the 7/2008 R2 ecosystem is a relatively recent development in the SMB world.

            • End User
            • 3 years ago

            Lack of x64 support was the key failing for me.

            The success of Windows XP was an albatross around Microsoft’s neck. It lulled Microsoft into a state of OS stagnation that they have never fully emerged from.

            • Krogoth
            • 3 years ago

            x64 support wasn’t a killer app outside of datacenters and HPC environments until recently.

            OSes from every camp have been rather stagnant for almost 15 years now. Outside of supporting new hardware standards and raising thread-count/memory ceilings, there hasn’t been that much innovation.

            • End User
            • 3 years ago

            x64 was a godsend to consumers such as myself. Bury your head in the sand much?

            You live in a bubble if you think that OS stagnation (apart from Windows) is a thing.

            • Krogoth
            • 3 years ago

            You are a classic prosumer, so of course you would have appreciated x64, but the overwhelming majority of the SMB and mainstream market had no need for it until recently.

            There was no killer app that made x32 woefully inadequate and would have justified the pain and cost of upgrading.

            OSes have been stagnant since the early 2000s. Stop looking at the pretty UI and look at the actual code itself. The newer versions are pretty much the same shit except that they support newer software and hardware standards.

            Can you tell me one ground-breaking OS feature since the early 2000s that isn’t some stupid UI nonsense?

            • ClickClick5
            • 3 years ago

            Fewer kernel panics and hard locks? *shrug*

            • End User
            • 3 years ago

            Funny you should focus on the apps. I’m on the Office Insider fast track for Office for Mac. Microsoft just added x64 support to Office for Mac. On the surface this example supports your “no need for x64 until recently” claim. But it ignores the biggest gift that x64 gave all consumers – the ability to use more than 4GB of memory. Breaking through the 4GB barrier was the “killer app” that x64 brought to the consumer. After that, x32 was shown to be woefully inadequate.

            As far as ground-breaking OS features go, all you need to do is compare Mac OS X 10.0 to macOS Sierra. The code is irrelevant.

            • Krogoth
            • 3 years ago

            The 4GiB barrier only became an issue in recent years in the SMB and mainstream markets, due to crappy “Web 2.0/3.0” sites, some PC game exclusives, and bloated applications.

            It took almost ten years for x64 to become a killer application in the SMB/mainstream market.

            • End User
            • 3 years ago

            For the vast majority of mainstream users a Windows-based PC is overkill so, for that market, you are right. All those users need to get their work done is a Chromebook.

            • smilingcrow
            • 3 years ago

            Wasn’t XP a bit of a mess on release, taking a few Service Packs to knock it into shape?
            I’d say that Windows 7 was their best consumer O/S: it was solid from day one, had x64 support, and if you wanted backward compatibility for ancient software, the Pro version included the XP Pro VM.

            • tipoo
            • 3 years ago

            Yep, pre-SP1 and even SP2 XP memories are getting a bit warmed over in people’s minds. It was a mess at the start. And by the end of its life, with SP3 and fully patched up, it wasn’t dramatically faster than 7 on older hardware either.

            • Krogoth
            • 3 years ago

            XP just had issues with its stock USB 1.1/2.0 drivers at launch. People forget that its predecessors 9x, NT, and 2000 had similar issues as well. The OS itself was rock solid for the most part: a night-and-day difference from NT4 and 9x.

            • smilingcrow
            • 3 years ago

            NT4 and Win2K were solid, alas XP was where MS gave up on their workstation O/S and sullied it with the consumer crap.

            • Krogoth
            • 3 years ago

            NT4 and W2K had driver issues with consumer-tier hardware and USB issues that also plagued XP.

            XP unified all their products under one codebase. XP was just W2K+ with a prettier default UI and optional consumer-tier stuff. If you didn’t like the consumer-tier stuff you could completely remove it and it became “W2K+”. For all intents and purposes, XP rendered W2K completely obsolete, and Microsoft discontinued support for it well before they pulled the plug on XP.

            • Mr Bill
            • 3 years ago

            I ran Windows XP Professional x64 Edition for years rather than upgrade to Vista or Win 8.1. Finally upgraded to Win7 Pro this year.

        • adampk17
        • 3 years ago

        The single cores went up to at least 3500+. I had one. I think the real clock speed was 2.2 GHz.

          • Krogoth
          • 3 years ago

          Actually, the single-core A64 went up to the 4000+, which ran at 2.4GHz with 1MiB of L2 cache. The FX-60 was the step up at 2.6GHz but was completely unlocked.

    • 1sh
    • 3 years ago

    AMD is going to be the first to bring octo core CPUs to the mainstream market AGAIN.
    Who the hell wants to pay a gazillion dollars for an Intel octo core CPU?

    • TwoEars
    • 3 years ago

    What the heck is a “CPU frame rate” in the context of using an RX 480 graphics card? Are we to believe that the game is CPU-limited at around 60fps@1920×1080? If so, this does not bode well.

    • Star Brood
    • 3 years ago

    At best, it’s a competitor. Hopefully, it gets Intel to drop their prices. At worst, it will be a cheap option for price-wary buyers. In any case, AMD fanboys can rejoice.

      • ikjadoon
      • 3 years ago

      In terms of FPS per GHz, Zen is 10% faster than Haswell (but on an 8c/16t vs 4c/8t). That’s faster than Skylake FPS per GHz, AFAIK, if you extrapolate….

      I don’t care how many core/uncore/cache/ring/modules or anything like that…. Just price per GHz.

      • ikjadoon
      • 3 years ago

      Huh? It’s faster clock-per-clock than Haswell…are we all reading the same graphs?

    • chuckula
    • 3 years ago

    As for the 3.2GHz clock boost, I can see the final chips having a higher clockspeed, but if these really are the “big” cores that we’ve been promised I have to think about what kind of power envelope AMD realistically expects if all 8 cores can actually turbo up to near 4GHz or so simultaneously.

    Incidentally, if ES chips really do have noticeably lower clockspeeds than the final products then this can only be good news for Kaby Lake. I mean, three months ago when TR reported on the engineering sample, it was turbo boosting to 4.2 GHz (https://techreport.com/news/30074/alleged-kaby-lake-cpu-shows-its-face-in-sisoft-sandra-database), so who knows what the final speeds might be.

      • ikjadoon
      • 3 years ago

      I think Intel is a lot further along with Kaby Lake, though. Z270 motherboards were teased way back in May. And, KL is likely to have the fewest performance enhancements ever: they already did “process” and “architecture”…what else is left?

      IIRC, that Zen 3.2GHz 8c/16t was in a 95W TDP.

      http://semiaccurate.com/forums/showthread.php?t=9270

    • slowriot
    • 3 years ago

    If AMD can sell me a CPU on a MODERN platform with a bunch of cores I’ll likely buy a few to replace my home servers. Doesn’t look like they’ll be in my gaming rig or laptop though.

    • Kretschmer
    • 3 years ago

    If the 8-core part is hitting 3.2 at 95 watts, I fully expect it to be released at 3.8 and 160W.

    AMD seems to like to compensate for poor execution with clock boosts and poor power draw, as of late. 🙁

      • chuckula
      • 3 years ago

      Not 160 watts.
      The wraith coolers are only rated for up to 125W.

      Just watch out if AMD starts shipping a watercooler with every chip.

      • terranup16
      • 3 years ago

      Engineering samples often are clocked significantly lower. Zen’s current ES clocks are roughly the same as ES clocks for Bulldozer/Piledriver. AMD is still promising 95W TDP, so I would assume they aren’t going to jack clocks and TDP, but rather you’re going to see clocks rise to at least 3.6/4GHz @95W.

      AMD knows that Bulldozer got annihilated because per-thread performance matters, big-time, and Bulldozer came in a distant second. AMD this time around isn’t going to promise 8 cores at a 95W TDP but only deliver 2.8/3.2GHz clocks; they would slice down to a 95W quad core with the best clocks they could sustain at that TDP to provide a direct competitor, while offering a “hot and heavy” 8-core at 135W TDP or something and marketing that as going head-to-head with Intel’s HEDT.

      AMD’s continued desire to reiterate 8 cores @95W TDP shows Intel’s mainstream processors to be AMD’s primary target, so Zen is going to at least come close to i7-XXXXK clocks (in a perfect world, I’d imagine AMD will gut OC headroom and bin heavily to put a top-end offering of 4/4.4GHz or something to solidly out-clock Intel’s chips to help make up some of the IPC gap).

      • ikjadoon
      • 3 years ago

      Poor execution? You’re joking….it’s 10% faster than Haswell clock-per-clock.

      Faster than Skylake in terms of FPS per GHz and people still say “poor execution”…sheesh. Read the data we have, not whatever data you have in your head.

        • chuckula
        • 3 years ago

        “Poor execution? You’re joking....it’s 10% faster than Haswell clock-per-clock.”

        Read this article: http://www.guru3d.com/news-story/amd-zen-engineering-sample-aos-further-analysis.html

        Point out to me where the 8-core Zen at 3.2GHz maintains a consistent 10% lead over the 8-core Haswell at 3.2GHz.

          • Lord.Blue
          • 3 years ago

          Please remember that AotS limits the CPU cores being used to 4. So it is a 4 core to 4 core. Plus there are no 8 core Skylake systems…yet. And if you’re talking about the hyperthreading of the i7, Zen does that as well.

            • chuckula
            • 3 years ago

            “Please remember that AotS limits the CPU cores being used to 4. So it is a 4 core to 4 core.”

            First of all, this “4 core limit” has suddenly popped into existence in this discussion thread with literally zero supporting evidence that it is true and plenty of contradictory evidence that it’s wrong.

            Here’s my evidence in support of the fact that this “4 core limit” is bunk: when Raj got up on stage at Computex to introduce the RX 480, he used an 8-core 5960X rig to do it. If AotS can’t use more than 4 cores, then why did he intentionally want to hurt the performance of his own cards?

            Second of all, if there is a magical 4 core limit then that 5960X is ALSO LIMITED TO FOUR CORES, so we have a situation of an effectively 4-core Haswell system at 3.2 GHz -- so no “unfair” clockspeed advantage -- absolutely destroying Zen.

            It’s also pretty bizarre that in a benchmark that purportedly can’t use more than 4 cores, a 4-core 3.2 GHz Haswell chip is leaps & bounds faster than a 4-core 3.8GHz Haswell chip (the 4670K), but see point 1 above about how this mythical “4 core limit” isn’t real.

            • Waco
            • 3 years ago

            http://semiaccurate.com/2016/03/01/investigating-directx-12-cpu-scaling/

            All I could find to back up the “4 core” claim.

            • chuckula
            • 3 years ago

            Yeah, coming from SemiAccurate and the fact that AoTs in August of 2016 has been updated dozens of times from the AoTs of February 2016 [look at the version numbers] means I’m not impressed with their findings (as usual).

            Edit: For example, just looking at their results table they ran it at the “Extreme” preset with a rather pedestrian Fury (non-X) GPU. Sounds like a great way to stress the GPU and do nothing to actually test CPU scaling. By way of contrast, the article that Guru3D posted had a wide range of quality settings and — as expected — the CPU dominates at low-quality settings but then it turns into a GPU benchmark at high-quality settings.

            • Waco
            • 3 years ago

            Oh, I agree, I don’t trust SA at all. I’m just hoping there’s a kernel of truth in the whole debacle.

          • ikjadoon
          • 3 years ago

          Oh. Well. I did not see that link in the TR news post above. Hmm….that’s pretty crappy, then.

        • Kretschmer
        • 3 years ago

        I’m referring to poor x86 execution over the past decade.

      • maxxcool
      • 3 years ago

      I will be honest. All 8 cores at 3.8GHz @ 160 watts would not be ideal.. but really, in the scope of things, would not be bad.

      A silicon spin later that could come down.

      I would absolutely tolerate that if it comes within 95% of Skylake clock for clock. Hell, I have no idea what the Thuban CPUs in my personal test boxes at work were pushing for watts when I had them at 3.9GHz with throttling disabled.. and I was quite happy with them with Hyper 212 air coolers.

      But as Chuck mentions, and as some leaks ‘purportedly’ support… 125 watts seems the top end. *But* I have a table full of salt for that still.

      /tips hat/

    • Krogoth
    • 3 years ago

    I’m cautiously optimistic about Zen if the numbers do hold-up.

    Zen might bring some much-needed competition to the high-end desktop and lower-end workstation market, where there is a dearth of options. You have to pay through the nose if you want ECC support on the Intel side and/or want something beyond six-core chips.

      • rudimentary_lathe
      • 3 years ago

      +1 on the ECC support. Hopefully AMD continues supporting it with their mainstream parts.

      • terranup16
      • 3 years ago

      Yeah, I’m also tentatively excited on the datacenter side of things. The current rumors are that AMD is going to scale the eight-core Zen dies up to 16 and 32 cores not by actually scaling or rearranging the die contents, but rather by adding additional CPUs via interposer. I am hoping for good results from this approach in terms of per-core frequency and overall cost compared to Intel’s solutions.

    • kvndoom
    • 3 years ago

    Every time I hear news of a new CPU from AMD, I can’t stop Tech Report’s Bulldozer comic strip from entering my memory. It’s a Pavlovian response.

      • Kretschmer
      • 3 years ago

        Link: https://techreport.com/blog/22009/execution

        • kvndoom
        • 3 years ago

        And now it cannot be unseen! :O

    • WaltC
    • 3 years ago

    The Zen sample wasn’t leaked…it was the “benchmark” that was leaked (you need to rejigger the title here because you make it sound like the CPUs themselves have been leaked…!)–no info on core-logic, ram, etc. ad infinitum. This is the kind of malarkey that surrounded nVidia’s infamous nV30 “launch” (AKA the leafblower)…;) Remember? (How could one forget.)

    As to the naysayers…on the eve of the Athlon launch people were saying similar things about AMD until AMD cleaned Intel’s clock for a couple of years…;) The point to be taken here is that AMD is jumping back into the high-end x86 desktop and server cpu markets once again. Anyone who would like to see AMD out of the picture is yearning for the return of $600 desktop CPUs, and must be fond of being forced into ram and motherboard changes like most people change socks, etc. Zen will require new mboards & ram, too–but it’ll be the first in ‘how many years?’ Users got a lot of mileage out of AM3+…a lot more than Intel has provided, imo.

    Quite truthfully, if not for AMD we’d all be running RDRAM, some slow variant of Itanium, and we’d be paying through the nose royally for all of it. And of course, needless to say, when running a game at 4k resolutions with max IQ the difference in frame-rate performance between AMD and Intel cpus *presently* is quite small–because the higher the res and IQ settings the more GPU-limited the performance of a game becomes.

      • kvndoom
      • 3 years ago

      600 dollars? You’re being too kind. I still remember the 300MHz Pentium 2 costing almost two grand upon its release.

        • jihadjoe
        • 3 years ago

        That off-die SRAM cache must’ve been $uper pri¢€¥!

        Funny though that after the i7-5775C having a huge off-die cache is cool again.

          • ImSpartacus
          • 3 years ago

          It’s always been a thing for big server chips. The biggest Broadwell chip has 55 MB of top level cache. There are simply a lot of cores to feed.

      • ronch
      • 3 years ago

      I don’t change my socks. Tolerate me. Don’t be a hater. Don’t be racist.

        • Redocbew
        • 3 years ago

        Ew. You must have stinky feet.

      • chuckula
      • 3 years ago

      “Users got a lot of mileage out of AM3+...a lot more than Intel has provided, imo.”

      Yeah, this nonsense again. Guess what: AM3+ really supported two chips:
      1. Bulldozer (2011).
      2. Piledriver (2012).

      I guess Piledriver is an “upgrade” of Bulldozer... or more like what Bulldozer should have been in 2011. So yeah, that AM3+ platform has such “mileage”. More like, AMD literally hasn’t launched anything new since 2012. If Intel had just given up after Ivy Bridge and stopped launching anything, then you could say that those Intel platforms have exactly the same “mileage” too.

      “Quite truthfully, if not for AMD we’d all be running RDRAM, some slow variant of Itanium, and we’d be paying through the nose royally for all of it.”

      There’s no real proof of any of that being true, and frankly the fact that you need to go back in time about 15 years to justify how AMD saved all of us is kind of telling about the current situation. I could just as easily say that “quite truthfully” if it weren’t for Itanium then AMD would never have been scared enough to make the minor extensions to x86 that were the first version of x86-64.

        • cegras
        • 3 years ago

        > There’s no real proof of any of that being true, and frankly the fact that you need to go back in time about 15 years to justify how AMD saved all of us is kind of telling about the current situation.

        Gross margin on intel and nvidia?

          • chuckula
          • 3 years ago

          Q: Gross margin on intel and nvidia?

          A: Better than AMD.

          Analysis of A: To an AMD fanboy, the profits of companies that aren’t AMD are a sign of some fundamental “unfairness” in the world. To everybody else, the profits are a sign that AMD’s competitors are doing something right and that AMD should be emulating their competitors instead of putting on a disingenuous persecution complex about how life isn’t fair.

            • terranup16
            • 3 years ago

            I don’t personally care about fairness/unfairness towards AMD. I would rather instead simply have a viable competitor to nVidia and Intel that can challenge them on price/performance/reliability adequately enough to stimulate nVidia and Intel to cut into their profit margins to lower prices.

            • cegras
            • 3 years ago

            When AMD was competitive, INTC and NVDA’s gross margins were much, much lower. All WaltC was saying was that competitiveness gives us better prices, and incentivizes companies to develop better products.

            • chuckula
            • 3 years ago

            Another statement made with zero supporting evidence, based on the same “glory days” spiel I could hear from a drunk guy at a bar.

            Let’s look at a randomly selected quarterly report. How about, Q4 of 2005, which was several quarters before the Core 2 launched and when AMD was at the height of selling $1000+ dual-core Athlons.

            http://www.intel.com/pressroom/archive/releases/2006/20060117corp.htm

            61.8% gross margin. Hell, Intel had better margins when AMD was competitive than they have now.

            • cegras
            • 3 years ago

            https://ycharts.com/companies/INTC/gross_profit_margin
            https://ycharts.com/companies/NVDA/gross_profit_margin
            https://ycharts.com/companies/AMD/gross_profit_margin

            Do you see a trend here? You really thought you could pull a fast one by cherry-picking? You can plot AMD’s margin against its competitors, and maybe you can conclude when they were competitive and not.

            • chuckula
            • 3 years ago

            I’m sorry, you just posted a 5 year chart for Intel’s profit margin as proof that AMD’s “glory days” came at the expense of Intel’s gross margin and how this is so magically great for consumers* or something.

            In fact, your stupid graphs prove the opposite of your point. If anything, AMD was even further behind its competitors in 2011 but your charts show higher margins in 2011 than in subsequent years.

            What product was AMD selling in 2011 that materially was “great” and materially hurt Intel in some real way? About the only moderate “win” I can think of was the integrated graphics in Llano vs. Sandy Bridge. Too bad that AMD screwed the pooch on Llano availability and got sued by its own shareholders for lying about it (http://www.tomshardware.com/news/amd-lawsuit-class-action-stock-llano,25798.html).

            As for Nvidia, I see varying margin levels that don’t seem to have much at all to do with AMD’s success or failure over the last 5 years. Margins go up and margins go down. AMD is not the center of the universe for either Intel or Nvidia, and you need to learn that AMD’s product strategy doesn’t govern Intel or Nvidia’s actions.

            * Incidentally, if AMD is so concerned about us poor, poor consumers, then why did they intentionally dump TSMC and run over to GloFo -- who has literally never fabbed a real GPU ever -- for Polaris? Was it because they just CARE TOO DAMN MUCH? No, it was to save a buck at the production cost level. The results are underwhelming RX 480 parts that aren’t particularly available 2 months after they launched. But uh... some of them are $20 cheaper than a GTX 1060. Therefore, miracle.

            • cegras
            • 3 years ago

            Huh? You can find the data back to 2004 in the table right below the chart. It’s clear that when AMD is competitive, INTC and NVDA are forced to sell their products at lower margins. Cherry picking a year or quarter where the margin bucks the average trend is the most classic mistake of data interpretation.

            • chuckula
            • 3 years ago

            “It’s clear that when AMD is competitive, INTC and NVDA are forced to sell their products at lower margins.”

            Bullcrap. According to you and a bunch of fanboys, Piledriver was competitive with whatever Intel’s been selling and is selling today. No correlation there.

            From what I was led to believe, GCN + HBM was the dawn of a new age of mankind. Nvidia isn’t even supposed to be in business anymore. Ain’t seeing that in those graphs either.

            • Redocbew
            • 3 years ago

            Put a little more diplomatically (not that I’m one to talk…), competition usually affects margins only when the competing products are actually good (I did say “a little”…). If the competing products, for whatever reason, don’t result in significant pressure, then it’s not really going to change anything.

            With stronger competition from AMD we probably wouldn’t be seeing $1700 i7’s, but will the sale of those change Intel’s margin picture overall? Probably, not so much. The day when it does is the day we have a problem. That means the number of people who think it’s a good idea to buy a $1700 CPU for their desktop PC is growing. We don’t want it to be doing that.

            • cegras
            • 3 years ago

            Clearly, the market decided otherwise, as Intel was able to sell at higher margins due to having a superior product.

            “From what I was led to believe, GCN + HBM was the dawn of a new age of mankind.”

            I haven’t commented on anything regarding AMD’s CPU products. You can google as hard as you want. I’m sorry you let yourself be led down a false path.

            • cegras
            • 3 years ago

            Here’s another website I found that shows charts back ten years. Unfortunately, I can’t spoon feed you completely, but you can compare the bathtub and inverse bathtub curves of AMD to NVDA and INTC, respectively.

            http://www.wikinvest.com/stock/Intel_(INTC)/Data/Gross_Margin
            http://www.wikinvest.com/stock/Advanced_Micro_Devices_(AMD)/Data/Gross_Margin
            http://www.wikinvest.com/stock/NVIDIA_(NVDA)/Data/Gross_Margin

            • madseven7
            • 3 years ago

            Isn’t that when all the Intel muscle was used to prevent AMD from gaining market share?

        • rechicero
        • 3 years ago

        I understand you’ve made it your mission in life to see AMD fail, for whatever reason, but… can you really believe, just for one second, that Intel would not have done what they actually tried to do? Really?

          • chuckula
          • 3 years ago

          You know who would have been a better friend to Prince the day he OD’d?

          Would it have been:
          1. A sycophant who loves to tell inflated stories of the good old days long long ago and pretend that popping that next round of pills is a good idea?

          2. Somebody who will slap him in the face and tell him what the real situation is?

      • tipoo
      • 3 years ago

      Forget? If I squint my brain just right I can still hear the sound!

        • JustAnEngineer
        • 3 years ago

        Here’s Stuart Pankin to help you remember:
        https://www.youtube.com/watch?v=MK0hU0OYvCI

          • tipoo
          • 3 years ago

          The craziest bit about that commercial is that their advertising never got dramatically better, lol

            • chuckula
            • 3 years ago

            That announcer guy was the Fix3r’s uncle, right?

      • Voldenuit
      • 3 years ago

      I agree with all your points, except about gaming.

      High-fps gaming is more prevalent now, and CPUs can become a bottleneck at the 144 and 165 fps that the faster monitors can support these days.

        • ikjadoon
        • 3 years ago

        Agreed. But, in terms of CPU FPS per GHz, Haswell is at ~16.35 FPS per GHz. Zen ES is at 18.13 FPS per GHz, so already about 10% faster than Haswell in terms of FPS per GHz…and it’s only at 3.2GHz.

        So…..good vibes! 😀 I play on 120Hz, so I need all dat CPU horsepower please!
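        (Those per-clock figures are easy to reproduce from the leaked numbers; a rough sketch, assuming each chip actually sustains its boost clock for the whole run, which is generous:)

        ```python
        # FPS-per-GHz arithmetic behind the claim above. Assumes each chip
        # sustains its boost clock for the entire benchmark, which is generous.
        chips = {
            "Zen ES 1D":    (58.0, 3.2),   # leaked AotS CPU framerate, boost GHz
            "Core i7-4790": (65.4, 4.0),
        }

        for name, (fps, ghz) in chips.items():
            print(f"{name}: {fps / ghz:.2f} FPS/GHz")
        # Zen ES 1D:    18.12 FPS/GHz
        # Core i7-4790: 16.35 FPS/GHz  -> Zen roughly 10% ahead per clock
        ```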

          • Redocbew
          • 3 years ago

          Kill the percentages.

          There’s nothing magical about division which will imbue a pointless calculation with some intrinsic value. They mean nothing.

      • Redocbew
      • 3 years ago

      Once we started integrating various controllers into the CPU die the lifespan of the socket changed. The fact that AMD kept the same one around for so long is really just another indication of their platforms aging and failing to keep up. Furthermore, the longer you keep the same socket around the less likely it is that you’ll be able to use a new chip in an old board as a drop-in replacement. Socket A was around for a number of generations, but it wasn’t always so quick and easy to upgrade an old board on that socket either. How many people really care about the lack of an upgrade path these days anyway?

      If AMD wasn’t around, someone else would be. Like most everyone else I’d like to see AMD stick around, but there’s a little bit of RDF going on if you think Intel would remain all alone in x86-land forever without them.

        • terranup16
        • 3 years ago

        Would anyone else actually compete in x86? I figure that if AMD dropped out, it would be able to sell its x86 license (I know there’s some question around that, but I suspect that regardless of any contract it would be effectively necessitated for Intel to not get slammed for antitrust), but it still seems like a long shot.

        Regarding sockets, if we truly get to the point where the socket and CPU are generational, then why isn’t BGA mainstream for desktops yet? You would at least expect that the highest-end i7 for each of the mainstream and HEDT platforms would come in a BGA variant (since there would be no viable upgrade over that CPU).

          • yuhong
          • 3 years ago

          “I know there’s some question around that, but I suspect that regardless of any contract it would be effectively necessitated for Intel to not get slammed for antitrust”

          Thinking about it, I’d go so far as to say that it is actually x86 CPU competition that is useful, not competition between PC makers doing the same “beige boxes”. I would have suggested that Intel buy Compaq back in 1991, for example, with AMD buying another PC maker a decade later (when it had the resources to do so).

          • Redocbew
          • 3 years ago

          It’s a longshot because of the initial R&D, and because the bar is set pretty high for anyone who doesn’t have experience in x86. What I think is more likely is that the AMD brand would continue on under some parent company, at least temporarily. IBM already has an x86 license, so chips could be made there if needed.

          I think the success of mini PCs like the ZBox and the NUC show that BGA could work on the desktop. It’s the PCIe slot(s) which make the platform flexible more than the socket. If I had to guess I’d say it’s mostly nontechnical on why that hasn’t happened. Maybe OEMs don’t want to be restricted to using only certain boards and CPUs.

      • travbrad
      • 3 years ago

      “Anyone who would like to see AMD out of the picture is yearning for the return of $600 desktop CPUs”

      I don’t want to see either AMD or Intel out of the picture. They both raise prices when they don’t have true competition (see the pricing chart at https://techreport.com/review/8295/amd-athlon-64-x2-processors). What we want above all else is healthy competition between them.

      “must be fond of being forced into ram and motherboard changes like most people change socks, etc. Zen will require new mboards & ram, too--but it’ll be the first in ‘how many years?’ Users got a lot of mileage out of AM3+...a lot more than Intel has provided, imo.”

      AM3+ has lasted awhile, sure, but if you got a high-end Bulldozer at launch it’s not like there has been a significantly faster AM3+ CPU to upgrade to since then. Technically it was backwards compatible with Phenom II, but Phenom II owners had AM2+ or AM3 boards and still had to get a new socket/mobo for Bulldozer. I doubt we will ever see any significant upgrades within the same socket from AMD or Intel again because of how much CPU improvements have slowed down, unless you start on a low-end CPU and upgrade later to the higher-end parts.

      • smilingcrow
      • 3 years ago

      “Anyone who would like to see AMD out of the picture is yearning for the return of $600 desktop CPUs.”

      Intel has had no meaningful competition for 10 years yet their current highest end mainstream consumer CPU is $350. They’ve had 10 years to raise prices due to no competition but haven’t bothered.

      “and must be fond of being forced into ram and motherboard changes like most people change socks, etc. Zen will require new mboards & ram, too–but it’ll be the first in ‘how many years?’ Users got a lot of mileage out of AM3+…a lot more than Intel has provided, imo.”

      Yet a Sandy Bridge system built five and a half years ago still beats AMD’s best so there has been no requirement to upgrade to keep ahead of AMD.

      “Quite truthfully, if not for AMD we’d all be running RDRAM, some slow variant of Itanium, and we’d be paying through the nose royally for all of it.”

      Quite truthfully, you are full of idle and wild speculation which has little basis in reality. AMD might as well not have existed these last 10 years, but Intel would still have continued to improve their tech so they can sell people an upgrade. This is especially true for laptops and servers.

      “And of course, needless to say, when running a game at 4k resolutions with max IQ the difference in frame-rate performance between AMD and Intel cpus *presently* is quite small–because the higher the res and IQ settings the more GPU-limited the performance of a game becomes.”

      Even if that is true, which judging by the rest of your fantasies I don’t take for granted, who cares? Who actually has the hardware to run 4K games with all the settings maxed out? An irrelevant less-than-1%-of-1%, and anyone who has spent that much cash would hardly choose the current AMD platform.

      AMD did some great things over ten years ago, and maybe, hopefully, they will again in the future, but let’s not slap them on the back for being a joke for a decade and pretend they kept Intel honest. Pure fantasy.
      Your weak-ass post getting so many up-votes says something about this site.

    • WhatMeWorry
    • 3 years ago

    The one good thing AMD has going for it is that it seems like Intel’s recent CPUs are almost stationary targets.

      • travbrad
      • 3 years ago

      They are using TSMC instead of GloFo for it too, which, if GPUs are any indication, seems like a better manufacturing process, certainly in terms of efficiency. Their high power consumption in recent years has kept them from getting many design wins.

      I’m cautiously optimistic for Zen at this point. If they can get the clocks higher it could actually be pretty competitive with Intel or at the very least get much closer than they have been the last 5+ years. It could possibly even get Intel to lower prices on some stuff or release my dream processor (Kaby Lake with eDRAM/L4).

        • tipoo
        • 3 years ago

        Is that confirmed, or was it just the strong rumour? I hope the 480 was enough to fulfil the wafer silicon agreement for a while and both Zen and higher end Polaris/Vega get TSMC as rumoured.

        • chuckula
        • 3 years ago

        “They are using TSMC instead of GloFo for it too,”

        This is the rumor that won’t die even though there’s no logic behind it whatsoever.

        [Edit: Oh look, I got downthumbed by the AMD “fan” squad, but none other than Lisa Su just got up on stage and said that I was right, and the usual suspects who think they are so “good” for being “pro” AMD were flat wrong... again: https://techreport.com/review/30540/amd-gives-us-our-first-real-moment-of-zen ]

        Hint: That Polaris 11 article that just got posted? A cut-down version of that GPU is going into AMD’s APUs at some point in 2017, and we know for a fact that GloFo fabbed those Polaris 11 chips. There is zero chance that different models of Zen processors are being made by two radically different fabs. AMD doesn’t have the money or manpower to pull that off. Zen is a GloFo production, for better or worse.

          • RAGEPRO
          • 3 years ago

          Sincere question: why does a discrete GPU being fabbed one place necessarily mean another related design is also fabbed there?

          For what it’s worth, I can’t find anything to corroborate the idea, and it’s never made a lot of sense. Your argument doesn’t either, though. 🙂

            • tipoo
            • 3 years ago

            I think he meant that since Glofo is fabbing Polaris 11 and APUs are coming out with Polaris 11, it follows that Zen has been taped out on Glofo for the APUs, and that it then follows that moving desktop parts to TSMC would mean having Zen designed for both fabs.

            I’m not sure if I agree though; I thought there was hubbub a few years back about shifting to a model where they could more easily shift between fabs as needed.

            • RAGEPRO
            • 3 years ago

            Yeah, I followed his meaning. I just don’t understand it. It seems like connecting A to R, or to put it another way, it doesn’t follow. I ate a bunch of bananas today, so that’s why my car is running better. Err?

            • tipoo
            • 3 years ago

            It’s not that disjointed…
            Polaris 11 on Glofo = Polaris 11+Zen APUs most likely on Glofo as the GPU is already taped out there, which then = Zen most likely on Glofo as Zen is already taped out there. For it to be otherwise, either Polaris 11 is taped out on both fabs or Zen is. His argument is just that AMD is too cash strapped to use both right now.

            • Redocbew
            • 3 years ago

            Well yeah, they probably are, but why would it be any easier on Global Foundries to take on Zen since they already have Polaris? They’re completely different chips, so I doubt there’s a lot in the tool chain for one which can be re-used for the other. Unless it’s just an issue of volume, why would it be any cheaper to use Global Foundries for both just because they already have one?

            • tipoo
            • 3 years ago

            “but why would it be any easier on Global Foundries to take on Zen since they already have Polaris?”

            The line of reasoning was already highlighted, not sure how else to explain it or that I even agree with it. But you’re dropping all the causal links from what you said. Glofo making Polaris 11, Polaris 11 being in Zen/Polaris APUs, which then means Zen is already taped out on Glofo since Zen is in Glofo APUs. To also have it made on TSMC takes significant engineering, you can’t just ship off a design to any old fab and have them print it out. You have to use their transistor libraries and such.

            So the thought is that a cash strapped company like AMD probably doesn’t have either:
            1) Zen taped out on TSMC AND Glofo
            or
            2) Polaris 11 taped out on TSMC AND Glofo.

            Because the above necessitates either. Either Polaris 11 is on both, or Zen is, and for development cost reasons that may not be likely.

            • Redocbew
            • 3 years ago

            So, because Zen purportedly has a Polaris IGP, you think having Zen made somewhere other than Global Foundries would be a kind of dual sourcing?

            Not sure if I buy that either, because it assumes that a Polaris IGP would have all the same functional components of an existing discrete GPU. Otherwise, it’s going to be a different chip anyway. There’s also the issue of how to integrate the IGP with the rest of the CPU die which clearly isn’t an issue with a discrete GPU. It’s all outside my field so I could be completely bass ackwards here, but that’s all stuff that would need to be taken care of regardless of the fab in which any of this is made.

            • tipoo
            • 3 years ago

            I’m not saying it, just explaining what the above meant. I thought I had heard TSMC was making some high end AMD parts, so I’m hoping for that.

            • RAGEPRO
            • 3 years ago

            A theoretical Polaris+Zen chip is not Polaris 11 though. It’s a different chip altogether, so everything will have to be done from scratch. Fabbing Polaris 11 doesn’t give GloFo any particular advantage in fabbing another chip, even if they are technologically related. It’s not as if these things are built from parts; you can’t re-use Polaris 11s as APU dies or anything.

            • Leader952
            • 3 years ago

            “I’m not sure if I agree though; I thought there was hubbub a few years back about shifting to a model where they could more easily shift between fabs as needed.”

            Not without spending an untold number of additional $$$, along with manpower at both sites. So with AMD cash-strapped - not going to happen.

            • chuckula
            • 3 years ago

            “Sincere question: why does a discrete GPU being fabbed one place necessarily mean another related design is also fabbed there?”

            Money.

            “Your argument doesn’t either, though. :)”

            Oh, my argument makes perfect sense. See my response above and then go look at the bond obligations that AMD has coming due in about 2.5 years.

        • BaronMatrix
        • 3 years ago

        Only GF makes FX\Opteron…

        • ronch
        • 3 years ago

        “I’m cautiously optimistic for Zen at this point. If they can get the clocks higher it could actually be pretty competitive with Intel or at the very least get much closer than they have been the last 5+ years. It could possibly even get Intel to lower prices on some stuff or release my dream processor (Kaby Lake with eDRAM/L4).”

        Well, that’s certainly something you don’t hear every day. /s

      • FuturePastNow
      • 3 years ago

      I’ll bet within 6 months of Zen, assuming Zen is decent, we’ll see a 6-core “mainstream” socket CPU from Intel.

        • chuckula
        • 3 years ago

        Too late: http://hothardware.com/news/intel-brews-6-core-14nm-coffee-lake-processors

    • sweatshopking
    • 3 years ago

    Every amd CPU launch for past ten years
    – leaked Info
    – fans: Maybe, guys, maybe! Just might be decent! Imma buy one! Looks fast!
    – launch: Wtf is this garbage damnit crap

      • tipoo
      • 3 years ago

      He’s not wrong. Phenom will save us! Er, ok, ok, bulldozer wi – oh, oh dear.

      I’m hoping for the best but expecting the worst so I won’t be let down here.

        • brucethemoose
        • 3 years ago

        You won’t be, it can’t possibly be worse than Bulldozer!

        Right?

    • pranav0091
    • 3 years ago

    What is this CPU frame rate?
    I know it’s a metric, but what exactly does it measure? It certainly can’t be making the game render completely on the CPU, I suppose.

    Is it multi threaded? How many threads can it use? How many cores can it use? Questions, questions…

    <I work at Nvidia, though my opinions are only personal>

    • TheJack
    • 3 years ago

    Intel is spending billions for a couple of IPC gains. These things are expensive. Don’t expect anything more than a rebrand from AMD.

      • flip-mode
      • 3 years ago

      I think you just had a retarded moment. It’s a new architecture; it’s already not a rebrand.

        • TheJack
        • 3 years ago

        yeah

    • wingless
    • 3 years ago

    Zen isn’t looking like it’s hot garbage, so it’s OK. We’ll get 8 cores with 16 threads and ALMOST-Intel levels of performance. For a budget system that’s good in my book. What did you expect? For AMD to beat Skylake by 50% in single-threaded applications? Let’s be realistic about this. AMD, with their limited and almost-bankrupt resources, have done a miracle just to come this close to Intel levels of performance. Building a system with this won’t be the catastrophe any of their current or previous CPUs are. Consumers can save a buck and still have a decent rig, and maybe Intel will drop the price on their $1700 CPU to $1500.

    It’s a win-win for everyone.

      • Leader952
      • 3 years ago

      “Zen isn’t looking like it’s hot garbage, so it’s OK. We’ll get 8 cores with 16 threads and ALMOST-Intel levels of performance. For a budget system that’s good in my book.”

      If AMD has to price Zen for the budget market to sell it, then AMD is over and will be gone by 2020.

      • ikjadoon
      • 3 years ago

      Uh, “almost”? It’s 10% faster than Haswell clock-per-clock. That’s better than Skylake in terms of FPS per GHz.

      If it can get higher clocks…well, let’s say “garbage” is the farthest thing it would be.

      Are we reading the same graphs??

    • bfar
    • 3 years ago

    I keep saying it. If these perform reasonably well in single-threaded apps, say within 15% of a similarly priced Intel chip, are true 8-core parts, and are priced below Intel’s lineup, they’ll fly off the shelves. 4-core parts will look like poor value the second these become available.

    • kuttan
    • 3 years ago

    Hope my 6-year-old Core i5-750 rig can survive till Zen releases. My PC already crashes intermittently and sometimes refuses to boot.

      • terranup16
      • 3 years ago

      I feel ya. I’m on an i7-950, so not quite as bad off, and I haven’t hit crash city yet, but… it’s getting harder and harder to not pull the trigger on some kind of platform upgrade.

        • kuttan
        • 3 years ago

        Oops, the crash problem got fixed when I thoroughly cleaned my PC components. The components were all dirty after 6 years of heavy use. An air blower and alcohol did the trick.

    • NeelyCam
    • 3 years ago

    Sooo…. where can we find pure CPU benchmarks?

    • Ummagumma
    • 3 years ago

    Interesting specs.

    I wonder if the “idle power” is achieved by turning off various aspects of the CPU when they are not needed, or by just slowing down the clock rate.

    Turning off parts of a CPU may be a nice idea “in concept”, but I find it can cause unwanted “latency” when used in some server applications.

    FWIW on my systems that are capable of “deeper than C3 states”, I “lock” them in Linux to dropping down to no more than C3 because of unwanted “side effects” of shutting down various aspects of the CPU in order to save power.

    Now it could be a Linux issue or it could be an Intel issue, I dunno, but at least I have a workaround that works for my situation. YMMV
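    (For anyone who wants the same cap without a reboot, newer kernels expose per-state switches through sysfs; a minimal sketch, assuming the standard Linux cpuidle interface and root access; state names and numbering vary by CPU, driver, and kernel:)

    ```python
    import glob

    # Runtime equivalent of capping idle states at C3: walk the cpuidle
    # sysfs tree and disable anything deeper. Assumes the standard Linux
    # cpuidle layout; needs root. The boot-time version is the kernel
    # parameter processor.max_cstate=3 (intel_idle.max_cstate=3 with the
    # intel_idle driver).
    DEEP_PREFIXES = ("C6", "C7", "C8", "C9", "C10")  # names vary by platform

    for state in glob.glob("/sys/devices/system/cpu/cpu*/cpuidle/state*"):
        with open(state + "/name") as f:
            name = f.read().strip()
        if name.startswith(DEEP_PREFIXES):
            with open(state + "/disable", "w") as f:
                f.write("1")  # 1 = kernel will not enter this state
    ```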

      • LostCat
      • 3 years ago

      I believe modern CPUs can switch power states hundreds of times per second so added latency shouldn’t really be an issue, though I admittedly haven’t read up on them in a while.

    • ronch
    • 3 years ago

    Ok, I saw some numbers from other sites as well, and the bottom line, it seems, is that these are ES chips and the numbers are all over the place. There should still be room for improvement, but don’t expect double-digit leaps from ES to final products.

    • ronch
    • 3 years ago

    Those idle power numbers make it clear that AMD intended Zen to be a mobile part first and a high-power part second. That’s a good thing. Isn’t that how Intel was thinking when they did Core 2?

    • ronch
    • 3 years ago

    This is all well and good, but the question is, does FPS linearly go up with clock speed? If it does, then at 4.0GHz (assuming it can reach that high) Zen will presumably edge out the 4790K. But I reckon that’s not always the case, if ever.

    Still, positive news. Thanks, Rory!! We miss you!!
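    (Putting numbers on that: a naive linear extrapolation from the leaked figures, which is an upper bound, since memory and the GPU don’t speed up along with the cores:)

    ```python
    # Naive linear scaling of the Zen ES result with clock speed. This is
    # an upper bound: memory latency and the GPU don't scale with the CPU
    # clock, so real gains would be smaller.
    zen_fps, zen_boost = 58.0, 3.2
    i7_fps = 65.4  # i7-4790 from the leaked table

    for target in (3.6, 4.0):
        projected = zen_fps * target / zen_boost
        print(f"{target} GHz -> ~{projected:.1f} FPS "
              f"({'ahead of' if projected > i7_fps else 'behind'} the i7's {i7_fps})")
    # 3.6 GHz -> ~65.2 FPS (behind the i7's 65.4)
    # 4.0 GHz -> ~72.5 FPS (ahead of the i7's 65.4)
    ```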

    • Unknown-Error
    • 3 years ago

    The supposed Zen 8C/16T ES @ 2.8/3.2 GHz AoS leaks do sound just about right compared with the previous rumors. IFF confirmed, then AMD has indeed hit the IPC gains they claimed, and actually even exceeded them a little, at least when compared with the i7-4790K running at 4.0/4.4 GHz in the AoS benchmark. IPC is not up to Skylake, but then again, other than the rabid fanbois, nobody expected it to be at Skylake or even Broadwell level.

    But, but, but, but…….Just look at the clock speeds. The Zen 4C/8T 65W has the same 2.8/3.2 GHz as the 8C/16T 95W CPU. Can a hypothetical 95W Zen 4C/8T reach 3.2/3.7 GHz, or at least 3.1/3.5 GHz? If not, when it comes to the desktop segment, the IPC gains alone won’t cut it at the high end when going against current Intel offerings. But Zen does look pretty competitive in the server segment. So, the big questions are: can AMD raise frequency? Does the mArch itself prevent higher frequencies? And, last but not least, how $h!++y is Global Flounderies’ fake “14 nm” process?

      • ronch
      • 3 years ago

      Yes the architecture itself can prevent higher frequencies. Not sure why you’d want that though.

      • kuttan
      • 3 years ago

      Engineering sample chips usually have low clocks. The ES version of Intel’s 14nm Skylake-S had a lowish 2.3GHz base and 2.9GHz turbo clock.
      http://wccftech.com/intel-14nm-skylakes-engineering-samples-spotted/

      • ikjadoon
      • 3 years ago

      How is IPC not up to Skylake?! It’s 10% faster per clock than Haswell….that definitely beat Skylake.

    • msx68k
    • 3 years ago

    Please read carefully the result of that benchmark: the i7 @ 3.6GHz is just 24% faster than the i5 @ 3.4GHz.
    Is it possible that an Intel 8-thread CPU would be just 24% faster than a 4-thread one that runs at a lower speed? Obviously not. The difference would be around 100%.
    The answer is very clear: that test was made with all four processors using just four threads. (Furthermore, it is the most honest way to do the test, and shows the real performance difference between the CPUs.)
    In this scenario, and considering the clock speeds, Zen has the highest FPS/GHz of all. Zen’s performance is close to Skylake’s.
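    (The 24% figure does come straight from the leaked table; a one-line check, for the skeptical:)

    ```python
    # Gap between the i7-4790 and i5-4670K in the leaked results. If AotS
    # really fed all 8 of the i7's threads against the i5's 4, the gap
    # should be much larger -- though, as the replies below note, SMT is
    # worth ~30% at best in practice, not +100%.
    i7_fps, i5_fps = 65.4, 52.6
    print(f"i7-4790 leads the i5-4670K by {(i7_fps / i5_fps - 1) * 100:.0f}%")  # ~24%
    ```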

      • tipoo
      • 3 years ago

      Yes, the CPU statistics were pulled from AotS’ GPU mode, which means that CPU cores were capped at 4.

      • synthtel2
      • 3 years ago

      HT isn’t good for a +100% performance boost (or 80 or 70), it’s more like 30% for a good workload, and games don’t tend to be good workloads for it. The extra threads help with utilization of the cores, but utilization isn’t that bad to start with.

      • ronch
      • 3 years ago

      Er, no, SMT doesn’t give Intel’s cores double the performance. That 24% uplift may partly be due to SMT though. Even if the game is using just 4 cores, SMT may help since background tasks may bog the cores less.

      • etana
      • 3 years ago

      “Is it possible that an Intel 8-thread CPU would be just 24% faster than a 4-thread one that runs at a lower speed? Obviously not. The difference would be around 100%.”

      No, real-world performance does not scale like that with increased core count.

      Edit: To clarify, YES, it is possible that an 8-thread CPU would be just 24% faster than a 4-thread one. NO, the difference would NOT be around 100%; i.e., a dual-core CPU is NOT 100% faster than a single-core CPU (except in theoretical/non-existent workloads wherein the software is coded to always have two independent threads of equal importance). In the real world, a 24% increase due simply to core count or HT would actually be high for most workloads.

      • chuckula
      • 3 years ago

      “Is it possible that an Intel 8-thread CPU would be just 24% faster than a 4-thread one that runs at a lower speed? Obviously not. The difference would be around 100%.”

      Lol. No.

    • sophisticles
    • 3 years ago

    As I have said before, Zen will be a major disappointment; 8 cores and 16 threads, and it barely beats an i5-4670K and handily loses to a 4790. Unless AMD prices these wrecks very attractively, I’m talking under the $200 mark, there’s no way I’m even considering a Zen (I currently run a Skylake-based Xeon quad core; I may buy a Zen as a paperweight).

      • Vaughn
      • 3 years ago

      And your post is a total fail because you have formed an opinion on one leaked benchmark.

      Wait until the product is released before claiming the sky is falling.

      Add to that the fact that you’re on a Skylake system; you shouldn’t even be looking at Zen, you don’t need to upgrade!

    • Stochastic
    • 3 years ago

    If AMD somehow comes within 20% of Kaby Lake at similar price points I will be shocked. Heck, I’ll even be stunned if they don’t come within 20% but match Intel in terms of perf/$.

    • puppetworx
    • 3 years ago

    Those clock speeds do not inspire confidence.

      • tipoo
      • 3 years ago

      Bulldozer’s engineering samples were 2.5GHz. We’ll have to see what the launch specs are.

        • derFunkenstein
        • 3 years ago

        Not sure why you got down voted, because I think you’re right. They *may* launch at that speed, but it depends on power consumption and so on.

        I really hope they don’t, because Guru 3D basically showed that they’ll only be about as fast as the current FX line (since they benchmarked against under-clocked FX chips).

        I’m hopeful for 3.8-4.0 GHz. That would make for a pretty fast chip.

      • chuckula
      • 3 years ago

      I would say it depends on the power draw of that sample (which we don’t know). If it’s running at 65 watts and has plenty of clock headroom in final silicon then no problem. If it’s already near the top of its power/thermal range then there could be a problem.

      • flip-mode
      • 3 years ago

      I personally don’t care if it runs at 1 hertz if it performs well.

        • Sargent Duck
        • 3 years ago

        That would be an IPC MONSTER!

          • jihadjoe
          • 3 years ago

          Also a latency monster

            • synthtel2
            • 3 years ago

            Maybe it’s mostly asynchronous?

            • Firestarter
            • 3 years ago

            you want your 120fps to arrive asynchronously?

            • synthtel2
            • 3 years ago

            It already does, if you don’t use vsync. 😉

        • DoomGuy64
        • 3 years ago

          I grew up with 486s, and it really hasn’t mattered much since we hit dual core. If it can beat Haswell at a good price I might consider it, given that AMD chipsets usually last through several upgrades whereas Intel’s do not.

          • Airmantharp
          • 3 years ago

          Uh, what?

          People are still rolling 2000-series Intel CPUs, five years later.

            • DoomGuy64
            • 3 years ago

            Exactly. Has it sunk in yet? Just look at the CPUs in the consoles. We don’t need uber-high-end CPUs to play games. A basic quad core from several generations back is good enough to play every single game out today. Hell, a Phenom II can handle today’s games.

            Anyone who can say with a straight face, “People are still rolling 2000-series Intel CPUs, five years later.” and still not get it is just beyond oblivious to what I was saying. You get it, and yet you don’t.

            • Airmantharp
            • 3 years ago

            I *do* get it. Those CPUs are still good for today’s games, but only just- Phenom II’s were left in the dust before they ever shipped, and have remained there.

            • DoomGuy64
            • 3 years ago

            Lol, so you admit they are good enough while simultaneously insulting their performance as being left in the dust. I get it dude, as I have an i7. All I’m saying here is that if Zen can pull off a price/perf win, I might consider it.

            Also, Phenom IIs OC pretty well and, if done right, are still competitive. I had a Phenom II beforehand, but what was holding it back was the DDR2. I wouldn’t have upgraded if not for that. I may not upgrade again anyway, since my current i7 is good enough. I just don’t like how Intel treats its chipsets as single-use boards; I’d rather get one or two CPU upgrades out of a board, as swapping boards is not only expensive but a huge hassle.

            Zen doesn’t need to beat Intel’s $1000+ workstation CPUs to be a good upgrade. It just needs to beat Intel at the price range it’s being sold at.

            • Airmantharp
            • 3 years ago

            Good enough? At the time, but not inspiring even then, and AMD never got better.

            Point is, the Intel CPUs are *still* good enough, but the AMDs aren’t, and Zen is in a tough spot. AMD literally survives on Intel’s desire to not be on the business end of an antitrust suit.

            Also, I don’t see a problem with Intel upgrading their chipsets generation on generation; some last two, some don’t, but each time they add things that AMD *still* lacks. And since you agree that these CPUs last a long time, thus the boards they’re released with, your argument is hypocritical, if not downright petty.

            • Krogoth
            • 3 years ago

            That’s completely wrong.

            Phenom IIs were beating Intel’s own Yorkfields/Wolfdales, a.k.a. second-gen Core 2 (what was then their current high-end platform), for almost six months, until Intel released their Lynnfield/Bloomfield chips, a.k.a. the first-generation i5/i7 chips. The Lynnfield/Bloomfield stuff was only somewhat faster at stock than Phenom II, but it had plenty of OC headroom. Phenom IIs had almost no OC headroom and were fairly toasty. An OC’ed Lynnfield/Bloomfield system was blazing fast, but gulped down power like candy.

            Sandy Bridge didn’t come out until almost two years later. It solved the power consumption issue without sacrificing performance, and AMD’s answer to it was the lackluster Bulldozer family.

            • DoomGuy64
            • 3 years ago

            This. The only exception I would make is that Phenom IIs did OC well, to around 4GHz, and the tri-cores were unlockable to quads. The original Phenom was the disappointment, not the II. It was a really good value at the time, and memory/bus speeds were more of a bottleneck than the CPU itself. Putting in high-speed DDR and OCing the HyperTransport would often give the system a decent performance increase. DDR2 was holding back my system, and it was a better choice at the time to switch to Intel than bother with an outdated platform.

            I probably would never have upgraded if I had a DDR3 board. Those systems are still fairly capable with today’s games. Bulldozer was a waste of money, so it was either max out your OC or upgrade to Intel. I upgraded to Intel, since DDR2 was hitting a wall and it certainly wasn’t worth “upgrading” to Bulldozer.

            • f0d
            • 3 years ago

            Looking at TR’s benchmarks here
            [url<]https://techreport.com/review/16147/amd-phenom-ii-processors/4[/url<] the Phenom II is getting beaten in just about every test by some kind of Core 2.

            • Krogoth
            • 3 years ago

            Actually, it is a dead heat between the Phenom IIs and Core 2 chips in that test. The Lynnfields/Bloomfields are the only chips that are faster. And that’s just about the worst case for the Phenom II.

            Phenom IIs were faster than Yorkfield/Wolfdale Core 2s at stock in a large number of games and general computing applications. The only advantage Core 2 had over Phenom II was overclocking headroom, which would let them edge ahead a bit, but it’s not enough to catch up with Lynnfield/Bloomfield at stock.

            • CaptTomato
            • 3 years ago

            Not at peak settings; for that you need a powerful MC/MT CPU, fast RAM, and a grunty GPU. You can’t max out a 1080 with an old CPU/RAM combo.

            • synthtel2
            • 3 years ago

            That depends on the game. Some games have settings that are a big deal for CPU load (lots of open-world stuff, in particular), and for some it hardly matters at all (Quantum Break immediately comes to mind as having an architecture like that, and Ubisoft’s [url=http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf<]latest engine tech[/url<] would definitely qualify if they used it like a normal engine). Overall, game settings tend to have much more effect on GPU load than CPU load. [quote<]you can't max out a 1080 with an old CPU/ram combo.[/quote<] You could max out a Titan X on a Core 2 Duo if you asked it to render enough pixels. The framerate won't be good, but the Titan X will be fully utilized. For a more reasonable case, a 1080 at 4K on modern games wouldn't be slowed down much by a 2500k.

            • CaptTomato
            • 3 years ago

            [quote<]For a more reasonable case, a 1080 at 4K on modern games wouldn't be slowed down much by a 2500k.[/quote<] NOPE… I'll see if I can find the guy doing the benchmarks that prove the opposite, granted we're interested in playable framerates, not just max throughput.

            • CaptTomato
            • 3 years ago

            [url<]https://www.youtube.com/watch?v=zOGdWct6qtc[/url<] Here's the video, but he's more of a 120FPS dude, though it still points at the danger of bottlenecks.

            • synthtel2
            • 3 years ago

            He’s a 120 fps dude. You said it yourself, and he said himself that the video isn’t relevant for the 60 fps crowd. I personally wish that the market were more oriented towards high framerates, but most people consider 60 (or 45, or occasionally even 30) playable.

            Where exactly CPU and RAM stuff starts holding you back is very dependent on the game, but tends to be 90+ fps for something decent. There’s good reason for this – game devs know people are going to be pissed if there’s no way to get a game to reliably stick at 60 Hz vsync on moderately fast rigs, so a CPU someone thinks of as good needs to be good for at least that. Also, it’s probably gotta do 30 fps on consoles, and those CPUs are terribad.

            CPU performance of a game tends to be relatively fixed with regards to settings. The big exception is stuff that generates more draw calls. Usually the relevant in-game settings are draw distance, occasional geometry/clutter settings, usually shadows, and occasionally reflections (the latter two because they involve redrawing everything multiple times from different perspectives). In a game with small-ish environments like Quantum Break or Doom, this stuff generally doesn’t have to take so much time anyway, and it’s not a big deal for the CPU to handle. It’s when you get to stuff like GTA V and Witcher 3 that it actually becomes a big deal, but if the devs are decent, they pay good attention to draw call optimizations in that sort of game and we the end users never hear about it (or have reason to care, mostly even including situations like this). When the devs mess up, we see stuff like AC:Unity, not a game that’s simply more CPU-sensitive than most.

            The amount of load a game can put on a GPU, in contrast, is scalable pretty much indefinitely. At some point on the low end, you’ll stop seeing improvements because the CPU is the limiting factor, and at some point on the high end, you’ll stop seeing improvements because the improvements just aren’t visible anymore, but even ignoring the algorithms themselves, you can pretty much always make noteworthy changes load-wise with “what if we tried [less/more] [polygons/pixels/lights/texels/voxels/froxels/angles/samples/whatever]?” Game devs just condense this into low/medium/high/ultra because gamers don’t and shouldn’t have to know all about how rendering works.

            [quote<]NOPE....I'll see if I can find the guy doing the benchmarks that prove the opposite, granted we're interested in playable framerates, not just max thru put.[/quote<] Nope the what now? Impolite, that is, especially if you're not actually going to have a go at proving me wrong. 😉 Certainly neither of you did so, and your statement before implied you were concerned with the value proposition of upgrading your GPU (hence utilization) and not any particular framerate. I called out 4K specifically, a resolution at which a 1080 can generally push 40-60 fps in the latest games - enough to pass as playable for most people. Modern i5s with blah RAM are good for 80+ fps no problem, and a 2500k really isn't that much worse. In this case, CPU framerate is usually quite a ways above GPU framerate, and it's not exactly a contrived scenario. So we're back to.... [quote<]you can't max out a 1080 with an old CPU/ram combo.[/quote<] Looks to me like you can max out (if we mean fully utilize) a 1080 with a 2500k / DDR3-1600 pretty easily.

            • CaptTomato
            • 3 years ago

            You have a limited interpretation of “maxing” something out, so I don’t think my information is unworthy; it’s something any high-FPS dude should be aware of, otherwise they basically overspend on the GPU, and they’d presumably get crappy SLI/XF scaling if they tried to double up.

            • synthtel2
            • 3 years ago

            At least I’ve fully explained my interpretation of maxing out a card. I’m still left with a pretty fuzzy concept of what yours is. AFAICT, it’s something about being able to utilize the GPU fully even when the CPU is also utilized fully, but that doesn’t make sense – (a) most game architectures won’t let you get too close to full CPU and GPU utilization at the same time even with everything else perfect, (b) the correct balance of hardware for such a setup will be different for each game, unless you’re taking time to hand-tune each one to meet this ideal rather than to run/look best, and (c) for most people right now, targeting that ideal is pretty unhelpful in part selection. Since what I’ve gathered of it probably isn’t it, would you explain your definition?

            Your information is perfectly worthy when applied correctly. If you’re a 120 fps dude, you should be directing more of your budget to CPU/RAM and less to GPU. There is no argument there. Your source’s testing methods work pretty well for seeing if a GPU upgrade is worthwhile. That’s about as far as it goes, though. You have yet to convince me that either he or you actually knows what you’re talking about with regards to how the hardware handles things and why you see the results you do.

            • CaptTomato
            • 3 years ago

            Weird person.

            For whatever reason you’ve decided to play dumb with me; however, it’s all been explained. In fact, you explained it in your last post.

            • synthtel2
            • 3 years ago

            The thing that I pointed out as not making sense is your position, and you’re sticking with it with no further explanation? I’d call you the weird person. “Playing dumb” is because your position makes *so* little sense that I have trouble believing that’s actually it.

            [sub<]In fact, I'm pretty sure you're just trolling now and I'm not sure why I'm bothering.[/sub<]

            • CaptTomato
            • 3 years ago

            You’re telling me that at a time when the best LCDs are 144Hz+, it’s not important to be aware that midrange CPU/RAM combos will bottleneck the GPU?

            • synthtel2
            • 3 years ago

            I’ve specifically said that if you’re an HFR gamer, CPU/RAM matters a lot. Retconning stuff within a single thread doesn’t work. Retconning stuff at me only works so long as I think your reading comprehension is just that bad (I’ve unfortunately got practice countering meatspace retcons). Retconning done this badly tends to not work in general. IOW, your trollface needs work.

            • CaptTomato
            • 3 years ago

            WTF is wrong with you mate??????????

            • synthtel2
            • 3 years ago

            You too, bro.

            • maxxcool
            • 3 years ago

            Except both current consoles struggle to render at 30/60 fps consistently.

            • BurntMyBacon
            • 3 years ago

            Putting aside whether he’s right or wrong, or even if it is relevant:
            [quote="DoomGuy64"<]being that AMD chipsets usually last through several upgrades whereas Intel does not.[/quote<] He said chipset.

            [quote="Airmantharp"<]People are still rolling 2000-series Intel CPUs, five years later.[/quote<] You talk about a CPU.

            Point of interest: the Sandy Bridge chip you reference launched on Socket 1155. That socket housed two generations of chipsets. Z68 et al. launched with Sandy Bridge; Z77 et al. launched with Ivy Bridge. Afterwards, Intel moved on to Socket 1150, which also lasted through two generations of CPUs (Haswell and Broadwell) and chipsets.

            There is a degree of compatibility between socket AM3 processors and AM2/AM2+ motherboards. The opposite partial compatibility can be found between socket FM2 processors and FM2+ motherboards. Socket AM3+ motherboards and CPUs are (with a proper BIOS update) fully interchangeable with their AM3 counterparts.

            Of course, the relevance is debatable when you have to go back to at least Sandy Bridge on the Intel side to find a decent comparison.

          • Kretschmer
          • 3 years ago

          AMD’s platforms are not their x86 strong suit.

            • DoomGuy64
            • 3 years ago

            So? The GPU is vastly more important for games. Look at the CPUs in the consoles. We’ve hit a point where any CPU is good enough, and price/perf is now more important.

            Zen isn’t Bulldozer. If it can beat my Haswell i7 and has some OC headroom at a decent price, I might consider it for my next upgrade. Intel is price-gouging the high end, and gamers don’t need workstation-grade CPUs to play games. Also, a lot of stuff that was highly CPU-centric back in the day, like transcoding, can now be done on the GPU. It’s more about what the individual’s needs are, and not the brand.

            Sure, Intel can probably beat Zen with their high-end workstation processors. I just won’t be shelling out for one, so whatever does best in my price range will be what I buy next.

            • RAGEPRO
            • 3 years ago

            If AMD wants to compete they need good single-threaded performance. It’s really the only CPU performance characteristic that actually matters. Good single-threaded CPU performance is still critical for games, and the Playstation 4 and Xbox One are sharply handicapped by their weak CPU performance. Check a few articles:

            Over at PCPer an article titled “[url=http://www.pcper.com/news/General-Tech/Sony-PS4-and-Microsoft-Xbox-One-Already-Hitting-Performance-Wall<]PS4 and Xbox One already hitting performance wall[/url<]" explains how even with extensive multi-threading the renderer needs half the CPU performance just for itself. That's from last year, even. [url=http://www.eurogamer.net/articles/digitalfoundry-2016-dark-souls-3-face-off<]Dark Souls 3 struggles to hit 30 FPS on the consoles[/url<], and it's not because of their GPUs. [url=http://www.techspot.com/review/1162-dark-souls-3-benchmarks/page2.html<]The Radeon R7 370 can pull off 31 FPS minimum[/url<] while handicapped by a full OS and only 2GB of video memory, and that's on max settings. The console versions are closer to 'medium', and the Xbox One runs in 1600x900. Also from Eurogamer, [url=http://www.eurogamer.net/articles/digitalfoundry-2016-xbox-one-project-scorpio-spec-analysis<]some spec analysis of the Project Scorpio[/url<] which says the gap between CPU and GPU is huge, yet only going to get wider. Of course, [url=https://techreport.com/news/27168/assassin-creed-unity-is-too-much-for-console-cpus<]TR has also posted about it.[/url<] The thing is, while there's a lot of massively parallel stuff going on in a video game (graphics, obviously, but also physics and to some degree AI and audio processing), the main game thread is always going to be nasty, branchy stuff that doesn't multi-thread well because games rely on constant user input. As a result there's simply never going to be a time when a machine that completely lacks a solid, fast, single-threaded processor is going to be a really good game machine. [url=https://en.wikipedia.org/wiki/Amdahl%27s_law<]It just isn't ever going to happen[/url<]. Just picture ol' crazy Mr. Ballmer up on stage shouting "SINGLE-THREAD, SINGLE-THREAD, SINGLE-THREAD, SINGLE-THREAD."
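
            For anyone who wants to put numbers to that, here's a minimal sketch of Amdahl's law in Python. The 60% parallel fraction is an illustrative assumption, not a measurement of any real game:

                # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
                # where p is the parallelizable fraction and n is the core count.
                def amdahl_speedup(p: float, n: int) -> float:
                    return 1.0 / ((1.0 - p) + p / n)

                # Hypothetical game where 60% of frame time parallelizes cleanly:
                for cores in (2, 4, 8, 16):
                    print(f"{cores} cores: {amdahl_speedup(0.6, cores):.2f}x")

                # 2 cores: 1.43x, 4: 1.82x, 8: 2.11x, 16: 2.29x.
                # The serial 40% caps the speedup at 2.5x, no matter how many cores you add.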

            • DoomGuy64
            • 3 years ago

            I never said consoles had great CPU performance. However, they are clearly [i<]good enough[/i<] for games, and the GPU is a bigger bottleneck than the CPU is. Also, both the Phenom II and Bulldozer are faster CPU architectures than the console CPUs. It's a non-argument to complain about the consoles, because: A: they're not available on the PC, and the PC CPUs are superior; B: they're clearly good enough to play games, and the GPU is the real bottleneck.

            Pointing out that the 370 gets 31 fps in DS3 is also a non-argument. 31 fps still sucks, and that's on a card with its own dedicated memory, while the consoles share memory bandwidth. Neither console could get 60 fps with the same GPU even if paired with an i7. Just not gonna happen, unless you drop the resolution. There is absolutely no basis to claim slower AMD CPUs are incapable of playing games, because they clearly are capable. It's merely a question of price/perf when it comes down to it, and AMD does pretty well in that category.

            I don't think Zen is going to be a $1000 Intel workstation CPU killer. But it could be a good repeat of the Phenom II, where AMD finally has a decent CPU at a decent price, which could easily sway budget-minded consumers. So far, it looks like Zen will do just that.

            • tipoo
            • 3 years ago

            That’s part of why I’m not hyped about the iterative consoles – Neo seems to still use Jaguar, and Scorpio likely does too, so they doubled and quadrupled down on GPU performance but left the sore spot of the CPU barely touched, with a 20% upclock.

            People are hoping the revisions allow for 60fps game patches – but I don’t think they realize that would mean the games were leaving over 30% of these weak CPUs untouched right now. I find that very unlikely for most AAA games.

            I’d have taken modestly weaker GPU updates for moving to a stronger line of CPUs, even waited for Zen.

            But then the gap might get so large that it becomes essentially a conventional generation, not a X.5, with the new ones able to do too much more.

    • Tristan
    • 3 years ago

    Zen looks like a winner. Together with Polaris, it will allow AMD to recover and beat NV and Intel.
    Great plans, great execution.

      • Firestarter
      • 3 years ago

      I’d love for that to be true if only for more competition in both the CPU and GPU markets, but everything so far indicates that comments like yours are wildly optimistic bordering on the delusional

      • Kretschmer
      • 3 years ago

      Polaris is out, and it’s…low-to-midrange. Zen is very unlikely to press Intel. Looking to AMD for execution is like looking to Elvis for restraint.

      • Jigar
      • 3 years ago

      Good try on the sarcasm, you still have a long way to go.

    • tipoo
    • 3 years ago

    Important to note since this is going around a lot: the CPU statistics were pulled from AotS’ GPU mode, which means that CPU cores were capped at 4. This is not a matter of Zen needing more cores to beat Intel Haswell performance. Both are using 4 here.

    So things aren’t nearly as grim as 8 engaged cores trading blows with 4.

      • DrDominodog51
      • 3 years ago

      Wouldn’t this have higher IPC (in this particular workload) than Haswell, because an equal number of cores was being used and it was clocked lower than the Haswell it beat? Or am I missing something here? I feel like I’m missing something, because it would be a miracle if this even matched Haswell.

        • tipoo
        • 3 years ago

        I was looking at the Guru3D charts, which seem to contradict the WCCF ones

        [url<]http://www.guru3d.com/news-story/amd-zen-engineering-sample-aos-further-analysis.html[/url<]

        • Vaughn
        • 3 years ago

        From some of the posts that I’ve seen on the net:

        This benchmark was running in GPU mode, which caps it to 4 threads.

      • Redocbew
      • 3 years ago

      What is the purpose of “GPU mode” supposed to be? I mean, why is it even there? This game is looking more and more like an oddball that’s been propped up for show and tell all the time.

      • LocalCitizen
      • 3 years ago

      I completely disagree with the 4-core cap theory.
      Look at the i7 at 65.4 fps and the i5 at 52.6 fps: a 24.3% difference.
      The CPU clocks are 3.6/4.0 GHz for the i7 and 3.4/3.8 GHz for the i5: a 5-6% difference.
      How do you explain the extra ~18% performance if the extra CPU threads are not used?
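
      For readers following along, the arithmetic behind that question works out like this (a sketch using the leaked figures; the residual is only suggestive of SMT and cache effects):

          # i7-4790 vs i5-4670K in the leaked AotS numbers.
          i7_fps, i5_fps = 65.4, 52.6
          i7_clk, i5_clk = 3.6, 3.4

          fps_gap = i7_fps / i5_fps - 1      # ~24.3% faster
          clk_gap = i7_clk / i5_clk - 1      # ~5.9% higher base clock

          # Advantage left over once clock speed is factored out:
          residual = (1 + fps_gap) / (1 + clk_gap) - 1
          print(f"fps gap {fps_gap:.1%}, clock gap {clk_gap:.1%}, residual {residual:.1%}")
          # ~17.4% remains for SMT, the extra 2MB of L3, and run-to-run noise.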

        • RAGEPRO
        • 3 years ago

        The i7 also has more cache, and hyper-threading can help prevent system tasks (or secondary threads related to the game but not part of the game’s code, like, say, a driver) from interfering with game performance. Hyper-threading is a big benefit even if your software only uses 4 threads.

          • Redocbew
          • 3 years ago

          Yeah, let’s not forget that there’s usually a few hundred threads across the system in various states of execution at any point in time. Most of them are asleep until the scheduler picks them up again, so even if one particular app isn’t using all the hardware threads available to the OS you can be sure something else is.

            • LocalCitizen
            • 3 years ago

            18%!!
            A question for the two of you: does 2MB of cache speed up frame rates by 18%?
            Do the hundreds of background jobs slow down the system by 18%?
            Even if that’s the case, shouldn’t the Zen ES also benefit from its extra cores?
            Should we reduce the Zen fps by 18% (49 fps) and call it the performance of a 4C/8T Zen? Might as well stick with Bulldozer then.

            • RAGEPRO
            • 3 years ago

            I think you’re being a little excitable, and quick to jump to conclusions.

            We’re dealing with a static number here (4 threads) that doesn’t change based on the hardware. Assuming the game uses 4 threads, and there’s enough load elsewhere to occupy a few more threads (thus allowing hyper-threading to do its magic), that still leaves half of the Zen processor idle. In other words, a 4C/8T Zen should in theory provide very, very similar performance to this.

            And yeah, a 33% increase in L3 cache size is a pretty big deal for games. 🙂

            • LocalCitizen
            • 3 years ago

            Yeah, I’ll tone it down a bit.

            Also, multithreading never scales perfectly, so the software difference between 8C/16T and 4C/8T is smaller than the difference between 4C/8T and 4C/4T.

            I would just like to point out that this Zen ES in the leak is not very impressive… it’s bad news, actually.

            Of course, I know very well it’s ridiculous to speculate on CPU performance using one questionable game leak, given that we don’t know: final GHz/turbo, core count, cache size/type, watts, new instruction sets, number of memory channels, etc., etc., and the price!

            • Waco
            • 3 years ago

            I don’t see what bad news you see here. There’s almost no way to interpret these results in anything but a favorable way for AMD. Zen being fast enough to compete with Haswell is all AMD needs, and it seems like they’re within spitting distance with an ES chip, at least in this game.

            What doom and gloom do you see?

            • Redocbew
            • 3 years ago

            The point about hyperthreading is that there’s more to this than just how many threads are involved. Even in a well threaded application there are limits to how much parallelism can be extracted from a single thread. That’s why SMT was created in the first place.

            Do you know where that 18% is coming from? I don’t, and I don’t know anything about Zen’s implementation of SMT either. Do you? What I do know is that someone ran a GPU benchmark on a prototype CPU with a brand new GPU and got weird numbers, but it can’t be that simple, right?

            • ronch
            • 3 years ago

            Exactly. When people see these ‘capped at x cores’ claims, they automatically read the benchmark as though the game is the only thing running on the system.

            • LocalCitizen
            • 3 years ago

            Exactly. This Zen ES 8C/16T has about 10% more total performance than an i5 Haswell 4C/4T.
            Don’t get too excited.

      • chuckula
      • 3 years ago

      Can somebody who has benchmarked AotS describe the different benchmark “modes” that are purportedly in play here that artificially limit a CPU to 4 cores?

      I have seen a crapton of AotS benchmarks splashed around, but I’ve never seen a disclaimer that the benchmark was run in some sort of special “mode” that caps the CPU. There’s only ever been one benchmark.

      If the 4-core cap is true, then how do you explain the hyperthreading results of a 4790 (not a 4790[b<]K[/b<], just a regular 4790) that has less than a 5% clockspeed advantage over the 4670K and the exact same number of physical cores (but hyperthreading) walloping the 4670K in those exact same benchmarks?

      • DPete27
      • 3 years ago

      If that’s the case, then how does the i7-4790 outperform the i5-4670K by a staggering 24% while only being clocked 5% higher in both base and boost clocks?

    • tipoo
    • 3 years ago

    Isn’t WCCFtech of questionable reputation?

      • brucethemoose
      • 3 years ago

      Questionable at best. They’re responsible for quite a few false rumors.

      • NovusBogus
      • 3 years ago

      Yup–whenever I see them pop up in Shortbread I read the name as WTFtech; it’s more accurate. What we have here is a context-free statistic from an unreliable narrator, from a single game that’s known to be wonky about core count…nothing matters until Zen finds its way into the hands of known, independent reviewers.

        • tipoo
        • 3 years ago

        Lol I think I’ve been mentally reading it that way for a while

      • ImSpartacus
      • 3 years ago

      I think they turn out a lot of rumors and are bound to have a percentage of them be incorrect. I generally interpret their rumors as a fun “what if” rather than a strict “this is the future.”

    • EndlessWaves
    • 3 years ago

    [quote<]Assuming any of the results check out, this bodes well enough for Zen[/quote<] Really? I thought they were pretty disastrous results for any chance of it being adopted in the game market. Eight cores with SMT can't even keep up with a Haswell 4+SMT in a benchmark where Intel's 8-core chips can enjoy a 30-40% advantage over the 4-core ones. That implies that in the majority of games, which can only use four cores, these models might be competitive with an i3.

    In fact, I'm struggling to believe they're that bad. If these things have got 8 FPUs, then wouldn't that mean worse performance per core than the FX series?

      • brucethemoose
      • 3 years ago

      No, it’s still better than the FX CPUs as the G3D article shows.

      People tend to forget it these days, but the FX CPUs were [i<]that[/i<] bad.

        • tipoo
        • 3 years ago

        It’s better than FX, but again as the G3D article shows, it’s not even halfway to bridging the gap from FX to Haswell IPC on the same core count/clock speed.

        Yikes. Frig.

          • brucethemoose
          • 3 years ago

          It’s a big gap too.

          Zen could be designed as a smaller core than Skylake or Bulldozer, which means 8 of them on a die would be pretty cheap… But we’ll have to wait for die shots and see.

          • Meadows
          • 3 years ago

          For what it’s worth, they could still tweak things this year, and it’s likely part of the reason why the CPU was pushed to 2017; major changes are obviously unlikely, though.

          If the final product clocks as high as the current FX line and AMD charges an appropriate price, I’d still consider one.

      • tipoo
      • 3 years ago

      The CPU statistics were pulled from AotS’ GPU mode, which means that CPU cores were capped at 4. This is not a matter of Zen needing more cores to beat Intel Haswell performance.

        • brucethemoose
        • 3 years ago

        EDIT: nvm I see now.

      • xeridea
      • 3 years ago

      These are engineering samples, so the clocks will likely go up before launch. The early samples of Bulldozer were clocked at 2.5GHz, IIRC. Even so, they put in a decent showing here. They don’t have to outright beat Intel, just deliver a massive performance increase over the BD line and come within reasonable distance of Intel chips at reasonable power.

        • terranup16
        • 3 years ago

        That’s really the thing, honestly. I don’t want to go anywhere near Bulldozer chips right now, in large part because of price/power/perf. It’s not even the price impact of the power, either. It’s just that I have an i7-950, and if I’m not going to a new HEDT platform, then I may as well make my room a little cooler (for as long as I have this GTX 750 Ti, my CPU is absolutely the biggest heat producer in my rig right now).

        Zen promises to match Intel’s mainstream TDP, so if it can pull performance into reasonable territory there and shave some price, then we’re doing pretty well for a potential purchase.

        If these four-core-capped, clock-for-clock results hold even close to true, then Zen may even be a performance coup in the long run, as DX12 and Vulkan leave more opportunity for games to pull in more than four cores.

      • peaceandflowers
      • 3 years ago

      Doesn’t the CPU load for games mostly consist of INT stuff? That would make sense: as far as I’m aware, BD’s FPUs are reasonably beefy – there aren’t many of them, but enough to handle the number of threads legacy games would throw at them. It’s more constrained in its integer performance, as far as I’m aware.

      As for competitiveness, well, there are plenty of gamers that use i3s – they can be quite good, with their high frequencies. So it mostly becomes a matter of price (as always), and I find it quite likely that AMD would go for a somewhat simpler design than modern Intel cores. In fact, there’s hearsay that that’s the philosophy behind Zen: focus on what’s good now, and leave out what might only become relevant to the average user several years from now (i.e., very wide FPU paths). On top of that, I’d guess they aim a little lower than Intel in general. Then it’s just a question of how much it costs AMD to make these chips; no clue there…

      Anyway, they’re still engineering samples, and the competition of its time is also not yet assessed, so… We’ll see 🙂

      • flip-mode
      • 3 years ago

      Why worry about core counts and clock speeds so much? What actually matters when you pick a CPU is price, delivered performance, and power consumption. If it had 30 cores running at 13 hertz but delivered the same performance at similar power as Intel’s 4-core, 4GHz CPU, then you’d have two totally different but equal products.

        • terranup16
        • 3 years ago

        Short of using a VISC or something to that effect, core count will dictate how much of the CPU’s total deliverable performance is realizable in certain applications. For example, if I’m just running HAProxy or Nginx, I usually want a CPU with a small number of cores but extremely high frequency and IPC. If I’m running SSL termination in that setup, though, I want to consider solutions that offer more cores while sustaining good frequency and IPC.

        Alternatively, look at GPUs: a single beefy GPU tends to realize its maximum possible performance more consistently than a dual-GPU solution. You can complain that’s because of Crossfire/SLI/game developers, but CPU cores are likewise at the mercy of software developers.

        • tipoo
        • 3 years ago

        I’d agree on clock speed, but not core count. Even with good software scaling, getting the same performance from 4 cores is far preferable to 30 (Amdahl’s law, yada yada), and most games are far from perfect scalers even with DX12/Vulkan.

      • BaronMatrix
      • 3 years ago

      It depends on how math works in your mind… Zen here is an ES clocked lower than those chips and running on a pre-production platform… It’s approximately 30% lower-clocked than the FX and leads it by 38%… It’s about 22% lower-clocked than the i7-4790 and just over 10% behind…

      Not sure how an i3 works into this, unless you insist that an i7 isn’t worth it either.
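
      Those ratios check out against the table up top; here’s a quick sketch of the arithmetic (base clocks only, so it’s rough):

          # Zen ES 1D deltas vs. the FX-8350 and i7-4790 (base clocks, leaked fps).
          zen_fps, zen_clk = 58.0, 2.8

          print(f"vs FX-8350: clock {1 - zen_clk / 4.0:.0%} lower, fps {zen_fps / 42.0 - 1:.0%} ahead")
          print(f"vs i7-4790: clock {1 - zen_clk / 3.6:.0%} lower, fps {1 - zen_fps / 65.4:.0%} behind")

          # vs FX-8350: clock 30% lower, fps 38% ahead
          # vs i7-4790: clock 22% lower, fps 11% behind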

      • ikjadoon
      • 3 years ago

      Nobody cares about how many cores it has, though… if it is faster than the i5-4670K, people will buy it (if the price is right).

      Read the performance charts, not the specification sheet.

    • tipoo
    • 3 years ago

    At only 2.8-3.2GHz, that indeed bodes quite well, assuming they can ramp up the clock without a horrible loss in efficiency and without having to ship crazy-wattage CPUs again.

      • brucethemoose
      • 3 years ago

      Not really. Remember, this is 1/2 the core count of the Intel chip.

      People were expecting parity with Haswell, but this puts it WAY below that.

      • Geonerd
      • 3 years ago

        I’d gladly buy a 125W-TDP Zen if it were otherwise fully competitive. Even if I ran my computer flat-out 24/7 (which I never seem to do), another 30W is nothing to lose sleep over.

        • FuturePastNow
        • 3 years ago

        For computer enthusiasts, higher power usage is acceptable if price and performance are competitive. Apart from a tiny niche market of people who build very small PCs, as you say, an extra 30W isn’t something to worry about.

        It’s a different story for server users, though.

        • tipoo
        • 3 years ago

        Well sure, though the lower wattage Zen can run at with good performance, the better chance it has of turning AMD around. Lower power = more semicustom wins, laptops, etc.

    • chuckula
    • 3 years ago

    [quote<]With the recent graphics card releases more or less out of the way, it's now time to turn our collective attention to everything Zen.[/quote<] I don't think so.

      • mnemonick
      • 3 years ago

      I don’t always see…

      But when I do, it’s what you did there.

      😀

      • RAGEPRO
      • 3 years ago

      C’mon, this song isn’t that old that you guys don’t get it. It only came out like 12 … uh … twenty-two years ago… aw man.

        • JustAnEngineer
        • 3 years ago

        Here’s one from February 1988:
        [url<]https://www.youtube.com/watch?v=bTH2cz96JWA&t=27m37s[/url<]

      • ImSpartacus
      • 3 years ago

      No kidding. We’ve got big Vega, little Vega, gp107, and non-titan gp102. And that’s just from now til early-mid 2017. Then we start looking towards the gen after that.

        • RAGEPRO
        • 3 years ago

        It was a joke reference to [url=http://www.azlyrics.com/lyrics/bush/everythingzen.html<]the Bush song[/url<] linked in the beginning of the post. 🙂

    • chuckula
    • 3 years ago

    This is a related article from Guru3D: [url<]http://www.guru3d.com/news-story/amd-zen-engineering-sample-aos-further-analysis.html[/url<]

      • brucethemoose
      • 3 years ago

        This is a lot more relevant than the WCCF article, as it’s pitting an 8C/16T CPU against Zen at the same clockspeed.

        • ronch
        • 3 years ago

          What do you expect from WCCF? I stopped visiting them after a writer there reacted so strongly with ***expletives*** against an article commenter who suggested shady things may be going on with regards to who they give favorable reviews to. What a bunch of *icks. As though they’re above and beyond such things. Hint: no one is.

          • Prestige Worldwide
          • 3 years ago

          WCCF is pure trash. Nothing but clickbait, reposts of other publications’ articles, tabloid junk, and flame bait to fire up their comments section, because it gets them page views. They will post absolutely anything as long as it gets them ad hits.

      • tipoo
      • 3 years ago

      Yikes. So at the same core count and clock speed it’s not even reaching Haswell….

        • derFunkenstein
        • 3 years ago

        Yeah, that’s pretty sobering, but depending on pricing they could still make it attractive. Probably not what they were hoping for, though.

        • anotherengineer
        • 3 years ago

        But I don’t think anyone ever claimed that? IIRC, from AMD’s own IPC estimates (40% over Bulldozer), I thought it would put Zen around Sandy Bridge IPC??

          • tipoo
          • 3 years ago

          Looking at the Guru3D slides, I’m pretty sure the Sandy Bridge-to-Haswell gap was nowhere near that big.

      • kalelovil
      • 3 years ago

      1. Shouldn’t they have used 2.8GHz as their point of comparison? If Zen is like other modern multi-core CPUs, it won’t reach its boost clock when heavily loaded on all cores.

      2. I would have liked to see them test the 5960X with dual-channel memory, to see if that is a significant part of the difference in AotS.

      3. The 5960X also has more per-core L3 cache than Intel’s consumer (i5/midrange i7) or low-end server (Xeon D) products do.
