Take a sneak peek at our Core i9-7960X and Core i9-7980XE results

The embargo for performance results of Intel's Core i9-7960X and Core i9-7980XE lifts this morning. Right now, in fact. I'd have our full review for you, but other things kind of got in the way, and let's be real: you're not going to read thousands of words about CPUs at two in the morning. Here's a sneak peek at some of the results we'll be talking about soon:

In short: impressive and a bit weird all at once. Stay tuned as I put the finishing touches on my article. If you're suitably impressed by the chart above, the Core i9-7940X, Core i9-7960X, and Core i9-7980XE should be available at e-tail today.

Comments closed
    • Unknown-Error
    • 2 years ago

    7960X and especially 7980XE just look dumb. The real game from Intel is in the 7920X, 7900X, 7820X, and 7800X (and maybe the 7940X). These provide great all-round performance for the price tag, and unlike Threadripper (despite it being excellent value for money), they don’t force you through that idiotic Game Mode/Creator Mode nonsense. Straight out of the box, the 7900X will give you great gaming and non-gaming performance. If you need a lot of cores in your profession, then the $999 1950X is great value. I really don’t know what the 7960X and 7980XE are there for.

      • Krogoth
      • 2 years ago

      They are just “panicked” responses to Threadripper (blame the marketing guys). The 7920X easily holds the line against the 1950X in the majority of workstation loads and pulls ahead if AVX is king.

      Intel could get by fine with 7920X as top of the line, but their marketing drones “insist” they need to keep up appearances in the “core count” race.

      As a result, the 7960X and 7980XE are just minor improvements over the 7920X in most HEDT-tier workloads. The extra cores are not worth the reduction in clockspeed when fully loaded (assuming you are running them at stock). HEDT applications don’t scale that well beyond 8 threads, and there’s more resource contention going on in the CPU.

      The 1950X is in a similar boat relative to the 1920X.

    • brucethemoose
    • 2 years ago

    I hope this is the start of a glorious arms race to filter server chips down to consumers.

    I’m not sure what I’d do with an (unlocked) 28+ core CPU… but I want one.

    • chuckula
    • 2 years ago

    Der8auer [url=https://www.youtube.com/watch?v=rEdXayoA1Es]using the 7980XE[/url]. Oh, and there's a Titan XP in there too, because why not, apparently. For entertainment purposes only, although viewer discretion is advised if you get squeamish watching expensive silicon be abused.

    • ronch
    • 2 years ago

    Q: Based on the plot, how would you describe these new chips?

    A: RIGHTEOUS!!!

    • Goty
    • 2 years ago

    30% more performance for 100% more money! Such a deal!

      • Klimax
      • 2 years ago

      At those target workloads? Yes, because you very quickly get into territory where hours to days of total runtime are saved.

        • Goty
        • 2 years ago

        Anyone with workloads that run for days to weeks (which is what the savings you suggest would require) isn’t going to be using this platform to do it, due to the lack of support for mission-critical features like ECC. Even then, I could just go buy an Epyc 7401, get performance in the same ballpark, and save close to $200.

    • jts888
    • 2 years ago

    Jeff, have you noticed anything unusual in the demo sample configurations?

    Heise.de found that their CPU’s power limit was effectively uncapped (1023 7/8 A and 4095 7/8 W) for instantaneous loads. People at RWT are already starting to grumble about Intel trying to tip the review scales with parts that would probably burn themselves out fairly quickly in actual, longer-term consumer use.
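
    (For scale: those “uncapped” figures look like exactly what you get when the relevant limit fields are simply programmed to all ones. A minimal sketch of that arithmetic, assuming the common 1/8 W and 1/8 A register units; the 15-bit/13-bit field widths are my assumption, not something from the Heise report.)

[code]
# Hypothetical decode of an "uncapped" power/current limit.
# Assumes RAPL-style units of 1/8 W and 1/8 A, with a 15-bit power
# field and a 13-bit current field programmed to all ones.
POWER_UNIT_W = 1 / 8
CURRENT_UNIT_A = 1 / 8

max_power_w = (2**15 - 1) * POWER_UNIT_W       # 4095.875 W ("4095 7/8 W")
max_current_a = (2**13 - 1) * CURRENT_UNIT_A   # 1023.875 A ("1023 7/8 A")

print(max_power_w, "W /", max_current_a, "A")
[/code]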

      • chuckula
      • 2 years ago

      Motherboards already limit the power delivered to the CPU so calling a CPU “uncapped” doesn’t make much sense.

      In fact, THG went out of its way to talk about how the 7960X would throttle itself automatically, which doesn't sound like a product that's designed to burn itself out: [url]http://www.tomshardware.com/reviews/intel-core-i9-7960x-cpu-skylake-x,5238-2.html[/url]

      Anandtech also noted nothing particularly surprising about the power consumption numbers: [url]https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review/14[/url]

      Note that the 16-core 7960X is clearly consuming [b]less[/b] power under load than the vaunted Threadripper. Doesn't sound like it's designed to "burn out" to me.

        • jts888
        • 2 years ago

        It’s not as though mobo VRMs have infinite capacity, but a CPU that isn’t even trying to stay near its spec limits is not good, for a number of reasons. I don’t expect that AMD does a perfect job with this either, but I’m curious, to say the least, to see how well the new Intel parts behave, given that their TDPs are ambitious for the core counts.

          • chuckula
          • 2 years ago

          I’m saying that motherboards already limit the power delivery to below what the VRMs can inherently deliver by default, unless you go out of your way to turn off those protections. The linked article above shows exactly that occurring on the reviewed MSI motherboard.

          I’d appreciate some links to the reviewers who claim that the i9 parts don’t have power/thermal protections turned on, since I posted multiple links to large review sites that clearly show these protections being turned on [b]even in an overclocked part[/b]. If it’s an issue of some hardware site turning off every conceivable protection in order to win an overclocking competition, then it’s not some grand conspiracy.

            • jts888
            • 2 years ago

            The criticism was that the instantaneous peaks are unconstrained, which is something the mobo can’t actually control. The VRMs just keep banks of capacitors charged and have no way of enforcing a short-term current limit on a CPU, a screwdriver, or anything else that discharges them. The idea is that CPUs letting themselves slurp a kiloamp through their pins, even for just a few milliseconds at a time, probably isn’t good.

            • psuedonymous
            • 2 years ago

            [quote]The idea is that CPUs letting themselves slurp a kiloamp through their pins, even for just a few milliseconds at a time, probably isn't good.[/quote] Extremely high instantaneous peak loads aren't exceptional, they're the norm.
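
            (For rough scale on why large instantaneous currents are routine: the ~1 V core voltage below is my assumption; the 165 W figure is the rated TDP of these parts.)

[code]
# Steady-state current draw at TDP: I = P / V.
tdp_w = 165.0    # rated TDP of the 7960X/7980XE
vcore_v = 1.0    # assumed typical load voltage
print(f"~{tdp_w / vcore_v:.0f} A sustained at TDP")  # ~165 A, before any transients
[/code]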

            • cegras
            • 2 years ago

            By extremely high, do you mean as high as what jts888 originally said? Put a number on the ‘norm’ you’re talking about.

    • Thresher
    • 2 years ago

    Could you put the mainstream chips on there as well, so we can compare them to the HEDT chips?

    • Chrispy_
    • 2 years ago

    Looks to me like Skylake-X has about a 20% IPC advantage over Zen when comparing at core/clock parity.

    That’s impressive, but at the same time not as big a jump as the Intel zealots were screaming about.

    Additionally, it’s interesting that Intel is still pricing higher than AMD in terms of performance per dollar; look at the 7900X and the 1950X, which are both $999. As Krogoth has already mentioned, the performance and price scaling of the 7980XE has gone completely out the window. These aren’t consumer parts, they’re server rejects, and Intel likely doesn’t care about their value, nor does it have many of them to actually sell.

      • chuckula
      • 2 years ago

      [quote]Looks to me like Skylake-X has about a 20% IPC advantage over Zen when comparing at core/clock parity.[/quote]

      Where are you getting that from? That graph shows a 16-core, 2.8 GHz base-clock Skylake-X part way further ahead than 20% over a 16-core, 3.4 GHz base-clock 1950X. Given those clockspeed disparities, if the 7960X had flat-out tied the 1950X it would still have a 17.5% IPC advantage. That's not even taking into account that TR probably has maybe (maybe) one AVX-512 benchmark in the mix for this round of testing.

        • tay
        • 2 years ago

        Why is this down-voted to hell?

          • mistme
          • 2 years ago

          Anti chuckula brigade.

        • the
        • 2 years ago

        Those are base clocks, but what were the actual clocks of each, with turbo, under load during testing?

          • chuckula
          • 2 years ago

          Considering that the 7960X under a best-case 16-core workload only hits 3.6 GHz, as you can see [url=https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review]here[/url], and that the vast majority of TR's non-gaming benchmarks are designed to stress the cores, it's pretty clear that Ryzen gets a clockspeed advantage unless its turbo boost levels are basically fabrications.

          Worst case, with no turbo boost at all, Ryzen is running all 16 cores at 3.4 GHz, and the 1920X has a worst case of 3.5 GHz. There is probably a very small single-core clockspeed advantage for the 7960X of 100 MHz, since the 1950X is purportedly capable of hitting 4.1 GHz... assuming AMD is being truthful.

          That means that, giving AMD every possible benefit of the doubt by assuming the 7960X is literally always at its maximum turbo frequency without any variation and the 1950X literally never turbo boosts at all, the 1950X parts are "handicapped" with at most a 5.5% clockspeed disadvantage in multi-threaded workloads and at most a 2.4% clockspeed disadvantage in single-threaded workloads.
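
          (A quick check of the worst-case percentages above, using the clocks quoted in the comment; the small differences from the quoted figures come down to rounding.)

[code]
# Worst-case clock comparison as laid out above.
intel_all_core = 3.6   # GHz, 7960X best-case 16-core turbo
amd_all_core   = 3.4   # GHz, 1950X base clock, assuming no boost at all
intel_single   = 4.2   # GHz, assumed 7960X single-core turbo (100 MHz over the 1950X)
amd_single     = 4.1   # GHz, 1950X claimed single-core boost

print(f"multi-threaded clock deficit:  {1 - amd_all_core / intel_all_core:.1%}")  # ~5.6%
print(f"single-threaded clock deficit: {1 - amd_single / intel_single:.1%}")      # ~2.4%
[/code]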

        • Chrispy_
        • 2 years ago

        Well, it’s napkin maths based on that graph, and I used the words “about 20%”, so don’t pull it apart too hard or it’ll unravel.

        Here’s what I did:
        7960X: 16 cores at 3.6 GHz all-core speed = 205% on that graph
        1950X: 16 cores at 3.4 GHz all-core speed = 165% on that graph

        (205/3.6)/(165/3.4) = 1.173 (or 1.2 when I was doing it in my head, hence the 20%).

        I was under the impression that Threadripper didn’t boost much when all cores were loaded, but a quick Google shows lots of screenshots with it running at 3.5 and 3.6 GHz, which would change the IPC difference to 21% and 24% respectively. Then again, I don’t know for certain yet whether the 7960X sticks to its all-core speed or boosts higher than 3.6 GHz when fully loaded, either.

        Like I said, it’s napkin maths based on a few unknowns still, so I’ll wait for Jeff’s full review.
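
        (The napkin maths above, written out so the sensitivity to the assumed 1950X all-core clock is easy to see; the graph percentages and clocks are the ones quoted in the comment.)

[code]
# Per-clock comparison from the quoted graph scores.
score_7960x, clock_7960x = 205, 3.6   # graph %, assumed all-core GHz
score_1950x = 165                     # graph %

for clock_1950x in (3.4, 3.5, 3.6):   # candidate 1950X all-core clocks
    ratio = (score_7960x / clock_7960x) / (score_1950x / clock_1950x)
    print(f"1950X @ {clock_1950x} GHz -> ~{ratio - 1:.0%} per-clock advantage")
# -> ~17%, ~21%, ~24%, matching the figures above.
[/code]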

    • chuckula
    • 2 years ago

    So Intel just sent you multiple review samples before the NDA deadline without requiring us to publicly complain about it?

    The only thing less professional than that would be if Intel didn’t use ridiculous packaging for review units with a separate NDA for product unboxing on Youtube!

    Amateur hour Intel, amateur hour.

    Please tell me they at least packaged the parts in a huge, flashy, non-recyclable container instead of those boring easily recyclable cardboard boxes!

    • Krogoth
    • 2 years ago

    The 7960X and 7980XE are just server-tier chips trying to pass themselves off as HEDT chips, without any of the server-tier goodness but while still keeping the server-tier price tag.

    If AMD wanted to be cheeky, they could try pimping out a single-socket Epyc as a “Threadripper FX” to one-up these chips in core count and in most enterprise, non-AVX-dependent workloads. I’m quite aware that you’d have to get an SP3 board for it, though, since TR4 doesn’t have the interposer/tracing for the extra CCXes/dies.

      • chuckula
      • 2 years ago

      Oh yeah, these numbers [b]totally[/b] prove that AMD is just toying with Intel right now. Oh wait, you probably wrote that comment 6 weeks ago.

      If Intel wanted to be "cheeky", they could end AMD's hopes in this entire market segment by dropping the price of the 7960X to $1000 and calling it a day. The fact that they don't shows how much Intel goes out of its way to keep AMD viable, not some magical technical superiority of slapping multiple chips together to get moar coarz.

      Unlike your comment, mine wasn't copy-n-pasted, and it's based on TR's actual numbers.

        • NTMBK
        • 2 years ago

        Except Intel’s yields on the 7960X are probably in the tank due to the fact that it’s an enormous 18-core die, while AMD can just take two high-yielding 8-core dies. Yield scales non-linearly with die size… so have fun with that price war.

        Intel clearly agrees that the multi-die approach is better, because they plan to adopt it:

        [quote]On speaking with Diane Bryant, the 'data center gets new nodes first' is going to be achieved by using multiple small dies on a single package. But rather than use a multi-chip package as in previous multi-core products, Intel will be using EMIB as demonstrated at ISSCC: an MCP/2.5D interposer-like design with an Embedded Multi-Die Interconnect Bridge (EMIB).[/quote]

        [url]https://www.anandtech.com/show/11115/intel-confirms-8th-gen-core-on-14nm-data-center-first-to-new-nodes[/url]
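
        (A toy illustration of the non-linear yield point, using a simple Poisson defect model; the defect density and die areas below are illustrative guesses, not anyone's published numbers.)

[code]
import math

# yield = exp(-D * A): D = assumed defects per cm^2, A = die area in cm^2.
D = 0.2
big_die = 4.84     # cm^2, roughly an 18-core HCC-class die
small_die = 2.13   # cm^2, roughly one 8-core Zeppelin-class die

y_big = math.exp(-D * big_die)      # ~38% of big dies come out clean
y_small = math.exp(-D * small_die)  # ~65% of small dies come out clean

# Wafer area consumed per sellable part (two small dies per package).
print(f"one big die:    {big_die / y_big:.1f} cm^2 of wafer per good part")
print(f"two small dies: {2 * small_die / y_small:.1f} cm^2 of wafer per good pair")
[/code]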

          • chuckula
          • 2 years ago

          [quote]Except Intel's yields on the 7960X are probably in the tank due to the fact that it's an enormous 18-core die[/quote]

          People like to project without proof. Lemme put it to you this way: Anandtech and those Youtube reviewers didn't get individually laser-etched i9 parts with their names and some phony product number on them. That's because you wouldn't feel all that special about receiving product 1,425,071, would you?

          As for EMIB, that's a next-generation interconnect designed to tie together multiple different types of silicon that aren't necessarily even made on the same lithographic process. Its performance numbers are good enough to get two pieces of silicon acting like they are actually a single die (which a couple of DRAM channels' worth of bandwidth can't do in a supposedly high-end chip), but EMIB goes way, way beyond merely slapping moar coarz onto a PCB.

            • NTMBK
            • 2 years ago

            There’s a reason I put “probably” in that sentence you quoted, genius. I’m making a guess based on the facts that a) the economics of how yields scale with die size are well known, and b) these parts are launching several months after the parts based on the smaller 10-core die.

            EMIB is a better execution of the idea, for sure, but it’s the same concept. Process shrinks are getting harder and harder (see also the repeated Cannonlake delays), meaning that the economics of splitting a huge chip into multiple chiplets make a lot more sense. Nvidia is researching it, Intel is researching it, and AMD is just the first to ship products built around it.

            • chuckula
            • 2 years ago

            If there’s a “probably” in there then it’s awfully ironic that AMD isn’t held to the same standard.

            After all, the purportedly $599 [snicker] Vega 64 is made on a die that’s practically the same size as a fully-enabled 18-core 7980XE. Funny how Vega’s supply problems are all pinned on big, bad HBM2 when, according to you, it’s clearly not possible for GloFo to make chips that big for the consumer market while Intel is targeting chips of the same size at the much smaller enthusiast market.

            • Waco
            • 2 years ago

            HBM supply is limited due to fabs switching a ton of capacity to NVRAM (flash). DRAM will be following suit in the near future (price hikes and lack of availability) as well…

            • chuckula
            • 2 years ago

            My deeper point is that it’s a lousy argument, not based in fact, to claim that Intel can’t have good yields on chips that are practically the same size as the supposedly consumer-grade Vegas, parts that are supposed to have cheaper MSRPs than 2016-era GTX 1080 parts built on 320 mm^2 dies. Especially when the Core i9 parts are not intended to be all that cheap in the first place.

            Lemme put it to you this way: how many people here think, with a straight, non-AMD-delusional face, that AMD makes a bigger profit selling Vega even at the inflated $700 price than Intel would if it decided that the 7980XE (with the same die size as a Vega 64 and absolutely no need for an interposer or HBM2) should be sold for $1000? I’m sure there are koolaid drinkers here who think that, but that conclusion isn’t based on this thing called “reality”.

            • DragonDaddyBear
            • 2 years ago

            You do realize that the best Vega dies are being sold as enterprise parts for well over $1,000, right?

            • chuckula
            • 2 years ago

            You do realize that the best HCC dies are being sold as enterprise parts for well over $2000 right?

            And by “being sold” I actually mean they are being bought by the truckload by enterprise buyers.

            • Waco
            • 2 years ago

            Oh, I didn’t delve into that argument for good reason. 🙂

            • NTMBK
            • 2 years ago

            When AMD starts clocking their GPUs to >4GHz, I’ll start holding GPUs to the same standard.

            • chuckula
            • 2 years ago

            That’s pretty much irrelevant since the clockspeeds are based mostly on core logic design. But I do find it amusing how you defend a 200mm^2 chip from AMD that can’t break 4GHz reliably as being “high yield” while insulting a true 16-core chip that can do an all-core overclock north of 4GHz almost by accident.

            Yeah, Intel can’t produce chips. That’s it.

            If anything, Intel should have it easier because transistor-count wise Vega is a bigger part than the HCC Xeons, and you have to have all your transistors working in a functional chip regardless of their clockspeed.

            Once again, we get easy upthumbs from the koolaid gang for claiming that Intel can’t produce complex chips. Then next month, when AMD makes it blatantly and publicly clear why Raj just left to “spend time with family” and Intel reports another massive profit, we’ll learn why these feel-good, upthumb-bait statements are just fantasy.

            • Waco
            • 2 years ago

            Irrelevant comparison.

            • jts888
            • 2 years ago

            EMIB will be great if/when it works, but it’s gonna have substantial assembly costs and defect rates for the near future. And while interposers and EMIBs are only needed for pretty high lane trace densities, Epyc shows that clustered designs with more limited inter-cluster bandwidth can at least sometimes get away with lower density interconnects over traditional organic substrates.

            In some hypothetical near-term mixed-node processor, what components do you foresee needing >100 GB/s connectivity?

            • Waco
            • 2 years ago

            On-die cache, memory controllers, IO controller, network connectivity, etc.

            There are many things that are compromised for a single-fab node/die solution.

            • jts888
            • 2 years ago

            Interposers/EMIBs are only useful and necessary for situations where you need several hundred or more parallel signal traces per mm and can tolerate a few extra clocks latency.

            Network IO doesn’t come close to the signal density pain point for normal organic substrates, and even DDR4 memory controllers don’t really quite get there. On the other side of things, caches could definitely require enough bandwidth, but the extra latency cost is an argument for keeping them on-die.

            I can only really see EMIB succeeding as a replacement for things where interposers are already used: big HBM local memory pools, and maybe super tightly coupled coprocessors. Intel could maybe slice up a big mesh topology CPU with EMIBs in a way that’s not feasible with substrates, but it’s hard for me to imagine them saving any money using them for IMCs, south bridges, caches, etc.

            • Waco
            • 2 years ago

            Network IO is on the order of many tens of GB/s for the upcoming designs. I can imagine many ways to bundle the IO portions of the system into something that’s connected directly to the die at >100 GB/s.

            • jts888
            • 2 years ago

            400 GbE is a ways off, much less on-CPU 400 GbE, if for no reason other than being choked by off-package DDR bandwidth. Remember that one of Intel’s first attacks on Epyc was that it lacked enough memory bandwidth to service all its theoretical PCIe traffic. There are certainly some speculative use cases where EMIB looks attractive, but Intel will need some more compelling near-term ones.

            • Waco
            • 2 years ago

            The next few years isn’t “a ways off” in my mind.

            • jts888
            • 2 years ago

            There might be 400 Gb switch interlink and uplink ports before then, but a single-port 400 GbE NIC will require an x16 PCIe 5.0 slot, the spec for which isn’t even expected until 2019. And on-CPU MACs won’t arrive until a couple more years after that, EMIB or not.

            So my expectation is 3+ years in the absolute best case, not a “few”. I’d love to be wrong (mostly so 100 GbE gets pushed into sane price brackets), but I won’t hold my breath.
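
            (The lane math behind the x16 PCIe 5.0 claim above; the per-lane rates are the spec transfer rates with 128b/130b encoding, and other protocol overhead is ignored for simplicity.)

[code]
# Can an x16 slot feed a single 400 GbE port (per direction)?
ENCODING = 128 / 130
needed_gb_s = 400 / 8                      # ~50 GB/s of Ethernet payload

for gen, gt_s in {"3.0": 8, "4.0": 16, "5.0": 32}.items():
    x16_gb_s = gt_s * ENCODING * 16 / 8    # GB/s for an x16 link
    verdict = "enough" if x16_gb_s > needed_gb_s else "not enough"
    print(f"PCIe {gen} x16: ~{x16_gb_s:.1f} GB/s -> {verdict}")
[/code]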

            • Waco
            • 2 years ago

            You’re limiting your thinking to PCIe. 🙂

            • jts888
            • 2 years ago

            I’m limiting myself to the belief that discrete network adapters will always precede lower level integration variants. EMIB could in theory change things somewhat, but I’d still be amazed if integrated 200/400 Gb interfaces come out nearly as quickly, with the possible exception of dedicated NPUs like Cavium’s.

            • Waco
            • 2 years ago

            Perhaps. We’ll see in the next few years. 🙂

            • Krogoth
            • 2 years ago

            It is just a trade-off. Massive single-die designs are difficult to fab without yield issues. Economic realities seem to be making multi-die designs the future of high-core-count SKUs.

            • tay
            • 2 years ago

            Testy…

            • chuckula
            • 2 years ago

            I’m getting a little tired of blind worship of AMD being considered OK while any rational and reasonable statement about a very exciting and actually innovative product from Intel gets attacked by the echo chamber.

            The Skylake X line shows a lot of actual innovation that goes way beyond merely blabbing about “more cores.”

            Even this snippet from TR's preview shows that the performance delta from the old Broadwell 6950X up to the "miracle" Threadripper, with its 60% more cores and higher clockspeeds, is noticeably [b]smaller[/b] than the performance delta from the 1950X up to the i9-7960X, which has an equal core count, lower clockspeeds, a smaller cache, and better power efficiency than Threadripper. Oh, and TR's review doesn't even turn on AVX-512. So imagine what happens once the 7960X has one hand untied from behind its back.

            I don't call anything Intel did a "miracle" like the koolaid gang did when AMD copied Haswell. I call it solid engineering, and Intel deserves to be congratulated for designing the chips that AMD fanboys will be drooling over when AMD gets the copies out in 2021.

            • Krogoth
            • 2 years ago

            Intel worship is just as irritating. Zen is hardly a “Haswell” copy, either. That makes as much sense as calling Skylake and its younger siblings souped-up Pentium IIIs, which is hardly the case.

            • chuckula
            • 2 years ago

            You’re right of course.
            Haswell has a proper AVX implementation and a cache hierarchy that scales better.

            • Krogoth
            • 2 years ago

            It is because AVX is Intel’s baby. It would be an embarrassment if Intel couldn’t get it right on the first try, and a further embarrassment if their competition had a better implementation. AMD only got AVX via the x86 cross-license agreement; they don’t have a vested interest in it either.

            Scalability in the cache hierarchy is entirely dependent on the workload you are throwing at the CPU. Saying Haswell is outright superior is facetious. Each has its own strengths and weaknesses.

            • Klimax
            • 2 years ago

            Same goes for all SSE versions with one exception… Your point is not exactly valid.

            • Goty
            • 2 years ago

            Maybe they’re just fanboys like you?

        • Krogoth
        • 2 years ago

        Such projection.

        Intel could easily engage in a price war, but I doubt their shareholders are willing to give up the tasty profit margins they’ve enjoyed since Sandy Bridge-E so easily.

        If anything, AMD just lit a fire under Intel to release something beyond a minor evolution of Sandy Bridge-E in the HEDT market. They brought sorely needed competition to this market.

        • kuraegomon
        • 2 years ago

        No, it just proves that Intel has no particular desire to find out whether the current administration will _completely_ defang the FTC and DoJ Antitrust Division, or not.

      • floodo1
      • 2 years ago

      Keep dreaming

    • NTMBK
    • 2 years ago

    The 7980XE looks like a total waste of money, but that’s pretty much always been true of the top end Extreme Edition model. 7960X looks a little bit overpriced, but you definitely get a nice jump in performance over Threadripper.

    It’ll be interesting to see how the individual benchmarks break down. Are there any with AVX-512 support yet?

    • jarder
    • 2 years ago

    Hmm, the only thing I’m getting from that graph is that the 7960X looks good. It’s a little on the expensive side, but that’s to be expected at the top end of the market.

    The 7980XE, on the other hand, is a complete waste of time; the e-peen of the CPU market, if you will.

      • Klimax
      • 2 years ago

      Those extra cores will work, but it is unlikely there are many benchmarks that can actually use them effectively.
