Rumor: Intel Coffee Lake will take hexa-core CPUs mainstream in 2018

Laptops with Intel's Kaby Lake CPUs inside are becoming more and more common, but the company's seventh-generation CPUs have yet to materialize on the desktop. That's not stopping the rumor mill from dredging up information on products that are further out still. Today's leak comes from BenchLife and contains info about Coffee Lake CPUs, which are purported to come out in 2018. Most tantalizingly, the new chips might come in variants including a six-core desktop part.

Source: BenchLife

What we can glean from Google's translation indicates that Coffee Lake (henceforth known as CFL) will come in U, H, S, and X lineups. The H parts seem to be aimed at mobile devices, the S parts at desktops, while the X model will be the high-performance part for Socket 2011. BenchLife suggests the U variant has a "4+3e" die arrangement, which we can interpret as a quad-core CPU with Iris Pro graphics and eDRAM on board. The listed die size is unusually large, something that would at first sight suggest a desktop part. However, the U label has historically been reserved for mobile parts, so your guess is as good as ours.

The rumored desktop S parts will come in four- and six-core flavors. Don't expect much in the way of GPU changes in the new lineup, since all the listed variants mention Kaby Lake's "Gen9" graphics architecture. Meanwhile, CFL-H will only be available with six cores. Interestingly enough, there's no mention of any dual-core parts at all. That might lead one to wonder whether Intel will just keep Kaby Lake humming along to fill out the range of low-end CPUs.

Source: BenchLife

The leaked slide mentions an "LGA 37.5 mm x 37.5mm" socket for CFL-S, but it's tricky to assume that means Intel will continue using the LGA 1151 socket, especially since a new chipset called Cannon Lake will reportedly underpin the CFL chips. Cannon Lake should bring native USB 3.1 support and direct connectivity to NFC readers and fingerprint sensors, in what we presume is a Secure Enclave-like arrangement.

BenchLife says all these new chips will arrive in 2018. The desktop S version will reportedly arrive in February 2018, while the U variant will come in March of that year. Meanwhile, CFL-H should arrive in April 2018.

Comments closed
    • mganai
    • 3 years ago

    I think you made a typo with CHL.

    Also, Cannonlake will be for dual cores for U/Y, topping out at 28 W. I’m guessing CFL-U will be non-HT quad cores.

    I’m confused about what they’re doing with the -X right now. Is it going to be in between -S and what’s been known as -E?

    • ronch
    • 3 years ago

    6-core is only now going mainstream? I can go to the nearby grocery store just a few meters from our house where there’s this small booth selling local brand cellphones that have 8-core Mediatek chips for just around $90. If that’s not mainstream I don’t know what is.

      • Pancake
      • 3 years ago

      That’s Chinese numerology and marketing for you. 8 is a lucky number and MOAR COARZ!!! Anybody who could afford it would rather buy an iPhone with 2 cores.

    • GrimDanfango
    • 3 years ago

    When are they going to do another i7-5775c? I’ve seen evidence of marked benefits in quite a few games, often above or at least equaling Skylake’s performance, due to the Iris Pro’s memory used as an L4 cache, yet MS appear to have all but abandoned the idea of socketed Crystalwell, in any quarter.

    Coincidentally, it also appeared to have significant benefits for computational fluid dynamics, so if they ever decided to slap Crystalwell into one of their 6/8-core “extreme” processors, I’d absolutely take their friggin’ arm off! I mean really, drop the actual Iris GPU for all I care, just gimme the cache, and I’ll give you the cash!

    Why do they seem so uninterested in pursuing that approach?

      • UberGerbil
      • 3 years ago

      [quote]yet MS appear to have all but abandoned the idea of socketed Crystalwell[/quote] Either you know something about how these products come about that the rest of us don't, or you've mixed up your Evil Tech Corps™

      • Anonymous Coward
      • 3 years ago

      Notably IBM is eating a bit of Intel’s server market using processors with huge caches (and heavier use of SMT, and more bandwidth).

      • TheRazorsEdge
      • 3 years ago

      Cache takes up a lot of space. This reduces the number of CPUs they can get from each silicon wafer, which increases costs and slows production.

      Parts with eDRAM will remain a small niche in the workstation market.

      It is far more likely to see widespread availability in the server market where the demand and the prices will be more appealing.

      • techguy
      • 3 years ago

      It’s a great question, and you’re preaching to the choir with this audience I would say, due to our awareness of the performance of Crystal Well type solutions thanks to TR’s excellent piece on the 5775c awhile back.

      • Krogoth
      • 3 years ago

      The i7-5775C is just a happy accident. The "L4 cache" on it was really meant for the integrated GPU.

      Intel wants you to get their Xeon E5 and E7 series chips if you want massive on-die caches.

        • travbrad
        • 3 years ago

        Yep I agree. It still sucks for those of us who would be interested in such a chip though, and I bet they would have made it happen if they had any competition from AMD.

        I’d actually be willing to pay more for a 4c/8t CPU with L4 than even a 6c/12t CPU without, let alone a regular 4c/8t CPU….just not Xeon prices and especially not giving up the clockspeed which going Xeon forces you to do.

          • Anonymous Coward
          • 3 years ago

          I’d consider 2c/4t with a massive L4 and high turbo. Certainly going past 8t is impossible to justify on a desktop or laptop, for me.

            • Krogoth
            • 3 years ago

            The market that would like such a processor is so tiny that it doesn’t justify the R&D and production costs to fabricate a separate die for it.

            • Anonymous Coward
            • 3 years ago

            Is the market small because it lacks merit, or because of need-perception?

            I agree that the actual cost of making such a processor would be too high, because the CPU cores themselves would be a pretty modest part of the chip. But they could sell me a 4c/8t with a disabled core or two. Or a 4c/4t if they like.

        • jihadjoe
        • 3 years ago

        A Bob Ross moment!

    • short_fuze
    • 3 years ago

    It needs its own [url=http://www.darkhorse.com/Comics/97-341/Too-Much-Coffee-Man-Special]logo[/url].

    • VincentHanna
    • 3 years ago

    If bringing hexacores mainstream is going to be the catalyst for doing away with 2core CPUs, I’m all for it.

    If we are all to maintain the fiction of 2+2 cores (*cough* i3) being a legit product segment in 2017 then I don’t care that much.

    I can, however, say that I have been very happy with my 5ghz sandybridge hexacore and welcome the opportunity to share the experience with more of my less fortunate brothers.

      • Anonymous Coward
      • 3 years ago

      Dunno I think the truth is 2 cores + HT meets the needs of almost everyone. The sad truth. Going 4 cores as standard doesn’t mean there is going to be something useful for them to do.

        • VincentHanna
        • 3 years ago

        [quote]I think the truth is 2 cores + HT meets the needs of almost everyone.[/quote] So does 1000-grit toilet paper, if you want to define "need" that way.

          • travbrad
          • 3 years ago

          Please enlighten us about all the tasks where the average user would see a benefit from 4+ cores.

            • VincentHanna
            • 3 years ago

            Please see my sandpaper analogy.

            It’s not about accomplishing MOAR tasks, its about a better experience accomplishing the same tasks. Yes, even your Aunt Gertrude will be able to appreciate the difference in a side by side comparison.

            However, i will say that we have gone off the rails a bit. My original point was not “everybody needs 4+4 to be happy” my original point was the i3 2+2 segment needs to die. 4+0 should be the BASE processor. 4+0 is cheaper to make, can clock higher, uses virtually the same silicon, costs less in R&D and Royalties and has a slight advantage over 2+2 for multithreaded tasks.

            Put simply, 2+2 should not exist.

            • Chrispy_
            • 3 years ago

            There’s no way 4+0 is cheaper than 2+2, sorry.

            Cost is directly related to die area, and SMT provides a way of processing two threads for <5% more die area. In most cases the performance gain from SMT is far in excess of this tiny die-area cost, sometimes up to 60%.

            It’s more a case of 4+0 should not exist, everything should have SMT and the artificial segmentation to disable it on some Intel chips is wasteful.
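As a back-of-the-envelope check on the trade-off described above (the <5% extra area and up-to-60% extra throughput are the comment's estimates, not measurements), SMT's throughput-per-die-area gain works out to roughly 50% under those figures:

```python
# Back-of-envelope perf-per-area math for SMT, using the figures from
# the comment above: <5% extra die area, up to 60% extra throughput.
def perf_per_area(perf, area):
    """Relative throughput divided by relative die area."""
    return perf / area

plain = perf_per_area(1.00, 1.00)  # baseline core, no SMT
smt = perf_per_area(1.60, 1.05)    # +60% throughput for +5% area

gain = smt / plain - 1
print(f"SMT throughput-per-area gain: {gain:.0%}")
```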

            • Peldor
            • 3 years ago

            Exactly. Well said

            • VincentHanna
            • 3 years ago

            I would be fine with 2+2, 4+4 and 6+6 as an alternative product seg to 4+0, 4+4, 6+0.

            What I am not okay with is 2+2, 4+0, 4+4 because it is borderline dishonest.

            That said, I still disagree with you. The i3 and i5 die sizes are virtually identical, so I doubt there is much difference in the cost to produce silicon-wise. 60% for <5% or 100% for <20%. In both scenarios it is far outweighed by other costs.

            The only reason both chips exist is the [b]"cost"[/b] of not charging premium prices for the 4/4 and 4/8 chips.

            • Chrispy_
            • 3 years ago

            [quote]I would be fine with 2+2, 4+4 and 6+6 as an alternative to 4+0, 4+4, 6+0[/quote] Hopefully AMD being competitive next year will force Intel to dump their silly artificial segmentation that wastes the SMT feature on some dies.

            4+x is still more expensive than 2+x. Die area differences may not be that much smaller (I think it's about 130mm^2 for quad+GT2 vs 100mm^2 for dual+GT2) but die area alone is not the only consideration. It's an indicator of thermal design limits, interconnect complexity, cache size and other things.

            The GT2 graphics are more than 50% of the chip, but we have to accept that CPU progress is being driven by the mobile sector and the desktop market is dead/dying. I don't like that, but without any significant competition or performance progress in the last half decade, software devs have played it safe, so mobile has become more viable than ever and has actually overtaken desktop by a huge margin. Efficiency and adequacy are the drivers of CPU design now, rather than performance and progress.

            • Anonymous Coward
            • 3 years ago

            You don’t seem to be interested in other points of view.

            • VincentHanna
            • 3 years ago

            Nope. Not really.

            Someone can argue all day to me about the merits of 56k modems: how great they are, and how they meet everyone's [b]needs[/b]. That's the word you used. And then they can put their money where their mouth is.

            Faster boot times, faster web browsing, better internet video playback, faster program access times. On and on. It's a noticeable difference, and I've experienced the lack often enough to know how grating it is when the CPU bottleneck is present, even on "common" workloads.

            • Anonymous Coward
            • 3 years ago

            Your analogies are terrible. Sandpaper and 56k modems?

            Also I have a 2×2 machine at home, it works as well as my 4×2 machines for the light tasks it needs to do.

            Also at work, they are buying 2×2 laptops to replace 4×2 laptops because it is not necessary to continue with the quads. Few people in the company give a crap about the power of the old W520 and W530 machines which are being retired, they want thin and light with SSD and plenty of RAM.

        • kamikaziechameleon
        • 3 years ago

        umm… sure…

        I’m not sure that is truly reflective of efficiency.

        Without considering wattage (because I don't know how that figures in), please consider this: we've long since surpassed the CPU bottleneck for the average user. Hard drive, RAM, and chipset design are the greatest sources of sluggish UI for locally operated applications.

        BUT does that mean we don't want better CPUs? The reality is we should continue to progress the efficiency and power of these components (while we let everything else catch up) because ultimately new functionality will be unlocked as a result.

        4 cores clocked at 2 GHz will outperform 2 cores at 4 GHz when comparing equally optimized code. Again, not sure how this translates to performance per watt, but I think it actually favors more lower-clocked CPUs... feel free to correct me.

        Perhaps I misunderstand your statement.
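The cores-versus-clocks claim above can be checked with a minimal Amdahl's-law sketch (assuming identical per-clock performance per core): both configurations total 8 GHz of aggregate clock, so under this simple model they only tie for a perfectly parallel workload, and the higher-clocked pair wins whenever any serial fraction remains.

```python
# Amdahl's-law sketch: throughput of N cores at clock f (GHz) for a
# workload whose parallelizable fraction is p (0.0 to 1.0).
def throughput(cores, clock_ghz, p):
    # Serial part runs on one core; parallel part spreads over all cores.
    time = (1 - p) / clock_ghz + p / (cores * clock_ghz)
    return 1.0 / time

for p in (0.0, 0.5, 0.9, 1.0):
    quad = throughput(4, 2.0, p)  # 4 cores @ 2 GHz
    dual = throughput(2, 4.0, p)  # 2 cores @ 4 GHz
    print(f"p={p:.1f}: 4x2GHz={quad:.2f}, 2x4GHz={dual:.2f}")
# Only at p=1.0 do the two configurations tie (8.0 each).
```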

      • Krogoth
      • 3 years ago

      Single-core and dual-core CPUs will be around for yield and [b]power consumption[/b] reasons. It is like arguing that there's no place for small-displacement, low-cylinder-count engines.

        • Anonymous Coward
        • 3 years ago

        That engine analogy is pretty poor considering the trajectory of engine designs worldwide.

          • Krogoth
          • 3 years ago

          The point is that you don't *need* massive power plants and engines for everything, nor is it desirable.

          Likewise, you don’t need quad-core processors for embedded environments where power consumption is a massive factor. There are still places for single-core and dual-core CPUs in 2016.

    • lycium
    • 3 years ago

    Totally agree with the comments here: too many lakes, too few cores.

    I guess Intel are leaving the core count scaling until the very end of transistor scaling, and while I would probably do the same thing if I were in charge of answering to the investors, it’s just a sad, sad state of affairs for consumer computing.

    I bought my quadcore i7 920 with 12GB ram in 2008 and it’s ridiculously competitive almost a decade later, something like a factor of 2 compared to current i7 quadcores.

    • travideus
    • 3 years ago

    Just call it Coffee Lake. CFL is dumb.

      • jihadjoe
      • 3 years ago

      I really hated it when people started using FTL as a shortcut for ‘For The Lose’.

    • Mr Bill
    • 3 years ago

    This Coffee Lake come in different roasts, er SKU’s? Ho HO Welcome to the dark side!

    • Krogoth
    • 3 years ago

    It will likely be only for high-end SKUs though. The rest of the line-up will be four-core parts, with and without HT.

    • dikowexeyu
    • 3 years ago

    6 cores today is way too little…

      • lycium
      • 3 years ago

      *few

        • Srsly_Bro
        • 3 years ago

        nice post. Few people notice the difference.

          • Anonymous Coward
          • 3 years ago

          But little people do notice.

        • EndlessWaves
        • 3 years ago

        Actually, little is more common:
        [url]https://books.google.com/ngrams/graph?content=way+too+few%2Cway+too+little&year_start=1925&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cway%20too%20few%3B%2Cc0%3B.t1%3B%2Cway%20too%20little%3B%2Cc0[/url]

          • BurntMyBacon
          • 3 years ago

          That link doesn't tell you much about usage. "Way too little" is appropriate for quantities (i.e., there is way too little water in the pool). "Way too few" should be used for counts or discrete units (i.e., there are way too few people to do the job).

          Also, point of interest: just because 98 out of 100 people think that something is correct doesn't necessarily make it so. I read about a survey some years back that, due to the way it was worded, convinced 98% of the people surveyed that water shouldn't be allowed in the local river system. The point is, popularity is rarely a marker I would use to prove correctness.

            • EndlessWaves
            • 3 years ago

            [quote]Also, point of interest: Just because 98 out of 100 people think that something is correct doesn't necessarily make it so.[/quote] For an independent fact, no. For words, it absolutely does. Or at least, the way most people in a group use words is correct for that group. Sometimes the way people use words and the way they say they use words are quite different.

            [quote]"Way too little" is appropriate to use with quantities (I.E. There is way to little water in the pool.). "Way too few" should be used for counts or discrete units (I.E. There are way too few people to do the job.).[/quote] That's just a rule for people wanting to right in a particular style. The standard usage of those words is much more jumbled up, with lots of examples of countables where less is the standard usage. You may be contacted in 28 days or less, go through a fast checkout because you've got 5 items or less, or prefer novels with less than 500 pages.

            [quote]That link doesn't tell you much about usage.[/quote] That's perfectly true. I'm not an American English speaker, so maybe my assumption that it was a stock phrase rather than item-specific was off. Let's try a specific phrase relating to cores and see how it's used then. How about "four cores or _____"?

            There were 9,280 ghits for "four cores or less."
            There were 5 ghits for "four cores or fewer."

            If we change it to a number, we have:
            15,100 ghits for "4 cores or less"
            1,560 ghits for "4 cores or fewer"

            It still looks like little is preferable to few in this context, but I am interested in seeing evidence otherwise.

            • raddude9
            • 3 years ago

            [quote]That's just a rule for people wanting to right in a particular style.[/quote] .... and you've just lost your credibility as an authority on language...

      • Krogoth
      • 3 years ago

      There are SKUs with more cores available on the market. If you wait just a little longer you can get yourself a 32-core Skylake-EP chip.

      You are not the intended market for these chips.

      • Mr Bill
      • 3 years ago

      “Way too little” computing power? parallelism?

    • Welch
    • 3 years ago

    I’d buy one of these just to say I’ve got a coffee in my machine to much confusion of those not in the loop.

    Coffee on my desk, coffee in my puter, coffee at dinner time. When coffee’s in your computer, you can have computing anytime!

      • Redocbew
      • 3 years ago

      Coffee in your computer makes it faster, right?

        • EzioAs
        • 3 years ago

        Too bad you lose the ability to set it to sleep too.

      • Chrispy_
      • 3 years ago

      Don’t forget to install Java.

      • Srsly_Bro
      • 3 years ago

      Custom loop with coffee?

    • UberGerbil
    • 3 years ago

    We already have a perfectly good use for “CFL” denoting a kind of lighting, and acronyms are so 20th Century anyway. We’re well into the teen years of this new century, so the only way forward is to embrace it, slamming the door of our room and turning the music up loud and rapidly typing the following because our parents just don’t understand us: ☕⛵-U!

    • sweatshopking
    • 3 years ago

    TOO MANY LAKES

      • EndlessWaves
      • 3 years ago

      You have entered the lake district!

      woo-woo-woooo

      • NovusBogus
      • 3 years ago

      Not enough lakes.

      -Minnesota

        • Wirko
        • 3 years ago

        Not enough bridges, wells and lakes.

        – A gardener, Japan

    • EndlessWaves
    • 3 years ago

    The diagram is more interesting than the core counts.

    I see mention of DDR4-2400 (boo!), USB Power Delivery (hooray!), four USB symbols next to Alpine Ridge chips (4x Thunderbolt 3?), and also, in a separate part of the diagram, a 'DP 1.2 to HDMI 2.0' box (???)

    Also, no sign of GT4? Does that mean Intel have found themselves bottlenecked by memory bandwidth at GT3? If so, why only DDR4-2400 support? Curiouser and Curiouser.

    p.s. Gen 9.5 media? VP9-Profile 2 maybe?

    • NTMBK
    • 3 years ago

    Erm, I thought Cannon Lake was the 10nm CPU architecture? Not a chipset?

      • jts888
      • 3 years ago

      yeah that seems weird since the Cannon Lake name has been used for the 10 nm CPU shrink for years.

    • Flying Fox
    • 3 years ago

    Hmm… should I hold on for another year? 😛

      • Srsly_Bro
      • 3 years ago

      Yes

    • DancinJack
    • 3 years ago

    I think the BenchLife link on the second img is borked…

    • barich
    • 3 years ago

    Intel has been stuck at 4 cores for mainstream high-end desktop parts since 2007. Which is somewhat understandable since most applications don’t scale well beyond that. AMD’s ever increasing core count was more out of desperation since they couldn’t compete in single threaded performance.

    But I think it’s about time to move on. My nearly 4-year-old Sandy Bridge-E system still outperforms the fastest Skylake CPUs at rendering tasks that take advantage of the additional cores.

      • jts888
      • 3 years ago

      We’ve reached the point where individual browser tabs can saturate a core doing god knows what kind of JS witchcraft, so I’m personally OK with a shift to 6/8 core workstations for even non-“serious” work.

        • Airmantharp
        • 3 years ago

        Hell, this has been the main reason for making sure you get the hyperthreaded CPUs, even for gaming. And as long as Intel offers the larger CPUs at competitive clockspeeds (not losing more than a few hundred MHz, if any, to the quad-core parts) then they may find a future customer for these right here.

        • meerkt
        • 3 years ago

        I know what all that JS is doing. It’s called “ads”.

        The future is moar cores, moar ads! With Intel’s nextgen shrink (Milk Lake) your multi-tab ads could even each have a dedicated 3D processor to render it in glorious HDR UHD!

      • Krogoth
      • 3 years ago

      I wouldn't say that for Skylake chips that have HT. They will manage to overtake that old Sandy Bridge-E despite having only "four cores". There have been major strides in HT on post-Ivy Bridge chips.

        • Anonymous Coward
        • 3 years ago

        Hmm, I'm both curious and a bit lazy... do you have a good link showing a modern hyperthreading benchmark? I'll go try to find one myself if you don't have one in mind already. I'm thinking about the work-related aspects of this; maybe I need to refresh my expectations of HT.

        • barich
        • 3 years ago

        Benchmarks I've seen of what I have (the i7-3930K) show it's generally a bit faster than the i7-6700K in applications that can use all 6 cores. In applications that can't take advantage of that many cores (most of them, really), it's slower.

    • WhatMeWorry
    • 3 years ago

    I miss those days when I used to get excited about new CPU rumors.

      • chuckula
      • 3 years ago

      The only thing to get excited about is the rumors.
      The actual CPUs themselves never seem to live up to the hype.

    • atari030
    • 3 years ago

    Newsflash: Hexa-core CPUs went mainstream a long time ago (2010) with the Phenom x6’s.

      • chuckula
      • 3 years ago

      Don’t know why you’re being downvoted since you’re right.
      Not that those 6-core Phenoms were all that great* but they had 6 cores.

      * Although I’d take one of them over Bulldozer given the choice.

        • snowMAN
        • 3 years ago

        > Don’t know why you’re being downvoted since you’re right.

        Because they were/are outperformed by Intel quad-cores.

      • NTMBK
      • 3 years ago

      So at this rate Intel will catch up to AMD’s 8 core mainstream some time in the 2020s…

        • chuckula
        • 3 years ago

        WE CERTAINLY HOPE THEY’LL CATCH UP WITH US!
        — AMD

          • ronch
          • 3 years ago

          Well they eventually will. Look at how they eventually caught up with integrated memory controllers. AMD sure had them scrambling for 5 years figuring out how to do it.

      • Hattig
      • 3 years ago

      At least these ones come in a 45W configuration though (in the H SKU). Still, that’s a year after Zen, and we know there will be 65W 8C Zen SKUs so it’ll be interesting to see the landscape in 2018.

      • Anonymous Coward
      • 3 years ago

      I miss those chips, solid and no-nonsense. 6 real cores sits better in my mind than 4 cores with 2-way HT.

        • Mr Bill
        • 3 years ago

        I still use my 1100T.

          • JosiahBradley
          • 3 years ago

          I’m still running a 1090T passively cooled in my Linux workstation because its cores actually get utilized. Compiling and database loads are still well threaded even if lightly.

      • sophisticles
      • 3 years ago

      Those X6's were pretty nice; under Linux they were almost as fast as the much higher-clocked "8-core" Piledrivers in many tasks.

      • ronch
      • 3 years ago

      Followed by 8-core chips just one year later, in 2011. And AMD gives it to you for just around $270 at introduction, cheaper than 4-core i7 models. Go, AMD!!

      Um..

    • PrincipalSkinner
    • 3 years ago

    2018? Tsk.

      • chuckula
      • 3 years ago

      They are totally not getting to say first with a launch date in 2018.

    • chuckula
    • 3 years ago

    I didn’t choose the BenchLife.
    The BenchLife chose me.

    [Edit: “The leaked slide mentions an “LGA 37.5 mm x 37.5mm” socket for CFL-S, but it’s tricky to assume that means Intel will continue using the LGA 1151 socket, especially since a new chipset called Cannon Lake will reportedly underpin the CFL chips. ”

    The who to the what now?]
