Intel’s next Core chips will be built on 14-nm tech again

Ashraf Eassa from The Motley Fool was at Intel's annual Investor Day in California yesterday, and he tweeted a couple of interesting things. In the most surprising bit of news, Eassa confirmed that the next generation of Intel Core processors will once again be built on 14-nm technology. In a reply to AnandTech's Ian Cutress, he also reported that Intel's Vice President and General Manager of Client and IoT said that "process tech use will be fluid based on segment and what makes sense."

(Intel slide via Ashraf Eassa)

Those two bits of news don't seem to mesh all that well on their own, but Diane Bryant (Intel's Executive Vice President and General Manager of its Data Center division) displayed a slide with the statement "data center first for next process node" during the same presentation.

That move could imply that rather than letting its Xeons lag an architectural generation behind by producing them on the same process as consumer parts, Intel means to fab Xeons on a higher-density process first. Thanks to that favored status on Intel's foundry roadmap, those chips would achieve higher performance and, in turn, command higher selling prices and margins. It would also imply that Intel's consumer chips will no longer follow a predictable "process-architecture-optimization" model.

Attentive gerbils are no doubt already counting on their fingers, since this means the "8th-generation" Intel Core processors will be the fourth series to be fabricated on a 14-nm process. As we saw with Kaby Lake, Intel's process optimizations alone don't tend to offer large performance improvements over the previous generation. If the company isn't making significant architectural changes in the new chips, we might be in for another mild improvement.

Alternatively, considering the statement regarding "fluid" process tech usage, we may be seeing some mixture of 14-nm and 10-nm process technologies across various product segments. Intel's product stack is already confusing, but it may soon become downright convoluted. Speculation aside, it seems unlikely we'll be seeing 10-nm desktop chips anytime soon.

Comments closed
    • Wirko
    • 3 years ago

    A semi-related observation — Intel’s ARK webpage resisted the low-information-density virus for a long, long time. Not any longer.

    • Stonebender
    • 3 years ago

    People don't realize how difficult a process 10nm has turned out to be. The amount of multi-patterning required is insane. This leads to poor yields, as it is immensely difficult to get good metrology numbers while trying to get everything to align properly. This is why I believe 7nm will be a much easier process: once the critical layers are being manufactured by EUV tools, the amount of multi-patterning will be reduced by quite a bit. It's also why I have a hard time believing Samsung or TSMC when they say they're going to be at 10nm or 7nm or whatever any time soon. These guys haven't even matched Intel's 14nm density yet.

    As for the design of the processors, it appears that all the low hanging fruit has been plucked. It seems like people believe Intel is purposely limiting processor advancement, while in actuality there isn’t really anywhere left to go.

      • Srsly_Bro
      • 3 years ago

      http://www.eetimes.com/document.asp?piddl_msgid=368302&doc_id=1331317&page_number=2

        • Stonebender
        • 3 years ago

        That doesn’t really say much of anything. The Ryzen core is smaller, ok. How many transistors does it have?

          • Srsly_Bro
          • 3 years ago

          I just wanted to post it. I wasn’t trying to say anything.

        • Klimax
        • 3 years ago

        Having no 256-bit unit for AVX, and other similar omissions, will give you that…

    • brucek2
    • 3 years ago

    Physical hardware improvements getting harder? They could simply drop the unwanted but mandatory embedded GPU that goes unused by anyone with a discrete card, and recover all that space for more cores or less cost. Invest some of the billions that would go into a new fab process into tools making it easy for games & application developers to leverage those extra cores. That’d probably bring a much more meaningful improvement than a few more percentage points of gain on the current models.

      • BoilerGamer
      • 3 years ago

      No IGP would be DOA for their biggest and most profitable Core (non-server) chip user base: laptop OEMs, especially in the no-dGPU laptop market.

    • Chrispy_
    • 3 years ago

    Does it have RGB LEDs though?

    • lycium
    • 3 years ago

    Cynically I think the following plan was drawn up at Intel after AMD’s Athlon:

    Plan A – Make lots of big improvements in short timespan to try to kill AMD
    Plan B – Just slow down to match AMD’s improvement pace

    Plan A failed, so now it’s time for Plan B.

    • cygnus1
    • 3 years ago

    So, sounds to me like they’re going to prioritize the next node for their highest margin (Xeon) and highest volume (mobile) parts. Not unreasonable.

    Could that also signal that they might actually be afraid of AMD clawing back some market share, so they're going to use whatever advantage they can to stay ahead in the most lucrative markets?

    • tipoo
    • 3 years ago

    “First we brought you Tick Tock, then Process Architecture Optimization. Now, presenting our new multi-step method, knick-knack paddywhack, Give the dog a bone”.

      • Wirko
      • 3 years ago

      Just like every new release of MS Windows is going to be called Windows 10, and that’s forever, Intel can keep the name Fluid Lake until something new displaces digital/quantum computers.

    • Stochastic
    • 3 years ago

    So what can we expect from CPUs over the next 10 years? Have we finally hit a brick wall where things will only incrementally improve until we transition to fundamentally different technology? Are we going to have to wait decades for this to happen?

      • snowMAN
      • 3 years ago

      It used to take 1-2 years to double CPU performance; recently it has become 5+ (single-threaded).

        • rudimentary_lathe
        • 3 years ago

        Actually, single-threaded performance hasn't come anywhere close to doubling in 5 years. Unless software is taking advantage of new instructions, that is, which hardly any software does.

          • RAGEPRO
          • 3 years ago

          No, not quite double (https://techreport.com/review/31179/intel-core-i7-7700k-kaby-lake-cpu-reviewed/13) — although a 45% improvement is pretty good really.
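
          For the curious, a quick back-of-the-envelope check of that pace, as a minimal C sketch (the ~45% figure and the five-year window come from the exchange above; treating them as exact compound-growth endpoints is my own assumption):

              /* Rough compound-annual-growth math for single-threaded CPU
               * gains. Assumes ~45% total improvement over five years, per
               * the comments above. Build with: cc cagr.c -lm */
              #include <math.h>
              #include <stdio.h>

              int main(void) {
                  double years = 5.0;
                  double observed = pow(1.45, 1.0 / years); /* ~1.077 -> ~7.7%/yr  */
                  double doubling = pow(2.00, 1.0 / years); /* ~1.149 -> ~14.9%/yr */
                  printf("observed: ~%.1f%%/yr vs. doubling pace: ~%.1f%%/yr\n",
                         (observed - 1.0) * 100.0, (doubling - 1.0) * 100.0);
                  return 0;
              }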

      • Pwnstar
      • 3 years ago

      Where have you been the last eight years? “incrementally improve” is the name of the game.

        • Stochastic
        • 3 years ago

        I can’t imagine that things will continue advancing at this glacial rate forever, though. I would hope that in the decades to come there will be breakthroughs that will enable 100X or even 1000X performance increases, at least for certain applications. Or perhaps this will just remain a pipe dream.

    • davidbowser
    • 3 years ago

    I recognize that several commenters are pointing to the limits of the tech, but I just don't buy it. Squeezing out performance and efficiency has been Intel's bread and butter for the last 15 years, but they've just about quit recently. I see this as a direct result of the lack of competition in the PC and server processor markets. I certainly think they're pumping dollars into researching the next big thing, but why would they bother to spend billions on fab upgrades when they can just increase profits by standing pat until AMD releases a compelling bit of tech?

    I think pretty much everyone can agree that AMD has sucked at delivering in a timely fashion on their new processor tech (even when it is compelling at the low-mid range). So I don’t think blaming Intel for AMD’s lack of execution is accurate, but it’s tough to ignore the years of predatory behavior when competition was real. The end result is the same for consumers, regardless of fault, in that Intel will NOT lower prices and we will NOT see any meaningful improvements to what already exists.

      • Goty
      • 3 years ago

      Semiconductor fabrication is hard. I don’t think this is Intel resting on its laurels (at least not entirely) and it’s impressive that they’ve had the R&D might to push this far before running into serious roadblocks. Most other players in the industry started having serious trouble around the 28nm node.

        • mesyn191
        • 3 years ago

        He isn't talking about semiconductor fab tech per se; he is talking about getting more performance and efficiency.

        You don't necessarily need an improvement in fab tech to improve performance and efficiency. Design counts for a whole lot too, as we can see with Bulldozer or NetBurst.

      • frenchy2k1
      • 3 years ago

      You are seriously underestimating the difficulty of improving semiconductor manufacturing.
      To get an idea of the industry, look at the four major players: Intel, TSMC, GloFo, and Samsung.
      Currently, 16/14nm tech is in use.
      The next steps are 10nm and/or 7nm. However, the industry is faced with huge difficulties and tough decisions:
      – go with quad patterning in immersion, a slight improvement on the current triple patterning in immersion used for 14nm, with increased costs (you now need 4 passes instead of 3) and big restrictions (lines are thinner in one dimension, and chip designs need to accommodate this). This only works for 10nm anyway, BTW, and at an already-significant expense.
      – wait for extreme UV (EUV) tech to be ready and working, at even greater expense.

      What Intel is saying here is: we bet on EUV and it did not pan out, and we cannot switch our full production to it. GloFo has already announced they will skip 10nm and go directly to 7nm. Samsung is going to 10nm, but this might be a restricted node (mobile only, like 20nm maybe?).

      Intel cannot beat the physics of it all. Semiconductor manufacturing is one of the most advanced fields in the world, and it requires lots of people, discoveries, and improvements to align properly.

        • mesyn191
        • 3 years ago

        He isn't talking about semiconductor fab tech per se; he is talking about getting more performance and efficiency.

        You don't necessarily need an improvement in fab tech to improve performance and efficiency. Design counts for a whole lot too, as we can see with Bulldozer or NetBurst.

          • Klimax
          • 3 years ago

          Doesn’t matter. Still wrong.

            • mesyn191
            • 3 years ago

            If he is talking about something else, then that is incorrect.

            Fab and process improvements are not the same thing as design efficiency and performance improvements.

            Yes they’re related but that still doesn’t mean they’re the same thing.

      • Klimax
      • 3 years ago

      Lack of competition is a red herring. Monopoly or anything like that is simply irrelevant.

      We are not talking about some crappy fashion item or short-lived stuff like smartphones. CPUs can keep working for decades, and if there is no need, there are simply no upgrades and thus NO money for Intel. Only replacements of broken stuff. (Far smaller.)

      If they had some "magical" tech that would get them a nice jump in performance, they would deploy it ASAP. You can't fund R&D without money, and not selling CPUs because there's no good improvement won't bring any in.

        • davidbowser
        • 3 years ago

        TL;DR – my point that Intel is not in a big rush to release innovative desktop procs stands, and the server space is in pretty much the same boat, as Intel has already stated that growth is strong there.

        Believing Intel is a great company that makes great products (both of which I would say are true) and sticking one's head in the sand about the behavior of monopolies are two things that don't have to go together, though you seem to be doing both. Are you stating that Intel HAS HAD upgrade-worthy improvements since the i7 debut? Maybe for gamers, but that is obviously only a slice of the PC pie.

        TR's benchmarks have shown that there have been modest (at best) clock-for-clock improvements to Intel chips since Sandy Bridge released in 2011, and frankly most people can and do work fine with first-gen Core i7 procs released in 2008. That is 9 YEARS of die shrinks and decent power savings, but prices during that time have gone up (also mentioned on TR before), the last few iterations have seen little to no improvement, and yet the beat goes on.

        Coincidentally (not really), AMD's last serious competition for the i7 (at least on price/performance) was released in 2008.

        I'm not sure how you came to the conclusion that Intel was not selling any CPUs because their upgrades were not compelling, but during that time of nearly ZERO reason to upgrade, Intel's yearly profits have been in the $10B range. Their net margins have been in the 10-25% range with only a couple quarters of dips over the last 10 years. Even when you break down the numbers for their datacenter and desktop segments, they are still MASSIVELY profitable.

        In summary, Intel does NOT have to release anything really new in the desktop or server space because the alternatives are not compelling right now. Intel is obviously investing lots in R&D, but has shifted the dollars and the bodies away from the PC/Server PC space into mobile devices and embedded tech in the IoT business unit. It’s good business for them, but the net result for PC consumers is still the same.

          • Klimax
          • 3 years ago

          Sorry, complete miss. A large number of words won't fix that. Maybe one day you will get it…

            • kamikaziechameleon
            • 3 years ago

            Your lack of words or evidence won’t fix it either.

      • Vulk
      • 3 years ago

      The chip optimizations you are talking about have to be completely transparent, though. Intel has added all sorts of optimizations and improved its performance immensely. The fact that you think there is more they can do is actually proof of the fundamental problem I'm trying to highlight. I've seen over 100-fold increases in performance from the Transactional Memory Extensions (TSX) and the new fixed-function hardware in the chips, especially with AVX and AES instructions. But I can only use them in academic and data software where I know I can target modern CPUs. I can't do that in games or general-use software, because I have to assume that even enthusiasts can have 5-year-old CPUs and still apply the term to themselves. The only other place I can really leverage everything a modern CPU gives me is on consoles, and to a lesser extent the iPhone. Everywhere else, you just can't assume the user has it, so you can't use it, so there's really very little point in Intel even putting it in the chip in the first place. It was the same issue for AMD with their heterogeneous architecture.

      Any time you talk about software having to make assumptions about the hardware it is running on to leverage a programming feature… It’s not going to happen until a massive critical mass of users can leverage it. That’s just economics.

      So although you aren’t wrong… You kind of are. For everything Intel COULD do… They can’t, because they’re trapped by their own history, and the market place they are selling into as much as they are by physics.

        • bhtooefr
        • 3 years ago

        Except you can have multiple code paths based on what instructions are available, which is a technique that’s been used for a long time.

        In fact, Intel got sued by AMD for not enabling the SSE code path on AMD processors in their compilers.
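
        A minimal sketch of that multiple-code-paths technique, assuming GCC or Clang on x86 (__builtin_cpu_supports is a real builtin in those compilers; the transform functions here are illustrative placeholders, not from any actual codebase):

            /* Runtime instruction-set dispatch: choose a code path based on
             * what the CPU reports via CPUID (cached at program startup). */
            #include <stdio.h>

            static void transform_scalar(void) { puts("scalar fallback path"); }
            static void transform_avx2(void)   { puts("AVX2-optimized path"); }

            int main(void) {
                if (__builtin_cpu_supports("avx2"))
                    transform_avx2();   /* fast path on newer CPUs */
                else
                    transform_scalar(); /* runs on any x86-64 chip */
                return 0;
            }

        Compilers can also generate such per-ISA variants automatically (GCC's target_clones function attribute, for example), which keeps a single source function while still shipping multiple code paths.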

      • riblitz
      • 3 years ago

      I think Intel is and has been quietly positioning itself for a big move. Over the years this has been how they operate, which is why I’m leaning towards this belief. But, I don’t need to elaborate on that right now.

      It's a common perception that, after losing untold hordes of cash and exiting the phone market, Intel seems to be resting on its laurels, riding out its share of the collective semiconductor market. But Intel has not sold any of its mobile or signal-processing patents. That is very telling to me.

      Now, the rumor mill is churning out tales of Intel putting a new "x86"-ish architecture in place. For the first time since the 8086 (circa 1978), Intel will be breaking backwards compatibility at the hardware level. It's kind of mind-boggling to me, imagining code from 1978 needing to run on the metal in a case where virtualization wouldn't get it done.

      I'm speculating that this will cut a significant amount of dead weight from their processors. This is my best guess here, but I have watched the "Empire" strike back a half dozen times in my life. When they do, they don't just take market share, they destroy consumer confidence in their competitors.

      So, I think Intel is coming to phones. I think when they put a full quad-bus proc in a phone, with the same coddling in the architecture for Google that Microsoft has enjoyed all these years, they will pretty much have their boots on the backs of the heads of ARM & Friends, in 2″ of water, right next to AMD, nVidia (in the GPGPU market), Sun's SPARC, and Texas Instruments (lol, anyone feel free to add to this list).

      All those annoying little things your current phone does that you just put up with? You won't have to with Intel; it'll just work better all over, in every aspect, consistently. That's why I buy Intel, why I buy nVidia. I don't wanna spend my time hacking around any more issues than I have to.

      I think the days of coming home, docking your phone to your keyboard, mouse and video and computing away whether on Microsoft or Google/GNU Linux are right around the corner. I think that’s what Intel is quietly aiming for now.

      Tear me down boys!

      • Eversor
      • 3 years ago

      Not just lack of competition, Intel has been catching a ride on the GPU refreshes last year. Given massive sales, they really aren’t being pushed in any way to innovate right now. Hopefully Zen will be what it takes to wake them up.

    • cmrcmk
    • 3 years ago

    Judging by the latest TR poll and one slightly older (about your PC's CPU generation), most of us aren't chomping at the bit to upgrade anyway. Personally, I'm not that interested in replacing my Haswell system until building an Optane-booting system is "affordable".

    • derFunkenstein
    • 3 years ago

    Intel also said (http://www.eteknix.com/intel-8th-gen-14nm-15-faster/) that 8th gen will do ">15%" better in SYSmark, but that just might be clock speed bumps or similar. Baby Lake mobile chips saw a decent clock speed bump, at least, so that probably explains the first arrow on that slide.

      • Flying Fox
      • 3 years ago

      "*Baby* Lake" (emphasis mine)

      What gen is this on the roadmap? 😛

        • derFunkenstein
        • 3 years ago

        Safari does that to me from time to time. I just go with it at this point.

          • UberGerbil
          • 3 years ago

          And here I was thinking it was slang for some Celeron or Atom chip I’d missed an announcement for.

          • tipoo
          • 3 years ago

          It’s not just me that thinks their autocorrect went completely off the rails recently, right? I’ll be one letter off from a common word and it’ll pick something completely different unrelated to the context, and annoyingly keep a capital if it corrected to a name when I go back to fix it.

          But…Baby Lake is a good name for Kaby Lake anyways. Baby steps.

            • derFunkenstein
            • 3 years ago

            That’s basically my experience too. Apple autocorrect is getting weird.

        • JustAnEngineer
        • 3 years ago

        Krogoth Lake
        https://techreport.com/news/31190/intel-unveils-its-full-range-of-desktop-and-laptop-kaby-lake-cpus?post=1015159#1015159

    • Mr. Robot
    • 3 years ago

    So this is just confirmation of Coffee Lake, which has been rumored for a while now, right? Other than that, the only new (or at least new to me) tidbit is that Cannonlake will also be pushed in enterprise and not just mobile.

    Thus, we're looking at Coffee Lake @ 14nm and Cannonlake @ 10nm co-existing in mostly different use cases, starting at the end of this year, until they both get replaced, probably about a year later, by a 10nm successor around Q4 2018 or maybe Q1 2019, given all the trouble they've obviously been having.

    • raddude9
    • 3 years ago

    So we’ve gone from “Tick-Tock” to “Process-Architecture-Optimization” already, what’s this new development model called?

      • NTMBK
      • 3 years ago

      Process-Architecture-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization-Optimization….

      • Anonymous Coward
      • 3 years ago

      Tick-Tock-Optimization-Spin

        • UberGerbil
        • 3 years ago

        I think we have a winner.

        • lycium
        • 3 years ago

        I remember when Intel marketed the Pentium III's SSE instructions as improving your internet speed…

          • Klimax
          • 3 years ago

          I sort of remember those ads. I don’t think it was about connection speed. (Although efficiently moving data in memory can improve it)

      • MrDweezil
      • 3 years ago

      “Process, Architecture, sit-on-hands-for-7-10-years-until-AMD-catches-up”

        • freebird
        • 3 years ago

        Yep, Intel burnt billions of $$$ just to keep an 18-24 month lead on all the other semiconductor manufacturers, especially when AMD owned their own plants in Dresden and was planning the ones in NY, only for those to be turned over to GlobalFoundries. People forget that AMD's chips ALWAYS had to compete with the disadvantage of being at least one node behind... although SOI helped with some of that at times, they still had to fight an uphill battle.

        And then there was the time when AMD was making inroads with NOR memory and Intel decided to jump into the NAND business to help drive prices down fast. The end effect was much of the market going the NAND route and NOR flash memory drying up, turning AMD's significant NOR profits into losses (they sold part of the business off to Toshiba, I think, before dumping it entirely). That was one of the biggest blows Intel inflicted on AMD to prevent them from becoming a "REAL" competitor back in the early 2000s, along with, of course, their anti-competitive contracts to lock AMD out of some major brands.

        But physics is now closing the lead, no matter how many BILLIONS of $$$ you burn...
        Things will get very interesting when everyone is competing on a near-"equal" process node.

          • chuckula
          • 3 years ago

          "Intel burnt billions of $$$ just to keep an 18-24 month lead on all the other semiconductor manufacturers, especially when AMD owned their own plants in Dresden and was planning the ones in NY, only for those to be turned over to GlobalFoundries."

          Cool story bro. Flat out wrong in many respects, other than Intel spending a lot of money to develop new fabs, but cool story.

          While it's true that since 2011 or so AMD/GloFo have clearly (and basically intentionally) dropped the ball on being a leader in semiconductors, back in the supposedly "good ol' days" AMD and Intel were usually relatively close to each other in fab technology. Cases in point, with the relevant TR articles you might want to read:

          1. 90nm process? Debuted for AMD in 2004 in basically the same time frame as Intel debuting 90nm. If anything, most people thought AMD had a fab advantage on 90nm.
          https://techreport.com/review/7134/amd-athlon-64-3500-processor

          2. The first "true quad core" Phenom launched in 2007 on a 65nm process -- a silicon-on-insulator 65nm process, I might add, which according to the fanboys was massively better than Intel's non-SOI process. On the day Phenom launched, Intel was only producing 65nm bulk silicon. What unfair process advantage?
          https://techreport.com/review_full/13176/amd-quad-core-opteron-2300-processors

          3. Bulldozer... remember, that chip that AMD is finally replacing next month? Launched in 2011 on a 32nm process to compete with Sandy Bridge, which launched in 2011 and was made on Intel's cutting-edge... 32nm process. Incidentally, Bulldozer wasn't even AMD's first 32nm part; Llano appeared right after Sandy Bridge launched. Once again, AMD was using an "SOI" 32nm process that was supposed to be superior to Intel's.
          https://techreport.com/review_full/21813/amd-fx-8150-bulldozer-processor

          So it's true that AMD has intentionally decided not to invest in fabrication innovation over the last 10 years or so. But that doesn't mean AMD was always the "poor little victim" against Intel. Back in the day, AMD usually had parity or was even ahead in the fab race, which sometimes led to victories like the 2004-2006 Athlon 64 era.

            • BurntMyBacon
            • 3 years ago

            I've compiled a list of dates (according to Wikipedia) of the first processor architectures launched by AMD and Intel on each process node since 90nm. Let me know (preferably with a source) if I copied a date incorrectly.

            Intel
            90nm – Feb. 2004 (Prescott)
            65nm – Jan. 2006 (Cedar Mill)
            45nm – Jan. 2008 (Penryn)
            32nm – Jan. 2011 (Sandy Bridge)
            22nm – Apr. 2012 (Ivy Bridge)
            14nm – Sep. 2014 (Broadwell)

            AMD
            90nm – Nov. 2004 (Winchester)
            65nm – Feb. 2007 (Lima)
            45nm – Jan. 2009 (Deneb)
            32nm – Sep. 2011 (Bulldozer)
            22nm – N/A
            14nm – Mar. 2017-Est. (Zen)

            Three things of note:
            1) Not once did Intel reach the next-gen node before AMD reached the equivalent node until 14nm.
            2) Not once did AMD beat Intel to a next-gen node.
            3) For most nodes, AMD spent as much or more time competing against Intel’s next-gen node as the equivalent node. (90nm being the notable exception)

            Many don’t realize that AMD’s first 65nm chip was Lima (Athlon64) and not Agena (Phenom). Agena didn’t launch until Nov. 2007, notably, only two months prior to Intel’s 45nm Penryn.

          • Klimax
          • 3 years ago

          NOR died because it wasn't competitive on many fronts, not just cost because of Intel. Even without Intel, we would have ended up with NAND dominating. NOR, BTW, still exists, but it's in a niche where byte addressing and things like in-place execution matter more.

      • blastdoor
      • 3 years ago

      I think the phrase “you do the hokey pokey” will be in there somewhere.

      • willmore
      • 3 years ago

      Process-Architecture-Optimization-Stagnation

      • EndlessWaves
      • 3 years ago

      Abort, retry, ignore.

    • ronch
    • 3 years ago

    I think it wouldn't be far-fetched to say that we've reached about 98% of what CPU technology can ever be at this point. The writing has been on the wall for a while now. We've used pretty much all the tricks in the book to push IPC, we've been stuck at the same clock speeds for many years now, and process tech has been taking more and more time to get to the next node. We're almost at the top of the mountain. Unless a breakthrough happens, that's it. No more looking forward to faster CPUs.

    Edit – let me add a little more to clarify. In decades past, from the inception of PCs and x86 CPUs, performance and efficiency grew by leaps and bounds. If you graph those gains and follow them to where we are right now, and to what we get from one generation to the next, it becomes obvious that progress has been slowing down and will continue to slow as it becomes increasingly difficult to move the ball forward. While that makes PC upgrades less exciting, I also feel somewhat privileged to be using today's highly advanced and evolved computers. I guess this is partly why I totally appreciate my FX CPU. At the same time, this slowed progress isn't so bad, because we're at a point where we regular folks don't need more performance.

      • cmrcmk
      • 3 years ago

      It might be fair to say we're approaching a mature state with current manufacturing methods, but we know Intel/GloFo/TSMC/Samsung/IBM are all working on fundamentally different chemical and physical approaches for how to flip bits. It's just a matter of time until one of these condenses from vaporware into a real product and starts a new era of refinement and improvement.

      Now whether or not that is relevant to most use cases, well that’s a different question. Big systems always want more, but I think it’s fair to say that for most (99%+) client use cases, power consumption against a battery is the only real concern left. Even a lot of systems which used to be considered big are now considered modest with current hardware.

      • blastdoor
      • 3 years ago

      I think that’s too pessimistic.

      The thing that has become particularly hard to do is increase the speed at which a sequence of dependent instructions executes. But even there, we're not 98% of the way to the endpoint. We've got three die shrinks in the pipeline (10nm, 7nm, 5nm). There are tradeoffs between power, performance, and area. If Intel (or whoever) were so inclined, they could hold power and area constant for each die shrink, plowing everything into higher clock speeds. While that might seem unrealistic, consider the potential for heterogeneous cores: an SoC could have one core with low transistor density that runs at a very high clock speed, while other cores could be denser and run at lower clock speeds.

      Thinking more broadly, it seems to me that the demands for greater computing performance are not in the area of executing a sequence of dependent instructions. A big part of the future will be AI, and AI benefits greatly from parallel computing. Even if we can’t get past 5nm, die sizes can continue to increase as yields improve. Once the fixed investment costs of 5nm are recovered and yields have improved, then 3d stacking can kick in.

      I bet we’ve got at least 20 years of continued progress. It’s just that the progress will be a little bumpy, not the smooth tick-tock cadence we’ve been accustomed to.

      • rudimentary_lathe
      • 3 years ago

      Agree with others, that’s way too pessimistic a view. We may be approaching the limits of silicon, but there remain other promising chemical stews to explore. Bring on the 100 GHz graphene CPUs! I’d agree though that we may have plateaued for the foreseeable future.

    • chuckula
    • 3 years ago

    "Alternatively, considering the statement regarding 'fluid' process tech usage, we may be seeing some mixture of 14-nm and 10-nm process technologies across various product segments."

    You will. While it didn't get any attention here, Krzanich showed off a notebook running Cannonlake silicon at CES:
    http://hothardware.com/news/intel-demos-10nm-cannon-lake-processor-promises-2017-delivery-window

    The mobile and enterprise segments will start to use 10nm later this year, and the mainstream desktop will go to Coffee Lake (with six cores, which actually is interesting) until Cannonlake gradually takes over.

      • raddude9
      • 3 years ago

      If you find 6 cores interesting wait until you find out about this new chip AMD is bringing out…

        • chuckula
        • 3 years ago

        The one that isn’t launching with 6 cores you mean?

          • raddude9
          • 3 years ago

          Who knows, they did say there would be an 8-core version though, and that’s a bigger number than 6!

          • Generic
          • 3 years ago

          Is Ryzen not simply two Phenom II X3 cores taped together!?

          I am *sorely* misinformed then.

            • Generic
            • 3 years ago

            Note to self:

            TR still *very* upset about "Phenom". #too_soon

      • NTMBK
      • 3 years ago

      Great, they got a single working chip.
