Intel delays 10-nm process, third 14-nm CPU to follow Skylake

During the investor conference call for Intel's Q2 financial report, CEO Brian Krzanich confirmed recent rumors that the chipmaker's 10-nm manufacturing process has been delayed. ZDNet has a partial transcript in which Krzanich lauds Moore's Law while stating that Intel's recent node transitions have "signaled that our cadence today is closer to two-and-a-half years than two." 

A two-and-a-half-year cadence would result in a gap in Intel's CPU plans. As a result, ZDNet claims Intel is working on a third 14-nm CPU family after Broadwell and Skylake, supposedly carrying the name Kaby Lake. The new processor will apparently be built on Skylake's foundation, but will also include "key performance enhancements," according to Krzanich. The 10-nm Cannon Lake family will follow sometime in the second half of 2017.

Ben Funk

Sega nerd and guitar lover

Comments closed
    • Phartindust
    • 4 years ago

    This should make things interesting next year when Zen comes online at 14nm. It’s been a while since AMD and Intel were on the same process node.

    • maxxcool
    • 4 years ago

    I have one 4GHz AMD testbox left out of 5 … a 45nm Thuban … holy crap it’s old.

    • chuckula
    • 4 years ago

    Time for Intel marketing to come to the rescue!

    Tick-Tock will now be referred to as Tic-Tac-Toe!

      • DrCR
      • 4 years ago

      And the Toe, based on keratin instead of silicon, will be branded as an organic version of the inorganic Tic and Tac releases?

      • ronch
      • 4 years ago

      Or maybe they’ll start crashing AMD’s parties too!

    • Ninjitsu
    • 4 years ago

    AnandTech has a pretty good write up on this.
    [quote<] Tick Tock On The Rocks: Intel Delays 10nm, Adds 3rd Gen 14nm Core Product "Kaby Lake" [/quote<] [url<]http://www.anandtech.com/show/9447/intel-10nm-and-kaby-lake[/url<]

      • chuckula
      • 4 years ago

      From what I can gather, Kaby Lake is definitely not a brand-new core, but it will likely be a bigger change than what we saw with Devil’s Canyon, which was the same Haswell core with higher clocks.

        • Ninjitsu
        • 4 years ago

        Yeah maybe we see Intel test its fancy stuff with Kaby Lake (morph core?).

        (BTW is Kaby Lake a geographic location/feature? Weird name).

      • blastdoor
      • 4 years ago

      I’m waiting for Wobegone Lake, where all the yields are above average.

        • Ninjitsu
        • 4 years ago

        Those days are gone, I suspect!

    • anotherengineer
    • 4 years ago

    hmmmmmmmmmmm

    [url<]http://www.techpowerup.com/214342/moores-law-buckles-as-intels-tick-tock-cycle-slows-down.html[/url<]

    edit - hmmm, so once they start hitting the limits of feature size, will the next logical steps be improvements within that size, such as density, leakage, etc.? And then the move to other things?

      • chuckula
      • 4 years ago

      While linear dimensional shrinks are definitely not going to last forever (although there’s still some headroom), there are wide areas for improvement that have largely been ignored, because linear dimensional shrinks have been the big driver for the past 50 years.

      Big improvements are still possible in materials, 3D transistor integration, etc. etc.
      They aren’t necessarily “easy” to do, but then again getting to 10nm ain’t easy either.

        • anotherengineer
        • 4 years ago

        ahh, interesting stuff. Always challenging to find a way through a wall once you come to one.

        Check this out, someone hit 22nm !! 😉

        [url<]http://www.techpowerup.com/214264/globalfoundries-launches-industrys-first-22nm-fd-soi-technology-platform.html[/url<]

          • NoOne ButMe
          • 4 years ago

          Eh… It’s a 14nm FEOL and a 28nm BEOL. So they call it 22nm… Yeah.

    • hasseb64
    • 4 years ago

    Hello Smartphone! Goodbye DIY desktop!

    • Stonebender
    • 4 years ago

    I find it interesting that Intel already has a name for the interim processor and a time frame for its release. The 10nm delay almost feels intentional, allowing for more time to milk 14nm. Also, I found BK’s statement of Intel’s intention to make it back to a two-year cadence strange. How will this play out once we’re beyond 7nm?

      • chuckula
      • 4 years ago

      David Kanter said that he thought the big jump to III-V materials (gallium arsenide and antimonide compounds) in Intel’s transistors would happen at 10nm. I initially thought that was too soon and that the jump wouldn’t happen until 7nm.

      However, with a big delay being introduced well in advance (not a last minute delay due to yield issues), maybe DK is right (as he usually is) and 10nm is the node where Intel dives in to new transistor materials in a serious way.

      • NoOne ButMe
      • 4 years ago

      There are articles talking about how tick-tock (YES TOCK IPHONE STOP MAKING TOCK INTO DIFFERENT WORDS) was dead at Intel 1.5-2 years ago. If not earlier.

      I still doubt that Kaby is 14nm; Intel’s names tend to come out 3-4 years before commercial launch.

        • Stonebender
        • 4 years ago

        Hence my initial statement. It seems obvious that Kaby was in the works for quite a while, leading one to believe one of two things: either Intel ran into a roadblock with 10nm quite a while ago and began preparing Kaby as a stopgap, or they intended 10nm to be pushed out all along.

    • maxxcool
    • 4 years ago

    Damn electrons being all rowdy …

    • NTMBK
    • 4 years ago

    “our cadence today is closer to two-and-a-half years than two.”

    What a wonderfully open ended statement. Literally the only thing it means is that the cadence > 2.25 years. It could take 50 years for 10nm to get here, and that statement would still hold true. Great piece of CEO speak right there.
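
    (Spelled out, “closer to two-and-a-half years than two” is just the inequality |c - 2.5| < |c - 2|, which solves to c > (2 + 2.5)/2 = 2.25 years. A lower bound with no upper bound at all.)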

      • Krogoth
      • 4 years ago

      “Moore’s Observation” (it is really just a corollary of exponential growth) has already been invalidated. The next few process nodes will be the final nails in its coffin.

    • Krogoth
    • 4 years ago

    This is the beginning of the end for digital computing on the hardware side. The laws of physics and economics are catching up fast.

    The 10nm or 7nm process might be the last node suitable for mass production. Going smaller will only be viable for certain niches that are willing to spend the R&D for it.

      • shank15217
      • 4 years ago

      These types of predictions tend to not only get broken, but shattered into a thousand pieces.

        • Krogoth
        • 4 years ago

        Not this time around. We are getting close to known theoretical limits on how small we can shrink a transistor and an IC.

        The semiconductor industry as a whole has already seen the writing on the wall. The smaller players have already jumped ship and sold off their manufacturing assets to the bigger players who still have the capital and R&D to continue.

        Intel is at the forefront of manufacturing tech. If they are having problems now, then the other big players will have problems as well.

        The gravy train was going to end sooner or later. It is absolutely stunning how far we managed to go within a span of 50 years. Digital computers used to fill entire buildings and required an insane amount of power and cooling to operate. Their computing power was utterly pathetic compared to what is currently considered the slowest, cheapest embedded system out there, which consumes around 1-2W of electricity and is passively cooled.

        Interestingly enough, this will cause a surge in R&D for alternative computing platforms.

      • Thresher
      • 4 years ago

      There will always be efficiencies that can be wrung out or optimizations that can be made for specific types of computing, so regular silicon still has a ways to go. I do think, however, that other types of computing will supersede it at some point, potentially soon enough to keep Moore’s Law going.

        • Krogoth
        • 4 years ago

        Moore’s Observation has nothing to do with computing.

        It is an observation Moore made that IC transistor density doubles roughly every 18 months, with the manufacturing cost per transistor decreasing in proportion.

        Neither of these has been true for the last several years. The rate of density scaling has slowed down and the manufacturing cost has been increasing. The move to SiGe solutions is going to ensure this, since germanium and other semiconducting materials are more expensive than plain silicon.
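
        Written as a formula, the observation is plain exponential growth with a doubling period T: N(t) = N0 * 2^(t/T), where T was originally ~18 months and later ~24. The slowdown everyone is describing is simply T stretching out.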

      • blastdoor
      • 4 years ago

      [quote<]This is the beginning of the end for digital computing on the hardware side[/quote<] Yeah, because if I can't buy a computer with a CPU on a new process node every two years, I don't want to use a computer at all.

    • Ninjitsu
    • 4 years ago

    So TSMC has one or two 20-nm chips, and Samsung has one “14nm” chip so far? Yeah, I doubt they’ll beat Intel to “10-nm” – though Samsung may just achieve parity.

    Depends on their profits from their current nodes, though – all that equipment and R&D has to be amortized I suppose. And that depends on volume, and I don’t think they have as much there.

    Samsung will likely sit on 14nm till 2017 at the very least, with higher-volume production next year. TSMC still needs to put out enough 20nm chips, so unless Nvidia/AMD have put in enough money, we’ll probably not see 16nm from TSMC till late next year, when Apple will want another smartphone chip. So no 10nm from TSMC till late 2018, from what I can tell.

    (Of course I could be totally wrong and Apple could get 16nm this year, but I don’t know how financially viable it will be for TSMC).

      • Zizy
      • 4 years ago

      TSMC has at least 2 20nm chips – A8 (+A8x) and SD 810 (+808). Mediatek will have stuff on 20nm as well but afaik no device uses that yet.
      As for beating them to 10nm, it depends on which chip class. I wouldn’t be too surprised to see Samsung overtake Intel with an early low-power 10nm in late 2016/early 2017 (Note 6 or S8).

        • phileasfogg
        • 4 years ago

        Nvidia’s Tegra X1 is also built on TSMC 20nm. So, at least three shipping products on that node.

    • HisDivineOrder
    • 4 years ago

    I hope Intel won’t be surprised when people continue to sit on their current CPUs until they get around to making octa-core chips mainstream. Because my 3570K is still looking pretty damn capable atm, and they aren’t using their die shrinks to make mainstream chips with more cores yet.

    Perhaps they’re not needed. Which, I suppose, is also true of CPU upgrades for most users.

      • VincentHanna
      • 4 years ago

      There is a diminishing return to adding more cores because of the lack of parallelizable code for CPU workloads. DX12 will help with that to a degree… maybe, but really there just isn’t a demand for it.

      What we are really holding out for is the magic 5GHz max clock at the new, improved power and PCC levels.

    • BlackDove
    • 4 years ago

    Isn’t this Kaby Lake, which has been known about for weeks now?

      • chuckula
      • 4 years ago

      Assuming “Kaby Lake” is the real name, then yes.
      When Intel says “performance optimized,” that can mean several things, from a new stepping that clocks slightly better to a new IGP architecture (whoopy doo for most people here).

      • NoOne ButMe
      • 4 years ago

      It depends on who you ask… I’m saying no, based on when we learned the name and also the timelines involved. Generally, Intel’s architecture names come out 3-4 years before the product, and I don’t see them launching a 14nm part in 2018-2019.

      • derFunkenstein
      • 4 years ago

      It’s the first time Intel has publicly acknowledged it, which IMO makes it a pretty big deal. Also, the rumors link is our story on Kaby Lake.

    • Billstevens
    • 4 years ago

    No competition, no rush… I wonder if mobile CPUs will ever reach parity and give Intel a competitor.

      • NoOne ButMe
      • 4 years ago

      Reword? Intel isn’t competitive with mobile SoCs… You mean them scaling up to high power?

    • blastdoor
    • 4 years ago

    Moore’s Law is as much about economics as it is about science and technology. It used to be the case that the profit-maximizing strategy for Intel was to come out with a new process every two years. That appears to no longer be the case.

    At the same time, the economics faced by TSMC and Samsung have changed, but in the opposite direction.

    I think some people around here are going to be in for a rude awakening.

      • Leader952
      • 4 years ago

      [quote<]Moore's Law is as much about economics as it is about science and technology. It used to be the case that the profit-maximizing strategy for Intel was to come out with a [u<]new process every two years[/u<]. [/quote<] It used to be every 18 months. Then that stretched out to every two years. Now it looks like it will be 2.5 years.
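
      A rough sketch of what that stretch costs in compound terms, assuming (purely for illustration) one full density doubling per cadence:

          # Implied annual density growth for each cadence length,
          # assuming one doubling of transistor density per cadence.
          for cadence_years in (1.5, 2.0, 2.5):
              annual = 2 ** (1 / cadence_years)
              print(f"{cadence_years:.1f}-year cadence -> {annual:.2f}x density per year")

      That works out to 1.59x per year at 18 months, 1.41x at two years, and 1.32x at 2.5 years.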

    • tviceman
    • 4 years ago

    Intel will likely retain a manufacturing lead for the foreseeable future, but the gap will close enough that performance differences will be much harder to attribute to a manufacturing advantage. At that point, say the node after TSMC’s 16nm FF+, I think it will be EXTREMELY interesting to compare ARM’s best perf/W to Intel’s best perf/W in the <10 watt space.

      • mesyn191
      • 4 years ago

      The thing you have to remember is that if Intel is having these issues, then so will TSMC and the others, so they’re going to get delayed too. Possibly they’ll see even greater delays, at that.

      The really interesting situation will be when no one, not even Intel, can get any more straightforward process shrinks. That looks to happen after 2020 right now, so we’ve got at least a few more years to go.

        • w76
        • 4 years ago

        Yes, really looking forward to seeing how that plays out. Contrary to all the people who seem happy to prematurely announce the death of Intel’s lead, I think Intel has laid more groundwork than anyone else, by far, on the technologies and strategies needed to keep improving compute performance beyond the death of traditional chip-making techniques.

        Not that companies and individuals throughout history haven’t snatched defeat from the jaws of victory, but it’s Intel’s game to lose and will be for at least a decade.

      • brucethemoose
      • 4 years ago

      All the different architectures at that node will make it even more interesting.

      You’ll have a big, highly refined Intel core that’s now small enough to compete in that envelope, and you’ll have the smaller Atom that’s specifically designed for that envelope. You’ll have an enormous Apple core, a big Qualcomm core, a biggish ARM big.LITTLE design, and if we’re lucky, a custom Exynos, an updated Denver, and a competitive Zen. On top of that, you’ll have different memory technologies like DDR4 and Wide I/O competing with each other.

      • MarkG509
      • 4 years ago

      Back in the day when 45nm was just starting to sample, the “old guard” at work would say that Intel needed to kill everyone else before 32nm or else they’d all have a chance to catch up.

      How right they were still amazes me.

        • chuckula
        • 4 years ago

        ???

        When Intel was sampling 45nm, AMD was within a year or so of their lead process.
        People still seem to forget that way back in 2011, when Intel supposedly had this “amazing” process lead, they introduced Sandy Bridge on 32nm… about 3 months before AMD introduced Llano on 32nm.

        Fast forward to 2015, and Intel is about to launch its second-generation high-performance CPU on a 14nm FinFET process while its nearest competitor, Samsung, is still pushing the limits with a “14nm” process that’s more accurately called “20nm but with FinFETs.” Oh, and Samsung’s parts are for smartphones. A nice niche, but it sure ain’t high-performance computing.

        Don’t even get me started on Knights Landing.

          • derFunkenstein
          • 4 years ago

          Knots Landing was ok but I preferred Twin Peaks. It was quirky.

      • TopHatKiller
      • 4 years ago

      Dear TV-repair-man: how would you go about comparing custom, semi-custom, and bought-off-the-peg ARM designs with any of the evil empire’s? Narrow down any difference between the fab processes used and the architectural differences between the two? No chance.
      Intel’s ‘significant’ fabbing lead is vanishing as I speak. Within a couple’a years the-evil-empire will have no advantage at all. Too dear, too complex: every fab company in the world is going to have to rely on shared science and tech.

        • NoOne ButMe
        • 4 years ago

        Hi! Intel has a HUGE advantage that no one outside of IBM and Samsung publicly has.

        They design both the process and the chips made on it. They can tweak their process to exactly what they want their chips to aim at.

        They’re also the only company outside of IBM designing processes specifically targeted at high-performance CPUs, last I checked.

      • NoOne ButMe
      • 4 years ago

      Intel will lead in the high performance space 100% if they move to FD-SOI. Probably 99% if they don’t.

        • TopHatKiller
        • 4 years ago

          Forgive me Sir – but are you nuts?! Intel has previously stated their loathing for full-depth: while AMD and GloFo have some significant technical resources to manufacture SOIs in the post-28nm gen, Intel seems to consider them abhorrent.

          Oh. Are you joking? Sorry, I must’ve missed it. I’ll slap myself.

          • NoOne ButMe
          • 4 years ago

          Fully depleted SOI…?

            Intel doesn’t like it because they have to pay licensing fees. Same for pretty much everyone else. I believe Intel will move to FD-SOI because it gets you very good performance and power gains and is cheaper than shrinking nodes at this point.

            I think everyone will eventually make their 14nm SOI; for Intel it might be 10nm. Whichever process of theirs has the best transistor/cost.

          And, if you don’t use a technology and a competitor does, what are you going to say about it? 😉 “not needed” is what Intel has said about SOI (I believe that was specifically PD-SOI, if it matters) in the past.

            • TopHatKiller
            • 4 years ago

            “Fully depleted” it is. Thanks for the correction.

    • NoOne ButMe
    • 4 years ago

    Completely expected. With Intel’s R&D staying flat over the last 4-5 years and TSMC, Samsung, and GloFo growing, Intel will slow down and everyone else will catch up.

    I’m looking for Intel’s 10nm node in early 2018 for commercially shipping products that are not ultra-high cost.

      • brucethemoose
      • 4 years ago

      [quote<] everyone else will catch up [/quote<] Maybe... But not anytime soon. Let's say, by some miracle, TSMC/Samsung aren't delayed and start shipping 14nm stuff before late 2016. And let's say Intel is delayed even more, and 10nm is pushed to 2018. I'll eat my shoe if Samsung/TSMC move from 14nm->10nm that fast and beat Intel.

        • blastdoor
        • 4 years ago

        If you predict that the sun will come up tomorrow based solely on the fact that it came up yesterday, your prediction will be correct on every day except for the one that really counts.

        • NoOne ButMe
        • 4 years ago

        “Catch up” not “pull ahead”.

        14nm is shipping in 2015. 10nm? I expect 2017 for everyone. I don’t expect it to be viable outside of ultra-high-margin spaces (where a single chip can sell for more than the whole wafer cost) until 2018.

          • brucethemoose
          • 4 years ago

          Forgot about Samsung’s SoCs. I guess I was thinking of high-performance 14nm, as opposed to the low-power variant the Galaxies use.

            • NoOne ButMe
            • 4 years ago

            Yes. I fully agree that Intel will have the lead for high-performance chips. They’re the only company that is even bothering to research stuff that could compete with IBM.

        • the
        • 4 years ago

        Intel was shipping 14 nm parts in late 2014, and roughly six months later, Samsung had 14 nm parts shipping as well. At the very least I’d say that Samsung can catch up to Intel, as they’re not far behind.

        TSMC is another story, but if they get their 16 nm FinFET process ready for chips later this year, then they’ll be a year behind Intel. While a significant gap, this too is narrower than the 18-month gap that existed previously.

        The catch to catching up is simply TSMC, GloFo and Samsung adhering to their original schedules for 10 nm. If Intel is having problems, it isn’t going to be easy for anyone.

    • Stochastic
    • 4 years ago

    Tick-Tock-Tock+?

    I think the gears in Intel’s clock need some cleaning.

      • UnfriendlyFire
      • 4 years ago

      Well, it’s been running better than GlobalFoundries, which repeatedly kneecapped AMD at the worst times.

      “You need Llano out soon? Whoops, production issues, we’re going to have to delay it by a few months!”

      “You need a silicon process for a high clockrate Bulldozer? NOPE.”

      “You need something smaller than 28nm? We got 20nm, but that’s only good for tablets and smartphones! Oh by the way, Kaveri desktop is going to suck.”

        • Meadows
        • 4 years ago

        People used to say “real men have fabs” but lately people have started to understand why AMD got a vasectomy.

      • ronch
      • 4 years ago

      Tick-Tock-BOOM!!!

    • chuckula
    • 4 years ago

    I wasn’t expecting 10nm in quantity until 2017 but this is still a delay of note since it’s late 2017 instead of early 2017. Plus there’s no guarantee it will be a full scale launch either.

      • Kougar
      • 4 years ago

      Plenty of time left for more delays, or a half-hearted launch like we saw with Broadwell.

      If anything, announcing a delay more than two years out means the odds are very good for another one.

    • UnfriendlyFire
    • 4 years ago

    Considering Intel’s difficulties with 14nm adoption and that AMD/Nividia were stuck on 28nm for FIVE YEARS, the delay in the node shrink wasn’t a surprise for me.

    At least it’s better than the product rollout mess with Broadwell and Skylake.

      • Ushio01
      • 4 years ago

      I believe it’s ‘are stuck on 28nm,’ since there are no GPUs on a smaller node yet.

      • ronch
      • 4 years ago

      Nividia? Who they?

        • Srsly_Bro
        • 4 years ago

        you’re addicted to down-thumbs

          • w76
          • 4 years ago

          In the words of Obi-Wan Kenobi: if you thumb me down, I will become more powerful than you can possibly imagine.

          • ronch
          • 4 years ago

          And you’re addicted to delighting in other people’s downthumbs.

      • NoOne ButMe
      • 4 years ago

      We are talking about manufacturing here. When was the last time AMD owned a fab? I’m very sure Nvidia never did.

        • blastdoor
        • 4 years ago

        Yet I think there’s a valid point in there, even if it’s not the one intended.

        AMD and NV could have moved to 20 nm; the process came out. They chose not to because the economics didn’t make sense for them. If they had been willing to pay for 20 nm, they would have gotten 20 nm. But they weren’t willing to pay.

        How can I claim to know that? Because when a customer (Apple) came along willing to pay for 20 nm, TSMC delivered 20 nm.

        If there’s a customer willing to pay the price to get to 10nm in 2017, and if that customer is willing to pay up front several years in advance, TSMC or Samsung will very likely do it.

          • TopHatKiller
          • 4 years ago

          Neither AMD nor Nv could move to TSMC 20nm. If you believe TSMC’s marketing, it was designed simply for low-performance customers at Apple [spit] and Qualcomm. Or if you don’t believe their marketing: they screwed it up, and made it impossible for chips that any sensible person would care about to be made on that process.

            • the
            • 4 years ago

            AMD did let it slip that they had to cancel several 20 nm designs, though they were not specific about whether those were GPUs or mobile SoCs. Regardless, AMD did have plans for that node which they had to scrap.

            • NoOne ButMe
            • 4 years ago

            They were SoCs. At least, AMD had announced two 20nm SoC designs, last I checked.

            $37 million in unrecoverable work seems about right for two SoCs that shared some stuff.

            • chuckula
            • 4 years ago

            It’s unclear if AMD ever had advanced plans for 20nm GPUs, since they (like Nvidia) may have had a pretty long lead time to figure out that TSMC at 20nm was not going to fly for GPUs, and had time to rework Fiji for 28nm.

            However, AMD was going to be launching an ARM server part on 20nm that failed to materialize in any commercial products, and that part (or its successor) has been delayed for a re-implementation using the 16nm FinFET process.

          • NoOne ButMe
          • 4 years ago

          20nm is pants for GPUs. At least, all commercially available 20nm. If either could have gone with IBM, they would have. The same would be true for Intel, but Intel charges a lot more than everyone else.

          20nm is unproven outside of small, low-power, low-clocking SoCs. And big “I spend $10,000 per wafer and sell all my good dies at $20,000 each” chips.

      • the
      • 4 years ago

      3.5 years actually. The first 28 nm GPUs were shipping in January of 2012. Still a long time, as TSMC has traditionally offered a half-node step for GPU makers between the major generations.

      There is a chance we’ll be seeing some 14 or 16 nm FinFET-based GPUs at the end of this year, though I’d be surprised if they are the ~600 mm^2 behemoths we’ve seen from both AMD and nVidia. Low-end/midrange chips will get a refresh to test the new process node for the really big designs due later in 2016.

        • NoOne ButMe
        • 4 years ago

        28nm was the half node! 32nm was so bad that it was canned.

    • homerdog
    • 4 years ago

    Intel having so much trouble with 10nm makes me wonder if TSMC will [i<]ever[/i<] get there.

      • NoOne ButMe
      • 4 years ago

      Look at who is spending more money on R&D; that’s who will get there. R&D over time since they started researching the node.

        • ronch
        • 4 years ago

        Depends on one’s efficiency in spending their money.

          • NoOne ButMe
          • 4 years ago

          As Intel’s spending has stayed steady and TSMC/Samsung spending has risen, the process gap has closed.

          It stands to reason that they’re all spending money at about the same efficiency.

          OTOH, they also have to support older nodes, and do modify them late into the lifespan at times. TSMC’s R&D spending rose, but how much was on improving 28nm, and how much on 20nm? I sure don’t know.

            • mesyn191
            • 4 years ago

            Uh, there is no reason to believe they’re spending their R&D money with the same effectiveness or efficiency. There have been huge differences between all the foundries for decades, and they’ve almost always lagged Intel in either node size and/or performance for high-end MPUs.

            • NoOne ButMe
            • 4 years ago

            Because they started behind, and getting to the node first has always made the most fiscal sense for Intel. For a long time only two companies ever competed with them for high performance: AMD and IBM. GloFo stopped that, and IBM had a better-performing 22nm process than Intel.

            Don’t be surprised if everyone ends up being a steady 6 or so months behind Intel in implementing a node or technology. Let Intel eat the extra costs of being first.

            • mesyn191
            • 4 years ago

            What? No. TSMC has had a ‘high power’ version of their nodes for a long time now, which is what AMD has been using for years.

            IBM’s process also wasn’t better than Intel’s. Yes, POWER8 had some high clocks, but the heat was ridiculous: 300W or so for the highest-clocked chips. This was even after they cherry-picked their chips from their dedicated fab, too. That sort of thing makes sense for mainframes, but if they tried to go against Intel with an x86 chip with that power usage/performance, they’d do worse than AMD has.

            I also have no idea where you’re getting the idea that everyone is suddenly going to be only 6 months behind Intel on process tech. The tool validation process alone takes upwards of a year right now, and no one is as far along as Intel when it comes to mass manufacturing on cutting-edge processes.

            • NoOne ButMe
            • 4 years ago

            Look at the metal layers, dude. TSMC ain’t got shiet on Intel. I can call anything I want super high performance, but that doesn’t make it so.

            • mesyn191
            • 4 years ago

            Actually, TSMC calls it their high-performance/power process, not me, and it is the highest-clocking node they’ve got, which AFAIK is still 28nm.

            You also ignored everything else I said.

            • NoOne ButMe
            • 4 years ago

            Yes, high performance from TSMC is not close to high performance from Intel. At least for designing CPUs. Metal layers.

            As for the rest of the post: IBM’s process technology is aimed at maximum performance, period. They deliver higher absolute performance than Intel. I’m sure Intel could match it, but their power draw would get into the same ballpark. Intel does the same thing IBM does: look at their needs, and design a process for those needs. IBM needed a mega-chip, and it looks like 250-300W was the limit of power they would go to. They designed around it.

            As for everyone else following Intel… 6 months may be off, sure. But the point that foundries may choose to stay some amount of time behind Intel for new nodes, I think, will hold in most cases. Let Intel eat the cost of being the first one there.

            • mesyn191
            • 4 years ago

            Yes, I know TSMC’s high-performance process isn’t close to Intel’s, so again, how are they going to suddenly get within touching distance of Intel any time soon?

            IBM’s process is aimed at maximum performance, but that doesn’t mean it’s as good as Intel’s comparable 22nm process. After all, TSMC’s high-power process is also aimed at maximum performance. Just because the goals are the same and some of the implementation details are broadly similar, that doesn’t mean the end results are the same or even close. IBM needed a mega-chip and had to resort to blowing out the power budget because they had no choice, not because they wanted to. Customers’ TCO is something they have to consider too, or they lose more market share to Intel.

            [i<]May[/i<] be off? Are you kidding me? Stop with the weasel wording; you know you’re wrong and overstepped yourself to the point of ridiculousness. The other foundries also aren’t trying to play it smart and let Intel eat the costs of developing the processes. They’re behind Intel because they have no choice, not because they want to be, and if they had their way they’d happily take the high margins that go along with the high costs of a cutting-edge process. Remember, they did that for years, and it’s only relatively recently that the gap between Intel and most everyone else has gotten so huge. In prior years the other foundries were typically a half node behind, and their half nodes had much less fudging of performance numbers too.

          • Milo Burke
          • 4 years ago

          AMD LOVES SPENDING MONEY

        • blastdoor
        • 4 years ago

        And capital equipment…

    • Ushio01
    • 4 years ago

    I’m going to say it again!

    The first working 14nm test chips and wafers (just SRAM) were shown off in 2009, five years before retail availability.

    No one has yet shown off a 10nm anything, and until they do, I will put $100 on at least four years before retail chips are available from that point.

      • Stochastic
      • 4 years ago

      We skipped right over to 7nm: [url<]https://techreport.com/news/28603/ibm-research-successfully-produces-7-nm-test-chips[/url<]

        • Ushio01
        • 4 years ago

        Using EUV, which is way more than 5 years away from being commercially viable.

        10nm is supposed to use current tech, or is 10nm now going to arrive after 2020?

          • NoOne ButMe
          • 4 years ago

          EUV is not under 5 years away; EUV is however long it takes to get X power output from it.

          That time changes from year to year, however, it is almost always some point in the future. I believe currently it needs to hit over 300W output to beat quad patterning in throughput. Although, given it could increase yields, I suppose foundries could probably get by with less throughput and charge more per wafer in order to use EUV.
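
          To see why the watts matter so much, here’s a toy throughput model; every constant is an assumption for illustration, not a vendor spec. The point is just that exposure time scales inversely with source power, so wafers/hour climbs with watts until per-wafer overhead dominates:

              # Toy EUV scanner throughput model (illustrative assumptions only).
              DOSE_MJ_PER_CM2 = 20.0     # assumed resist sensitivity
              WAFER_AREA_CM2 = 706.9     # 300mm wafer
              OPTICS_EFFICIENCY = 0.005  # assumed fraction of source power reaching the resist
              OVERHEAD_S = 10.0          # assumed wafer swap/alignment time per wafer

              def wafers_per_hour(source_watts):
                  energy_mj = DOSE_MJ_PER_CM2 * WAFER_AREA_CM2          # energy to expose one wafer
                  deliverable_mj_per_s = source_watts * OPTICS_EFFICIENCY * 1000.0
                  expose_s = energy_mj / deliverable_mj_per_s
                  return 3600.0 / (expose_s + OVERHEAD_S)

              for watts in (100, 300, 500):
                  print(f"{watts} W source -> {wafers_per_hour(watts):.0f} wafers/hour")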

            • Ushio01
            • 4 years ago

            “EUV is not under 5 years away”

            I believe you misread my post; I said EUV is over 5 years away from being commercially viable.

            • NoOne ButMe
            • 4 years ago

            It is as long as it takes to get the power output needed to compete with multiple patterning. That could take 3 years, 4 years, 5 years, 10 years, etc.

            • ronch
            • 4 years ago

            [quote<]however, it is almost always some point in the future. [/quote<] No, it depends where you are in time. For someone who's in 2030, it may very well be in the past.

          • TheFinalNode
          • 4 years ago

          According to ASML’s latest roadmap, EUV equipment will be ready for insertion at the 7nm node by the end of 2017. They hit >500 wafers per day last year, will hit >1000 this year and >1500 in 2016. It’s actually already viable for 10nm volume production but too late for insertion; the foundries have already decided on the specifications for the 10nm process and most likely went with quadruple-patterning.

          Source: [url<]http://www.asml.com/doclib/investor/asml_6_Investor_Day-EUV_FvHout1.pdf[/url<]

            • NoOne ButMe
            • 4 years ago

            Yes, and EUV has been going to replace 193nm every year since 2005 [er, actually 2000 is when Intel said it would be needed for 2004!]… “Next year it will be good enough!”

            For a bit of reference, 100W-output EUV was originally promised to Intel for 2007, I believe. 100W output instead arrived in late 2013/early 2014.

            A bit of a history of EUV and how we will all need it and it will be good soon!
            [url<]http://slideplayer.com/slide/3138513/[/url<]

      • jihadjoe
      • 4 years ago

      [quote<]No one has yet shown off a 10nm anything [/quote<] [url=http://www.tweaktown.com/news/43761/samsung-stuns-world-shows-worlds-first-10nm-finfet-tech/index.html<]Samsung[/url<] [url=http://vr-zone.com/articles/samsung-showcases-worlds-first-10nm-finfet-manufacturing-process-for-mobile-chips/87550.html<]did[/url<].

        • Ushio01
        • 4 years ago

        Any pictures of someone at Samsung holding up a 10nm chip or wafer? No? Then it’s a press release.

          • NTMBK
          • 4 years ago

          [url<]http://media.bestofmicro.com/3/8/499076/original/SSI-2.jpg[/url<] There's your wafer.

            • Ushio01
            • 4 years ago

            Thank you. I wonder how I missed this; usually the tech sites I visit have articles on new process tech.
