Report: TSMC CEO says 7-nm production is ramping up

Digitimes reports that at a recent technology symposium, TSMC CEO CC Wei told onlookers that commercial production of chips built using the company's 7-nm fabrication process has begun. The leader of the Taiwanese foundry says an improved 7-nm node with EUV will come before the end of 2018, and the company anticipates a move to a 5-nm fabrication node at the end of 2019 or in early 2020. The CEO said TSMC's newest plant, Fab 18 in the Taiwan Science Park, will also be the tip of the spear for 3-nm production at some undetermined point in the future, according to Digitimes.

The outlet says that Wei's comments were likely an attempt to counter rumors that the company was experiencing teething trouble with its 7-nm process tech. Wei said the beginning of 7-nm production would increase the company's total output of 12″ silicon wafers by 9%, to 12 million units in 2018. Digitimes quotes Wei as stating that TSMC would tape out at least 50 7-nm designs before the end of the year, highlighted by chips for AI, graphics, and cryptocurrency applications. Some of the 7-nm silicon is also intended for 5G wireless services and ASICs.

Digitimes believes Apple orders of A12 SoCs for its iOS-powered devices will be a “major driver” of 7-nm production, according to its sources. The outlet says TSMC also has orders from graphics outfits AMD and Nvidia, plus smartphone SoC designer Qualcomm and crypto player Bitmain.

Wei reportedly said the company could spend as much as NT$700 billion (about $24 billion USD) in the future transition from 7-nm to 5-nm technology, with 5-nm risk production scheduled to start in early 2020. Morris Chang, the retired chairman of TSMC, said the lion's share of that cash—NT$500 billion, or about $16.5 billion USD—would be spent at Fab 18, the manufacturer's newest facility for 12″ silicon wafer production.
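
For those checking the math, here's a quick sketch of the currency conversions; the exchange rate is our assumption (roughly NT$29.5 per US dollar in mid-2018), so the outputs land near, rather than exactly on, the rounded figures above:

NTD_PER_USD = 29.5   # assumed mid-2018 exchange rate, NT$ per USD

def ntd_to_usd_billions(ntd_billions):
    # Convert billions of New Taiwan dollars to billions of US dollars.
    return ntd_billions / NTD_PER_USD

print(f"NT$700B is about ${ntd_to_usd_billions(700):.1f}B USD")   # ~23.7
print(f"NT$500B is about ${ntd_to_usd_billions(500):.1f}B USD")   # ~16.9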

TSMC's aggressive schedule for node shrinks comes as Intel's seemingly insurmountable lead in silicon production technology appears to be eroding. The blue team has famously stumbled with its 10-nm node, a problem that could affect its timeline for subsequent introductions in process tech. GlobalFoundries hasn't given a date for the start of volume 7-nm manufacturing, but the company was confident enough in its progress to give our own Jeff Kampman a tour of its Fab 8 facility and its extensive EUV investment earlier this year.

Comments closed
    • DavidC1
    • 1 year ago

    Intel’s “lead” in process never mattered in practice. They only use it for their CPUs. The claims of Intel being 3+ years ahead, for example with FinFET on 22nm, did not matter because the only product where 22nm gave any decent advantage was the Atom “Silvermont.” 22nm Ivy Bridge clocked barely better than 32nm Sandy Bridge and overclocked a lot worse.

    14nm – Same. Density advantage only in Atom chips

      • ronch
      • 1 year ago

      It did matter because their fabs were churning out mostly CPUs, which is their core business (pun intended). And that matters when your main (and effectively ONLY) competitor in that space was stuck at 32nm for a while.

    • ronch
    • 1 year ago

    I always keep forgetting to bring this up here: just how 7nm is this ‘7nm’? I’m sure no long-time industry observer honestly believes that it’s better than Intel’s 10nm, and it’s probably not as good. GF’s 14nm is more like 18nm, so what is this 7nm more like in terms of density? Maybe 14nm by Intel’s standards?

      • NoOne ButMe
      • 1 year ago

      In density terms, the foundry 7nm nodes are in theory right around or denser than Intel’s 10nm node.
      TSMC, GloFo, and Samsung all have 15-20% denser high-density SRAM on their initial 7nm processes compared to Intel’s initial 10nm process.

      In performance characteristics, I believe it is a similar situation.

      TSMC is far in the lead and already entering HVM, followed by GloFo (4Q2018) and Samsung (4Q2018), with Intel at the back of the pack (4Q2019).

      Presumably yields are in the same order, although Intel could have the highest yields and yet not enough to make 10nm fiscally viable compared to 14nm, as Intel has historically had the best yields on any given process.

      Dates are pulled from the most recent timelines I could find, with the assumption that “2H2018” equals 4Q2018 and “2019” equals 4Q2019.

        • blastdoor
        • 1 year ago

        I’ve had the impression that Intel’s 10nm is a tad better than foundry 7nm in density and speed, but that the foundries have an edge in cost. See the conclusion here, for example:

        [url<]http://www.linleygroup.com/newsletters/newsletter_detail.php?num=5848[/url<]

        I think, though, that the bottom line is that TSMC is actually producing in volume and Intel isn't, so product that exists beats product that doesn't. Of course, Intel doesn't really care about TSMC making iPhone SoCs, because Intel doesn't compete in that space anyway. It only starts to get real next year if/when AMD starts selling 7nm Ryzen from GloFo.

    • ronch
    • 1 year ago

    Intel should just improve their current 14nm tech and rename it 10nm, and their 10nm should then be renamed 5nm.

    Intel: “You wanna play hardball? Ok let’s play hardball!!”

      • uni-mitation
      • 1 year ago

      I would say that comparing process-node names between these semiconductor companies is unfair and leaves the nuance out of the discussion. A better and more accurate measuring stick is transistor density. The [url=https://en.wikichip.org/wiki/technology_node<] meaning [/url<]of these process-node names was lost long ago, since they no longer represent gate lengths.

      Some nodes have different lithographic features, and the manufacturing equipment or process may not necessarily be the same, but transistor density still has a clear, objective definition that makes it easier to judge which process node is actually superior. I would love for discussions of these nodes to focus on transistor density, which is what really matters for cramming more stuff into smaller areas while maintaining the same level of performance.

      uni-mitation
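
      A minimal sketch of the metric described above, using commonly cited (approximate, unofficial) figures for the Kirin 970, a TSMC 10nm part:

# Transistor density (millions of transistors per mm^2, MTr/mm^2) as a
# node-agnostic yardstick. The Kirin 970 figures are commonly cited
# approximations, assumed here for illustration only.
transistors = 5.5e9      # ~5.5 billion transistors
die_area_mm2 = 96.72     # ~97 mm^2 die

density_mtr = transistors / die_area_mm2 / 1e6
print(f"~{density_mtr:.1f} MTr/mm^2")   # ~56.9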

        • ronch
        • 1 year ago

        Well, the fact that process tech monikers always state nm figures would suggest it *should* be about density. It’s always been that way historically anyway, but in an effort to look like they’re catching up to and overtaking Intel, GF and TSMC resorted to marketing gimmicks instead.

          • NoOne ButMe
          • 1 year ago

          Density as measured by SRAM, instead of any magical form of counting, shows differences similar to past nodes.

          This has much more to do with Samsung/TSMC generally being ahead of Intel in SRAM-measured density prior to the 14nm-class generation of processes.

          There is much discussion to be had on the “right” way to measure density. All I know is that, at least with high-density SRAM, you’re getting a similar composition of transistors from each company.

          All I know is Intel’s metric is terrible (and obviously made to favor Intel):
          [url<]https://images.anandtech.com/doci/11850/logicdensity.png[/url<]

          10nm TSMC has a 55 mTr/mm^2 product on their "48 mTr/mm^2" 10nm in the Kirin 970.

          SRAM sizes are from WikiChip, where as far as I can tell the listed sizes are for the first generation of a specific node. All percentages may be off by about 1% due to potential rounding errors. All numbers are, to my knowledge, for the first process launched under a given name. 32nm TSMC was canceled. 10nm Intel is not ready for HVM.

          Direct node naming:
          110nm: IBM leads Intel in high-density SRAM by about 16%.
          110nm: Intel leads TSMC in high-density SRAM by about 3%.
          90nm: TSMC, Samsung, IBM, and Intel SRAM within margin of error.
          65nm: Both Samsung and IBM lead Intel in high-performance SRAM by about 6%.
          65nm: IBM leads Intel in low-power SRAM by about 5%.
          65nm: Samsung and Intel low-power SRAM within margin of error.
          65nm: TSMC leads Intel in high-density SRAM by about 14%.
          65nm: TSMC leads Intel in low-power SRAM by about 30%.
          45nm: Samsung leads Intel in high-density SRAM by about 20%.
          45nm: Samsung leads Intel in low-power SRAM by about 6%.
          45nm: Intel leads IBM in high-density SRAM by about 7%.
          32nm: TSMC and Intel high-performance SRAM within margin of error.
          32nm: Intel leads Samsung in high-density SRAM by about 6%.
          32nm: Intel and IBM high-density SRAM within margin of error.
          22nm: Intel leads IBM in high-performance SRAM by about 11%.
          22nm: Intel leads IBM in high-density SRAM by about 40%.
          14nm: Intel leads Samsung in high-performance SRAM by about 14%.
          14nm: Intel leads Samsung in high-density SRAM by about 42%.
          10nm: Intel leads TSMC in high-density SRAM by about 35%.
          10nm: Intel leads Samsung in high-density SRAM by about 28%.
          10nm: Intel leads Samsung in high-performance SRAM by about 11%.

          Including half nodes versus Intel:
          40/45nm: TSMC 40nm leads Intel 45nm in high-density SRAM by about 45%.
          28/32nm: TSMC 28nm leads Intel 32nm in high-density SRAM by about 17%.
          28/32nm: TSMC 28nm leads Intel 32nm in low-voltage SRAM by about 10%.
          28/32nm: Intel 32nm leads Samsung/GloFo/IBM 28nm in low-voltage SRAM by about 15%.
          28/32nm: Samsung/GloFo/IBM 28nm leads Intel 32nm in high-density SRAM by about 23%.
          28/32nm: Samsung/GloFo/IBM 28nm leads Intel 32nm in high-performance SRAM by about 31%.
          20/22nm: Samsung 20nm leads Intel 22nm in high-performance SRAM by about 28%.
          20/22nm: Samsung 20nm leads Intel 22nm in high-density SRAM by about 11%.
          20/22nm: TSMC 20nm leads Intel 22nm in high-density SRAM by about 11%.
          14/16nm: Intel 14nm leads TSMC 16nm in high-density SRAM by about 48%.
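
          For reference, percentage leads like these fall straight out of published bitcell areas, since density scales as 1/area. A minimal sketch, using the widely reported (approximate) high-density cell sizes for TSMC 7nm and Intel 10nm:

def density_lead_pct(leader_cell_um2, other_cell_um2):
    # Density scales as 1/(bitcell area), so the leader's percent
    # density advantage is area_other / area_leader - 1.
    return (other_cell_um2 / leader_cell_um2 - 1) * 100

# Widely reported high-density SRAM bitcell areas (um^2); approximate.
tsmc_7nm_hd   = 0.0270
intel_10nm_hd = 0.0312

lead = density_lead_pct(tsmc_7nm_hd, intel_10nm_hd)
print(f"TSMC 7nm HD SRAM is ~{lead:.0f}% denser than Intel 10nm")
# ~16%, in line with the 15-20% figure cited earlier in the thread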

    • Srsly_Bro
    • 1 year ago

    12″ wafers? I’ve only ever seen them sized in millimeters.

      • tipoo
      • 1 year ago

      Millimeter wafers? Unless a joke flew over my head, I’ve only ever seen them in inches

      [url<]https://www.techpowerup.com/img/i4CVSUTi1KLpxAhn.jpg[/url<]

        • brennancyh
        • 1 year ago

        I think he meant that the diameter of the wafer is measured in millimetres

        • Srsly_Bro
        • 1 year ago

        It seems like forever that talk has been going on about transitioning to 450mm wafers from 300mm, which IIRC were 200mm before that.

        I guess the 300mm wafer is the 12″ wafer, but it's more like 11.8xxx inches. Accuracy be damned in this world of inches.

      • freebird
      • 1 year ago

      Naming convention:
      8″ wafers = 200mm wafers
      12″ wafers = 300mm wafers

      Here is a more complete list from WaferPro:
      [url<]http://www.waferpro.com/silicon-wafers/[/url<]

      SEMI standard diameters and tolerances:
      2″ (50.8mm): 50.8 ± 0.38mm
      3″ (76.2mm): 76.2 ± 0.63mm
      4″ (100mm): 100 ± 0.5mm
      5″ (125mm): 125 ± 0.5mm
      6″ (150mm): 150 ± 0.2mm
      8″ (200mm): 200 ± 0.2mm
      12″ (300mm): 300 ± 0.2mm
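
      The inch-to-millimeter relationship is easy to check; a quick sketch (the 450mm figure is the long-discussed next step, not a shipping SEMI standard):

MM_PER_INCH = 25.4

# The inch names are nominal; the SEMI standards are metric.
for nominal_in, actual_mm in [(4, 100), (6, 150), (8, 200), (12, 300)]:
    print(f'{nominal_in}" wafer = {actual_mm}mm = '
          f'{actual_mm / MM_PER_INCH:.2f} actual inches')

# A "12-inch" wafer is really ~11.81", and a 450mm wafer would be ~17.72",
# with (450/300)**2 = 2.25x the area of a 300mm wafer.
print(f"450mm vs 300mm area ratio: {(450 / 300) ** 2:.2f}x")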

        • Srsly_Bro
        • 1 year ago

        Thanks, bro. I saw this after my latest comment.

        • cynan
        • 1 year ago

        And that’s precisely why that trusty old ruler you used in grade school has “12” on one side and “30” on the other (inches and cm).

      • ronch
      • 1 year ago

      Unit Converter is now available for Android and iOS.

      [b<]DOWNLOAD NOW!!! [/b<]

    • blastdoor
    • 1 year ago

    Given the sputtering of Moore’s Law and explosion of the cost curve, I predict that within 10 years, computers will be twice as powerful, ten thousand times larger, and so expensive that only the 5 richest kings of Europe will own them.

      • Srsly_Bro
      • 1 year ago

      I would like to see people stop using “Moore’s Law” in any context other than pointing out that it’s not an actual law. I heard it used incorrectly, as if it were a law, several times on a technology-based podcast and cringed each time.

      #fakelaws

        • freebird
        • 1 year ago

        “Moore’s Law” (and how people interpret it ) should follow Mr. Moore into retirement…

      • Wirko
      • 1 year ago

      What’s that called? Blastdoor’s Law? Moore’s Outlaw? Laess’s Law?

        • blastdoor
        • 1 year ago

        [url<]https://m.youtube.com/watch?v=ykxMqtuM6Ko[/url<]

          • chuckula
          • 1 year ago

          I got that reference the first time BTW.

            • blastdoor
            • 1 year ago

            Perhaps it’s too old for the millennials.

      • jihadjoe
      • 1 year ago

      We still have 3D-stacked chips to help us along, just as soon as they figure out the heat problem. It’s already being used for memory.

        • blastdoor
        • 1 year ago

        [quote<]just as soon as they figure out the heat problem[/quote<]

        This is a nice example of the saying "easier said than done."

    • chuckula
    • 1 year ago

    [quote<]Digitimes quotes Wei as stating that TSMC would tape out at least 50 7-nm designs before the end of the year, highlighted by chips for AI, graphics, and cryptocurrency applications. Some of the 7-nm silicon is also intended for 5G wireless services and ASICs.[/quote<]

    Of course this isn't exhaustive, but that statement seems to imply that AMD is returning to TSMC for GPUs alongside Nvidia. It doesn't mention a large CPU design like Zen2, which might end up at GloFo.

      • raghu78
      • 1 year ago

      AMD is using TSMC for 7nm Vega for HPC/AI/DL, Zen 2-based 7nm Rome for x86 servers, and 7nm Navi for gaming GPUs, and it is using GF for 7nm Ryzen CPUs and APUs. AMD already has silicon back in the labs for all the TSMC products, while GF is going to tape out its first 7nm AMD chip only in H2 2018. So we can expect 7nm Vega to launch in Q4 2018, followed by Zen 2-based 7nm Rome and 7nm Navi in H1 2019. 7nm Ryzen CPUs will launch in H2 2019, while 7nm Ryzen APUs will launch in H1 2020.

        • DancinJack
        • 1 year ago

        You can see the future?

          • Waco
          • 1 year ago

          Or he/she can read roadmaps. 😛

      • NoOne ButMe
      • 1 year ago

      Zen2 (probably EPYC parts, as raghu78 states) is taped out; AMD has only announced 7nm tape-outs at TSMC to date.

        • freebird
        • 1 year ago

        Of course Zen 2 is taped out… Lisa Su was holding a 7nm Epyc (Zen2) at Computex and stated they are in the labs right now, sampling in 2H2018 (“real soon,” she said).

        [url<]https://www.youtube.com/watch?v=qLpzrWh-bok[/url<]

          • NoOne ButMe
          • 1 year ago

          Yes, but some rumors suggest Zen2 will use different dies for EPYC and the non-APU consumer CPUs.

          So the Zen2 that is taped out may “only” be for EPYC gen2 (and Threadripper gen3?).

          The same rumor also says TSMC is for EPYC and GloFo is for Ryzen CPUs and APUs.

      • freebird
      • 1 year ago

      I’m not sure what you are talking about when you say “It doesn’t mention a large CPU design like Zen2.” Currently, the Vega GPU is about twice the size of Zeppelin (Zen/Zen+), so don’t you mean a SMALL CPU design like Zen2? Besides, we don’t even know what changes will be incorporated into the Zen2 die itself. Speculation on that is all over the place.

      According to a Google Chrome translation of the ChinaTimes

      June 21, 2018 04:09 Business Times Tu Zhihao / Taipei Report
      “The foundry leader Taiwan Semiconductor Manufacturing Co., Ltd. has already begun mass production of 7nm. In the fourth quarter, it expects to win orders from AMD’s central processor and Qualcomm’s smart phone chip, and it will launch a volume shot next year.”

      [url<]http://www.chinatimes.com/newspapers/20180621001096-260202[/url<]

      I don't read Chinese, so take this with some Google-translated salt, since I'm assuming the translator can differentiate between a CPU and a GPU when it calls a CPU a "central processor."

      EE Times reported: "AMD said (and GF confirmed) it will be splitting its 7nm production across both GF and TSMC. Which chips will be coming from TSMC versus GF is still unclear. My understanding is AMD plans development of 7nm GPUs and CPUs at both foundries, selecting the best one for each option as late as possible. Given AMD has promised to ship 7nm Zen 2 CPUs and 7nm Vega and Navi GPUs by 2020, the window is closing on that selection process."

      [url<]https://www.eetimes.com/author.asp?section_id=36&doc_id=1332945[/url<]

      There is speculation in several AMD reddit threads that 7nm Epyc2 will be manufactured at TSMC and 7nm Ryzen 3000/Zen2 will be at GF. Lisa Su stated over a year ago at a JP Morgan Technology Conference: "Our goal is to be aggressive with 7-nanometer technology. We will be doing tape-outs later this year. And as we get closer to production, we'll give more insights there."

      [url<]https://seekingalpha.com/article/4075407-advanced-micro-devices-amd-presents-jpmorgan-technology-media-and-telecom-conference[/url<]

      So my belief is that since AMD was being aggressive with 7nm, they put their eggs in both baskets, and whoever got to the 7nm finish line first (TSMC) would get 7nm Epyc2. Their wafer agreement with GF will probably mean we'll have to wait on 7nm Ryzen 3000 until 2H2019, with AMD making 12nm Ryzen 2000s until GF's 7nm production ramps up completely. As always, we'll see as Q1/Q2 2019 come and go.

      Edit: Hmmm, I guess there are some people out there that don't like reading all my quotes from actual ARTICLES (or maybe just the conclusions that can be drawn from them), with 3 downvotes overnight…

    • uni-mitation
    • 1 year ago

    Wouldn’t it make sense for Intel to spin off its fab operation and focus on catching up to AMD? I mean, the enterprise market is a bigger deal than the desktop market due to the margins. All of that money could simply go toward designing the best architecture to compete with AMD’s Ryzen instead of keeping a fab that’s having issues with its node.

    uni-mitation

      • Zizy
      • 1 year ago

      The only customer of Intel fabs is Intel, despite attempts to open up a bit. Plus Intel has enough profits to afford working on the CPU architecture and switching to the new nodes at the same time.

      Though I guess GloFo would be glad to buy them, so they could spin them off if they feel fabs are a burden.

      • NoOne ButMe
      • 1 year ago

      Couldn’t effectively happen now. If Intel manages to fix the 10nm issues magically before 2019 it might be possible, although still expensive.

      Right now Intel’s fabs don’t need to have a margin… if you spin them off, Intel’s costs go up 50% or so per wafer, if not more.

      It’s too late for Intel to spin off. When 14nm initially had troubles, they might have had a chance. Now it is too late.

      A spin-off either effectively keeps all the same issues, or involves paying over 10, if not over 20, billion dollars to the buyer.

      10nm should be shot. 7nm should be restarted if it contains the same things Intel failed to get working on 10nm. Intel should start doing more, smaller jumps.

      • blastdoor
      • 1 year ago

      Alternatively, they could double down on verticality: build a cloud service to compete with AWS (et al.) using their own chips, Optane, and SSDs, and design special chips just for themselves that they don’t sell to other customers.

      • Redocbew
      • 1 year ago

      You make it sound like the fabs at Intel work for the chip architects at Intel. Everyone I’ve ever heard talk about Intel as a company says it’s the opposite of that: Intel is a manufacturing company that just happens to make microprocessors.

        • uni-mitation
        • 1 year ago

        Well, if they are a manufacturing company as you say, then why not take AMD, Apple, Qualcomm, and others as customers? My point is about making the most money for the shareholders of Intel.

        By taking new customers, Intel essentially lowers its risk by not relying too heavily on its own internal architecture designs. If you then tell me that AMD’s chips could be fabricated on Intel’s manufacturing with no “advantage” to Intel’s own designs, I would say they really are just a manufacturing company, as you say. Intel would be able to make money off fabricating for its competitors by charging a premium for its superb fabrication reputation, as some of Intel’s most ardent supporters will tell you.

        I say it doesn’t make sense to me, and if I were a board member of Intel, I would have steered Intel toward taking on and opening fabrication to all new customers willing to pay a premium for superb fabrication a long time ago. Money is money, and that is the [s<]only[/s<] most important legal requirement that a corporation owes to its shareholders.

        uni-mitation

        Edit - See strike-through

          • Redocbew
          • 1 year ago

          Intel has a foundry business in the same vein as AMD does with their semi-custom business. I don’t know who their clients are, but I’ll leave the business of maximizing profit for shareholders to the people who do in fact have access to the business.

            • freebird
            • 1 year ago

            But they were still developing design kits and tools for customers… there is much more to getting a chip produced than just manufacturing silicon. Pure-play foundries have lots of tools and IP design packs to help developers create their chips. Intel had its own internal tools, and filling out the software-development portfolio for outside customers took a lot of time, from 2010 to 2016.
            [url<]https://www.forbes.com/sites/patrickmoorhead/2016/08/16/intel-foundry-rounds-out-ip-lineup-with-arm-adds-new-customers-at-idf-2016/2/[/url<]

            If they were cost-competitive with other foundries, they would probably have more customers, one would think.

            • Redocbew
            • 1 year ago

            Yeah, the comparison isn’t perfect. What probably comes to mind first when talking about AMD’s semi-custom stuff are consoles, and as we all know, those use a version of a product designed by AMD. It’s not really the same thing, and it sidesteps the whole issue of what happens when you start making chips for a competitor.

            It just seems totally bonkers to me to suggest that Intel should do something so drastic for basically no reason.

            • NoOne ButMe
            • 1 year ago

            Yeah, AMD has semi-custom partners it makes money from.
            Intel seems to have no external fab customers.

            • uni-mitation
            • 1 year ago

            You have failed to address my points. I was expecting at least your opinion with some arguments; instead I am confronted with this non-answer.

            uni-mitation

            • Redocbew
            • 1 year ago

            My answer is: you cra brah. Just cra. Seriously bro.

            • uni-mitation
            • 1 year ago

            Enjoy the weekend, thank you for the exchange.

            uni-mitation

          • NoOne ButMe
          • 1 year ago

          Intel’s fab woes have not been architecturally bound. The problem has purely been that nothing on the process can yield well enough to reap the economic benefit for Intel. LG’s 10nm SoCs not showing up (if they’re even happening at all) is indicative of this. The process, not the specific architectural designs, has been the problem.

          Regarding opening up:
          Intel tried before.
          Intel costs more and does not bring benefit worth the added cost.
          They have had a couple of partners, and all of them have faded away or been bought.

          The biggest advantage Intel enjoys from owning its fabs would be unlikely to translate to external companies: being able to tailor the process to the specific die/CPU.

            • uni-mitation
            • 1 year ago

            Time will tell.

            uni-mitation

            • freebird
            • 1 year ago

            Agreed, at least until Quantum computing. 😉

          • Zizy
          • 1 year ago

          They tried; it didn’t work, much like how GloFo only gets other customers at 28nm and nobody but IBM and AMD on its leading edge. The only noteworthy Intel customer was Altera, and even they were rumored to be going back to TSMC for their next-gen chips before Intel bought them.

          TSMC is the first option in the majority of cases, and Samsung is generally the alternative (for the leading edge, that is; there is more competition at 28nm). This isn’t changing any time soon, and GloFo is actually more likely to grab some customers than Intel is.

          As for why Intel still feeds its fabs despite the costs: there is no other option. The only other high-performance process is IBM’s (now GloFo’s). Furthermore, nobody else has the volume Intel needs (TSMC has to serve others too, and everyone else is smaller).
          The only other option would be to sell the fab operation to GloFo with agreements similar to what IBM put in place. But that might even mean selling the fabs for negative several billions (like IBM’s sale), and it wouldn’t look too good given all of Intel’s “we are 3 years ahead” bragging.

      • PrincipalSkinner
      • 1 year ago

      You mean Intel should sell their state of the art production facilities in which they have invested countless billions and become completely dependent on companies like TSMC, Samsung and GloFo?
      No.

        • Srsly_Bro
        • 1 year ago

        The argument isn’t that simple. Costs incurred shouldn’t be your deciding element.

        Your bro,

        Srsly_bro

        • uni-mitation
        • 1 year ago

        1- The growing market is and has been mobile. Intel still has the opportunity to throw its hat in the ring to compete for mobile chip orders. Here, companies like Apple, Qualcomm, [url=https://www.theregister.co.uk/2016/08/16/intel_foundry_arm/<] LG [/url<], and others are looking for state-of-the-art fabs to produce their chips. If you think the mobile market is saturated, that may be so for the U.S., but demand is growing fast in countries like China, India, and other developing countries.

        2- The above assumes that the risk of doing so is less than continuing on this path of competing for a shrinking demand for desktop chips. The money to be made is in the enterprise and mobile sectors.

        3- The recently announced move of Intel targeting the high-end GPU market is a sign of Intel at least acknowledging what the best course to chart is.

        4- The costs and the building of these state-of-the-art facilities can only be supported if Intel maintains its drive to generate the most profit. No high profit margins, and the gravy train stops.

        5- Moves like Intel cannibalizing its own enterprise line to sort of compete at the desktop high end are detrimental to its bottom line. Intel has only a limited amount of production capacity. Intel should do everything possible to keep as much market share as it can in the enterprise and give secondary priority to the desktop.

        6- The almost religious aversion to Intel diversifying its fab portfolio is what I don't understand. Let's take advantage of Intel's technological prowess and increase shareholder value in the future. Why be so myopic in thinking that desktop deserves top dollar?

        uni-mitation

      • ronch
      • 1 year ago

      Asset Light.

      Yeah sounds familiar.

    • 223 Fan
    • 1 year ago

    Intel has already started to respond to the challenges it faces by removing Krzanich and hiring Koduri and Keller. I expect them to get their act together on 10nm and smaller feature sizes and stay ahead of TSMC and Samsung, just not with an 18-month process advantage over their closest rivals like in the past. As Kretschmer has said, even when AMD gets their act together once in a while, they don’t make significant inroads into Intel’s market share. However, TSMC and Samsung are not plucky little upstarts run by idiots like AMD was, and they are well capitalized, so Intel will probably be less able to keep the process halo of the past.

    The question is… what will keeping the process crown get them? In my opinion – just keeping the server room and HEDT if they are not in graphics / deep learning and personal devices.

      • uni-mitation
      • 1 year ago

      [quote<]As Kretschmer has said, even when AMD gets their act together once in a while, they don't make significant inroads into Intel's market share.[/quote<]

      Maybe it happens, maybe it doesn't. Unless you or somebody has a magic eight ball, I would rather take the default assumption that past performance is not necessarily a surefire predictor of future performance. I am very happy, though, about the renewed competition.

      uni-mitation

        • 223 Fan
        • 1 year ago

        [quote<]Maybe it happens, maybe it doesn't. Unless you or somebody has a magic eight ball, I would rather take the default assumption that past performance is not necessarily a surefire predictor of future performance.[/quote<]

        Regression to the mean. The mean in Intel's case is process leadership. The teething pains of 14nm and 10nm are the outliers. We will find out whether they are the new normal for Intel.

        [quote<]I am very happy, though, about the renewed competition.[/quote<]

        I am as well.

          • tacitust
          • 1 year ago

          Agreed, and as the physics gets harder, smaller nodes are increasingly difficult to deliver on, so stumbles are more likely to happen, and not just from Intel.

          • uni-mitation
          • 1 year ago

          Just saying that 14nm and 10nm are the outliers doesn’t make them so. Regression to the mean is the simple mathematical observation that, given an extreme event, the next event is more likely than not to be less extreme. You seem to be adding inferences that aren’t logically supported. I would love to see some statistical analysis showing that the lengths of these nodes were extreme, as in beyond 3 standard deviations in a normal distribution. Not that the sample size would be sufficient to use a normal distribution; I am just using it as an example of what would be considered an “extreme event.”

          To drive the point home: in very small samples, “extreme events” are more common, yet that is an entirely expected feature, and it doesn’t necessarily imply regression to the mean, because regression to the mean requires a sufficient sample size N from a given distribution before any statistical inference can be made. If we are unable to get the required sample size, then regression to the mean is moot, for there is no sample big enough to judge a mean!

          Since the default assumption is that any new event is not an outlier unless we actually do some math, your conclusion that Intel will rebound lacks support.

          uni-mitation
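
          A toy simulation of that small-sample point, under a purely hypothetical node cadence of 2 ± 0.5 years: even with nothing unusual going on, short track records throw up “two-sigma” delays by chance alone surprisingly often.

import random

random.seed(42)
MEAN, SD = 2.0, 0.5    # hypothetical node cadence in years (assumption)
TRIALS = 10_000

def has_two_sigma_delay(n):
    # Does a sample of n node transitions include a 2-sigma "extreme" delay?
    return any(random.gauss(MEAN, SD) > MEAN + 2 * SD for _ in range(n))

for n in (3, 5, 10):
    p = sum(has_two_sigma_delay(n) for _ in range(TRIALS)) / TRIALS
    print(f"{n} nodes on record: ~{p:.0%} chance of at least one 2-sigma delay")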

            • Srsly_Bro
            • 1 year ago

            Great post. I always enjoyed statistics.

            • uni-mitation
            • 1 year ago

            I don’t know about great, but it was sort of approaching college level, although it has been more than five years since I took the course. What really blew my mind was statistical inference and how much utilitarian predictive value it has. Everything in modern life depends upon these theorems that buttress our current level of understanding of reality. We truly stand on the shoulders of giants.

            Science, and its most important tool, math, has been instrumental in raising the standard of living and decreasing mortality rates, and in an engorgement of an economy (supported as of now only by fossil fuels) that essentially guarantees the average “poor” current-day American access to knowledge, food, and shelter rivaling the standard of living of any ancient Roman Emperor! The scribes of Alexandria would be envious of the number of books, encyclopedias, literary works, etc. available today. Shamans would be out of a job given the superb work of weather forecasters, underpinned by better predictive meteorological models, because we have ever-increasing computing power to crunch those numbers. Not to mention medicine and all of the awesome diagnostic tools available today.

            Science, math, and oil.

            uni-mitation

        • freebird
        • 1 year ago

        Although, past performance has shown us that when AMD has had a competitive or superior product (Opteron/Athlon 64), they gained ~25% market share in the server space.
        [url<]https://www.google.com/search?q=AMD+server+market+share&client=firefox-b-1&tbm=isch&source=iu&ictx=1&fir=M3EU3wZx86rTdM%253A%252COoDXA9-hYINToM%252C_&usg=__igTN0Tyfm6dJlNj5OL0Z3cPMHms%3D&sa=X&ved=0ahUKEwir7LzHgenbAhUB3YMKHX2gAr0Q9QEIKTAA#imgrc=M3EU3wZx86rTdM:[/url<]

        Back then they were probably manufacturing-constrained, in addition to being 2 years behind on manufacturing process. Intel had to resort to anti-competitive practices until they could develop Core 2.

        [url<]https://www.nytimes.com/2009/11/13/technology/companies/13chip.html[/url<]

        That isn't going to work this time.

      • NoOne ButMe
      • 1 year ago

      Intel has their act together trying to fix 10nm.

      They just have no idea how to solve the problem. It’s not like TSMC’s 40nm, where most companies could design around the node’s terrible issues.

      Intel has already spent nearly two years trying to fix it, to no avail. Hopefully they did not slow down their 7nm R&D to try to fix 10nm.

      7nm needs to be pulled up as much as possible by Intel.

        • 223 Fan
        • 1 year ago

        What is strange is that both TSMC’s 7nm process and Intel’s 10nm are supposed to be based on 193nm multi-patterning. Yet TSMC yields and Intel does not.

          • NoOne ButMe
          • 1 year ago

          Despite Intel’s mentions of quad patterning, that is not where their troubles lie.

          It causes lower yield compared to not using it, but Intel isn’t suffering from it any worse than TSMC.

          That means it is unlikely to be the real problem, unless Intel is unable to solve a problem that TSMC, Samsung, and GlobalFoundries can…

      • ronch
      • 1 year ago

      AMD was never run by a bunch of upstarts or rookies. Even upon its incorporation it was formed by a group of veterans from Fairchild. Its second CEO, Hector Ruiz, was a Motorola veteran. Dirk Meyer was an ex-DEC Alpha lead architect who was (and still is?) highly regarded in the industry. Rory Read, Mark Papermaster, and Lisa Su are all highly experienced executives. And of course, Jim Keller was a lead K8 architect back in 2003 and also led the development of Zen.

      AMD may have lost its focus in recent years but then when you have about 1/10 the resources of your competitor in the CPU industry and you’re also up against another very strong competitor in the GPU space, it’s not always easy to win. It’s like David up against two Goliaths. No other company in the world could do what AMD is doing given the resources they have to play with. No other company has managed to elude bankruptcy like they have for decades. Other companies fold after only a few quarters in the red. Think about that for a minute.

        • 223 Fan
        • 1 year ago

        When I referred to AMD’s management as being idiots, that did not refer to their design and manufacturing acumen. I was talking about their inability to do effective marketing in spite of having an obviously superior product in the Opteron-vs-P4 days, and about some of their subsequent decisions. Yes, Intel was convicted of antitrust violations, so that certainly played a role in AMD being unable to capitalize on their better products. But AMD’s unforced error of acquiring ATi, the debacle of the Bulldozer series of cores, and the poor GPU roadmap and execution are certainly not Intel’s fault.

        As far as AMD managing to stay afloat all these years, I am also glad that they are still here. I have used their products at home for a long time, and wish to continue to do so. If I was upgrading my i7-3770 and RX480 right now it would be to Ryzen and Vega.

          • ronch
          • 1 year ago

          1. Why do you think acquiring ATI was a mistake?

          2. IIRC Bulldozer started out as a very ambitious project but somewhere along the way they had to step back and scrap some goals.

            3. Why is their GPU roadmap crappy? Admittedly GCN is getting long in the tooth, but when it came out in 2012 it was pretty great and introduced some very interesting innovations. However, coincidentally, AMD also had to reduce R&D spending starting in 2012 because they were losing money quarter after quarter. This was smack in the middle of the Bulldozer days and those seemingly endless one-time payments to GF. At the same time, they knew they had to pour whatever little resources they had left into Zen R&D. They knew their only hope was to become competitive in the CPU market first and then refocus on the GPU market. Given that GCN was new back in 2012, they had the luxury of a brand-new GPU architecture, which allowed them to continue selling graphics even as GCN started to fall behind. Navi is supposed to be a new GPU architecture, so if I’m right, let’s see how AMD does in GPUs.

            • 223 Fan
            • 1 year ago

            1. They paid 5.4 billion and saddled themselves with long term debt which impacted R&D. At the time I thought the acquisition was a good thing. Now it is still a good thing but the consensus seems to be that they overpaid. On the other hand it could be argued that during the dark Bulldozer days that the GPU + APU business kept them afloat and perhaps opened the door to consoles. So the mistake was on the financial side.

            2. Maybe because of long-term debt taking away from R&D. Some of that long-term debt was also related to paying off GloFo to avoid using them. I think AMD did the right thing for them in getting rid of their fabs, because they could not hang with Intel; however, in order to do so they had to offer sweeteners that were painful in the short to mid term.

            3. For competing at the high end AMD relied on HBM being more plentiful than it is, which constrained them from filling the channel when 16nm first came out. AMD also made a pitch for smaller, cooperative GPUs to assemble into performance tiers. It turned out that giant monolithic GPUs were still viable and had advantages. Is there anywhere on their roadmap where they plan to address the performance per watt disadvantage, not to mention the compute disadvantage, vs nVidia? DX12 and HBM do make a difference, unfortunately the adoption of both is leisurely enough to allow nVidia to incorporate them at will.

            It’s not all gloom and misfires. AMD is on track to get rid of their debt. They do have a compelling product in Zen. Miners have been buying their GPUs in droves. Su is righting the ship. I believe that prior management had a large part in the ship listing, hence my categorization of them as idiots. Perhaps that is too strident.

            These are all my opinions and are probably not worth the bits they are conveyed in. I don’t directly own stock in any tech companies, if that matters.

            • ronch
            • 1 year ago

            1. In your earlier post you said, “unforced error of acquiring ATI…” So I asked why acquiring ATI was a mistake, to which you said they overpaid for ATI. There’s no question that they overpaid for ATI, but I think that’s not the same thing as the acquisition itself being a mistake.

            2. I think getting rid of the fabs was not such a bad idea, although it obviously leaves them a bit vulnerable to issues their fab partners encounter; then again, running their own fabs doesn’t guarantee smooth sailing either. Just look at Intel now, and Intel has far more cash to pour into R&D. And even AMD ran into process-tech problems many times prior to spinning off its fabs. They should’ve inked a better deal with ATIC, though. I think Hector was too eager to get rid of the fabs and got the shorter end of the stick, while also being too eager to buy ATI and overpaying for them. Killer combo.

            3. I do agree that because of the tough times during 2012-2017 they really had to cut back on many things, and one thing that got little love was GPU development. They probably went for the low-hanging fruit, which was increased memory bandwidth, while not really chasing energy efficiency that much, relying instead on process shrinks for more efficiency (remember when 20nm was cancelled and AMD got stung?).

            • 223 Fan
            • 1 year ago

            1. Nobody made AMD pay too much for ATi, which is pretty much the definition of an unforced error. I can see how my language was poorly worded in the quote. While I do think the merger helped, I also think it would be difficult to portray it as an unalloyed success, even at the right price.

            2. AMD gambled on SOI and it turned out to be a dead end. In the end, going fabless almost certainly increased their chances of survival. And yes, even Intel is having fab problems, even with their resources. Time will tell if this is a new reality for them or just a bump in the road.

            3. With Intel coming online with GPUs in the next few years, the mid/bottom market that AMD has been competitive in will become more difficult. I think they need to be seen as market leaders in GPUs to stay relevant under those circumstances. Not necessarily THE market leader, though that would be helpful, but not second-class compared to nVidia.

            • Zizy
            • 1 year ago

            1.) The good thing was that AMD had to sell the fabs because of it.
            The bad thing was that they had to gut CPU and GPU development too, for lack of money.
            It is generally agreed that AMD overpaid for ATI, though to be honest almost everyone overpaid for anything they bought at that time 🙂

            2.) True. BD was probably the most complex CPU of the time, and Zen is actually simpler. It just didn’t work.

            3.) Uhm, can you call it anything but disastrous? The gaming side is… bad, they cannot tackle the AI market, and even their compute advantage is gone. The only glimmer of hope is in 2020.

            • ronch
            • 1 year ago

            1. Agree.

            2. I think BD is more complex due to the fact that it throws 2 separate threads to 2 separate ‘cores’, or schedulers. But I am guessing that the really tough part of a modern day CPU would be the instruction scheduler, and that’s where Zen is more complex than Bulldozer because Zen has deeper structures that feed into a much wider array of execution units. Bulldozer is pretty narrow if you consider just one integer cluster.

            3. Yeah no doubt their GPU situation today is pretty bad, but this goes back to No. 1. Bulldozer funding got slashed because of the ATI purchase in 2006 and then spinning off the fabs and all the one-time payments to GF. GPU funding got slashed because they were still aching from the ATI buyout, GF payouts, plus Zen funding. Tough situation. It’s just a good thing they got Zen right so now they gotta fix the GPU side.

            • 223 Fan
            • 1 year ago

            I have a silly theoretical question that I can’t answer. Is there something intrinsic to x86 instructions that make them more amenable to out of order execution, and other single threaded optimizations? Couldn’t Intel allocate the same back end resources to a hypothetical ARM processor if they felt like it that would perform more or less the same as Coffee Lake at about the same power envelope? Or is there something about the ARM instruction set that prevents it from achieving that kind of performance? Perhaps the RISCy macro ops x86 is implemented with are better for scheduling and optimization?

            I am just talking theoretically here. Would such a chip, if possible, be only something Intel could pull off because of patents that they hold?

            • Zizy
            • 1 year ago

            I am unaware of any specific OoOE advantage, as high-performance ARM cores have that too; I guess it is just that Intel has way more experience here.
            But due to all the differences between the ISAs, it wouldn’t be just slapping ARM decoding in place of the x86 decoding. Well, you could do that, but the chip would be slow and quite inefficient for ARM instructions, and there wouldn’t be any gain. Better to have all instructions translated in software and do a little bit of optimization magic where possible… like what MS does now (only x86->ARM) or what Intel did during their phone(y) ambitions.

            Intel (or AMD) could theoretically make a beast that is good for both x86 and ARM by taking all the strong points of both ISAs and merging them. However, such a chip would be pretty insane, and I don’t know why you would do that. Better to make 2 chips that share the same socket, as AMD planned (before Lisa pulled the plug in favor of 100% Zen for cost reasons).

            Furthermore, why would you want a chip that runs both x86 and ARM? The only semi-plausible scenario would be a phone that becomes the brain and touchpad of a laptop, running Android+ARM as a phone and Windows+x86 as a laptop. But it would be far simpler to just have Windows+ARM plus emulation.

            • 223 Fan
            • 1 year ago

            Sorry for my poor English, I was not asking about an x86/ARM chimera. I was asking if there is anything about the ARM ISA that would prevent someone like Intel from making an ARM chip with the same execution units and power envelope as Coffee Lake that would also perform the same.
