Report: Intel could move some chipset production to TSMC

Intel's delayed transition to the 10-nm node means that every time the company wants to add a leading-edge product to its portfolio, that chip has to be made on its 14-nm process, and the company is already making a ton of 14-nm silicon. If a new report from DigiTimes is correct, the company could be reaching a breaking point on fab capacity.

The site says Intel is bumping some of its low-end desktop chipset production out of its own fabs and into TSMC's to free up fab time for server CPUs and chipsets. Intel reportedly plans to move some production of the H310 chipset and “several other 300 series desktop processors” to TSMC. DigiTimes notes that Intel already sources its SoFIA smartphone chips, some FPGA products, and modems from the Taiwanese foundry.

The H370 platform controller hub. Similar silicon underpins the H310.

Why is Intel manufacturing low-end chipsets on a leading-edge node to begin with? In conversations with other members of the media, I've come to understand that the move is a result of the California Energy Commission's 2019 regulations. Indeed, Intel itself was involved in the state's rule-making effort and the timeline for the imposition of those regulations, and the company trumpeted its support for the rules upon their ratification. CEC 2019 imposes strict limitations on the power usage of computers and monitors in idle, sleep, and off modes.

The state believes its rules will save California utility customers 2,332 gigawatt-hours of energy per year, an amount equal to the electricity used by all homes in San Francisco or San Luis Obispo counties in 2015. Those rules could also save California customers $3.5 billion in total from 2019 to 2030. Intel noted that the majority of those savings would come from PCs compliant with the new standards, and making chipsets on leading-edge processes could help cut down their energy usage in low-power states.
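For a sense of how those two figures relate, here's a quick back-of-envelope check. The 12-year window comes from the rule's stated 2019-2030 span; the average retail electricity rate is my own assumption, not a CEC number:

```python
# Back-of-envelope check on the CEC's savings estimates.
# The average rate below is an assumption, not a CEC figure.

ANNUAL_SAVINGS_GWH = 2_332    # CEC estimate of yearly energy savings
YEARS = 12                    # 2019 through 2030, inclusive
ASSUMED_RATE_PER_KWH = 0.125  # assumed average California retail rate, $/kWh

total_kwh = ANNUAL_SAVINGS_GWH * 1e6 * YEARS  # GWh -> kWh
total_dollars = total_kwh * ASSUMED_RATE_PER_KWH

print(f"Energy saved: {total_kwh / 1e9:.1f} TWh")              # ~28.0 TWh
print(f"Implied savings: ${total_dollars / 1e9:.2f} billion")  # ~$3.50 billion
```

At plausible California rates, the $3.5-billion figure falls right out of the 2,332 GWh-per-year estimate.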

While Intel had no comment for DigiTimes' story, other reports point to the company being capacity-constrained for 14-nm production. SemiAccurate says that Intel's server partners apparently can't get enough supply of its Xeon Scalable chips to meet demand, so it wouldn't be a shock if the company is farming out the low-cost and potentially lower-margin H310 chipset to free up 14-nm fab time for larger, more complex processors that earn much higher profits. If these reports are correct, Intel may be under more pressure than ever to get 10-nm silicon working and shipping. The second half of 2019 apparently can't come soon enough.

Comments closed
    • Phartindust
    • 1 year ago

    Wonder if this has anything to do with it:

    [url<]https://www.tomshardware.com/news/jp-morgan-intel-cpu-shortage-hurt-pc-sales,37797.html[/url<]

    • psuedonymous
    • 1 year ago

    Going by [url=https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_sites<]Intel's fabs[/url<], they have 8 silicon fabs (excluding Fab 68, which is chalcogenide-only), of which 5 are producing 14nm dies, 2 are producing 10nm (and only one of those is 'competing' with 14nm for fab space), and 2 are exclusively focusing on 'legacy' processes.

    • willmore
    • 1 year ago

    This makes no sense to me. Yes, I hear the California power-efficiency story, but that has very little to do with the fab choice for an I/O-heavy chip.

    Chipsets were traditionally made with older processes because:
    1) They were I/O-limited (I/O doesn't benefit from smaller processes because the transistors need to be of a certain size for current-drive reasons, and that is process-independent)
    2) There is very little complex logic on the die that would benefit from the smaller process

    Packaging underscored this issue. The chipsets were pad-limited, so you could either use an area grid on a large, low-density die and a cheap package, or you could use a smaller chip with a costly organic interposer. The former chip probably had sufficient cooling through those direct-attach bumps, while the interposer version needed a heatsink, further driving up cost.

    So, the California justification seems to be a red herring. That leaves the question: why? Are they just getting rid of older fabs? Is this the classic "trade capital investment for 'rental' fees" ploy that clever accountants use to cover losses? (How the heck could Intel need to do that?)

      • DavidC1
      • 1 year ago

      I call BS on that as well.

      They are doing this because they are delayed on 10nm. Historically, their CPUs would be a node ahead, but now they're not. So everything is on 14nm.

        • uwsalt
        • 1 year ago

        The energy thing is a red herring. As you say, this is about capacity.

    • ronch
    • 1 year ago

    Not too surprising. I’d be more surprised if Intel inks a deal with GF.

    • Kougar
    • 1 year ago

    Nice to finally see chipsets getting off the older nodes, even if it is just one of them.

    The Z370 chipset die is already the same size as Intel's quad-core+IGP processors; that's no small potatoes.

    • highlandr
    • 1 year ago

    [u<]3rd[/u<] party fab, producing [u<]300[/u<] series chipsets? HALF-LIFE 3 CONFIRMED.

      • derFunkenstein
      • 1 year ago

      2005 called and asked for its meme back. +3

        • trackerben
        • 1 year ago

        It’s got a half-life of 6.5 years.

          • chuckula
          • 1 year ago

          So you’re saying it’s 25% true?
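          For anyone checking the punchline: assuming the meme dates to 2005 and this thread to 2018 (both assumptions), that's exactly two half-lives, and the standard decay formula gives that figure:

          ```python
          # Fraction of a meme remaining under exponential decay.
          HALF_LIFE_YEARS = 6.5        # trackerben's figure
          ELAPSED_YEARS = 2018 - 2005  # assumed dates: exactly two half-lives

          remaining = 0.5 ** (ELAPSED_YEARS / HALF_LIFE_YEARS)
          print(f"{remaining:.0%} true")  # -> 25% true
          ```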

    • Sahrin
    • 1 year ago

    Whoa…are we about to see Intel spin off its Fab business?

    • the
    • 1 year ago

    The real indicator of a shortage would be how Intel is treating the big cloud providers (Amazon/Microsoft/Google/Facebook), since they design/build their own equipment now and buy in bulk for their data centers. The likes of Dell/HP/Lenovo are now second tier (with the third tier being 'retail' parts for white-box builders). If the big players are feeling a crunch, then there is legitimately a shortage.

    Even then, context for the launch of Cascade Lake also needs to be considered. Skylake-SP is being replaced by the end of the year, and with it taking months for a wafer to go into a fab and come out as product, some lines would be ramping down on Skylake-SP and just starting Cascade Lake. There could easily be a bubble as Intel puts the finishing touches on the Meltdown/Spectre fixes for Cascade Lake.

    Capacity projections are complex. I would have thought Intel had an excess of 14 nm capacity right now. Years ago, they famously closed Fab 42 before it opened for 14 nm production, as they expected to meet demand without it. Then again, I suspect that at the time they had plans to have started 10 nm production by now, which would have lifted some of the existing pressure on 14 nm.

    Another factor in the supply constraints is that 450 mm wafers never became a thing. Much of the capacity increase planned for this time frame was originally calculated around those larger wafers. One thing that was expected to decrease wafer throughput at the fabs, and that did come to pass, is the increase in multipatterning. The decisions to drop 450 mm and go with multipatterning were made years ago, and their impact on wafer starts and end-chip volume was understood.

    Intel has leveraged TSMC before. Much of this work was continuing contracts from acquisitions (I believe some are still outstanding). However, Intel also teamed up with Rockchip and TSMC for mobile designs in China. While that deal wasn't long-lived, it does show that they are willing to look at alternatives/partnerships where there is a strategic advantage. So even if Intel has the capacity now, they could tap TSMC to give them flexibility as they transition fabs to 10 nm.

      • Stonebender
      • 1 year ago

      Huh? The size of the wafers has nothing to do with multipatterning. The only fab that ever stood a chance of running 450mm wafers was Fab 42, but as that's an empty shell at this point, it doesn't matter. The other fabs would never convert over to the larger wafers; the fab downtime and tooling costs would be astronomical.

        • Goty
        • 1 year ago

        I don't think he's connecting larger wafers and multipatterning; I think he's saying both the lack of 450 mm wafers and the use of multipatterning separately contributed to a reduction in 14 nm capacity.

      • pogsnet1
      • 1 year ago

      In short, Intel's 10 nm has a problem.

        • Krogoth
        • 1 year ago

        That’s an understatement.

        The 10nm process is a dumpster fire. Intel tried to prove out too many new techniques at once, and it spectacularly backfired. Intel still cannot make anything large and complicated on it.

    • blastdoor
    • 1 year ago

    I can’t quite discern what process TSMC will be using to fab these chips for Intel. The closest analog to Intel’s 14nm process would be TSMC’s 10nm process. But I don’t know if that’s absolutely necessary… perhaps TSMC’s 16nm process would be sufficient? It would be interesting to know more details.

      • uwsalt
      • 1 year ago

      Intel has generally manufactured its chipsets on process technologies that, depending on timing and needs, are one to two nodes behind its leading edge. Leading-edge performance isn't really needed for chipsets, and this helps them balance and fully utilize capacity across the fab network as new nodes are being ramped up.

        • Pwnstar
        • 1 year ago

        Sure, but that's not an issue if Intel has a different fab manufacture them.

          • uwsalt
          • 1 year ago

          I didn’t have time to elaborate when I was writing that earlier post, but I pointed out Intel’s internal practice by way of saying that Intel will almost certainly take the same approach in contracting with TSMC to fab any chipsets.

          Like Intel, TSMC operates a network of fabs. They bring new fabs online, transition load to the prior leading edge, and retool older fabs for newer or different processes, as new nodes are developed. Fabs are (very) expensive to build and operate, and they are only becoming more so. So, just like Intel, TSMC seeks to recover as much cost and squeeze as much profit out of a given node and their network as a whole. This is reflected in lower costs for customers who are willing or able to put their designs on processes that are highly mature (corresponding to better yields) and for which TSMC has already amortized or recovered most or all of the development costs.

          Intel isn’t going to pay TSMC to fab chipsets on the leading edge process for the same reason Intel doesn’t do so itself. It costs too much.
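          To put rough numbers on that cost logic, here's a toy cost-per-good-die comparison. Every figure below is an illustrative assumption, not actual Intel or TSMC pricing:

          ```python
          # Toy model of why mature nodes are cheaper per good die.
          # All inputs are illustrative assumptions, not real foundry pricing.

          def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
              """Effective cost of each sellable die from one wafer."""
              return wafer_cost / (dies_per_wafer * yield_rate)

          # Leading edge: pricey wafers (R&D still being recovered), lower yield.
          leading = cost_per_good_die(wafer_cost=8000, dies_per_wafer=600, yield_rate=0.70)

          # Mature node: amortized development costs, bigger dies, but high yield
          # and far cheaper wafers.
          mature = cost_per_good_die(wafer_cost=3000, dies_per_wafer=400, yield_rate=0.95)

          print(f"Leading edge: ${leading:.2f} per good die")  # ~$19.05
          print(f"Mature node:  ${mature:.2f} per good die")   # ~$7.89
          ```

          Even with fewer (larger) dies per wafer, the mature node wins once development costs are paid off and yields are dialed in.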

    • Krogoth
    • 1 year ago

    One Fab to rule them all, One Fab to find them,
    One Fab to bring them all and in the salty tears that bind them
    In the Land of Fanboys where the Shills lie.

      • Neutronbeam
      • 1 year ago

      Fabulous!

      • DeadOfKnight
      • 1 year ago

      Krogoth is impressed. Buy buy buy!

        • JustAnEngineer
        • 1 year ago

        I bought shares in both INTC and TSM 15 or 20 years ago. TSM has definitely appreciated more in that period.

    • Usacomp2k3
    • 1 year ago

    Treading into R&P, but if power is a limited resource, then charge accordingly at the meter. A better way to get people to use less power is to raise prices; human nature will get people to seek lower-power devices or supplemental power from solar.

      • Neutronbeam
      • 1 year ago

      There you go again with your pragmatic, rational, logical thinking–where has that ever gotten us? ;->

      • Spunjji
      • 1 year ago

      The problem is that the issue you're discussing is only visible at a macro level. At an individual level, the difference in standby power between these devices is invisible; it blends into the noise.

      It means that to incentivize buying devices with low *standby power* at the consumer level, you’d have to egregiously overcharge for electricity before those tiny amounts ever became noticeable. It wouldn’t really work.

      Better to change this on the supply side while also focusing on getting lazy-ass people to switch off and/or unplug their shit when they’re not using it.

        • Usacomp2k3
        • 1 year ago

        Exactly my point. It's stepping over dollars to pick up pennies. Something as simple as setting your home AC 5 degrees warmer during the day when no one is home is going to have a much, much larger effect on power consumption than the "vampire power" charge. Don't get me wrong, I like the idea of low idle power, but that should be at the discretion of the manufacturers. This entire article is a lesson in unintended consequences.

          • Voldenuit
          • 1 year ago

          Standby power usage was estimated to be [url=http://webarchive.nationalarchives.gov.uk/20090609033948/http://www.berr.gov.uk/files/file31890.pdf<]8% of British residential power consumption in 2004[/url<] and [url=https://web.archive.org/web/20070706120640/http://standby.lbl.gov/ACEEE/StandbyPaper.pdf<]7% of French consumption in 2000[/url<]. Granted, there have been several initiatives since then to reduce standby power consumption, but the load is non-trivial (some estimates say 1% of total power production globally). Laissez-faire capitalism will not address this issue; there is no economic incentive for manufacturers to step up their game if governments do not get involved.
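          For scale, here's roughly how those percentages come about. The device count, standby draw, household usage, and rate are all illustrative assumptions, not figures from the linked studies:

          ```python
          # Illustrative "vampire power" math; all inputs are assumptions for scale.

          DEVICES_PER_HOME = 20    # assumed always-plugged devices per household
          STANDBY_WATTS = 2.0      # assumed average standby draw per device, W
          HOURS_PER_YEAR = 8760
          RATE_PER_KWH = 0.13      # assumed average electricity rate, $/kWh
          HOUSEHOLD_KWH = 4500     # assumed annual household consumption

          standby_kwh = DEVICES_PER_HOME * STANDBY_WATTS * HOURS_PER_YEAR / 1000
          print(f"{standby_kwh:.0f} kWh/year on standby")                 # ~350 kWh
          print(f"${standby_kwh * RATE_PER_KWH:.0f}/year per household")  # ~$46
          print(f"{standby_kwh / HOUSEHOLD_KWH:.0%} of household usage")  # ~8%
          ```

          A few dollars a month per home is invisible on a bill, which is Spunjji's point, but the share lines up with the 7-8% figures above.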

          • cygnus1
          • 1 year ago

          Agreed. And I think it's also really a case-by-case question of what can actually save an appreciable amount of power for an individual.

          For instance, that AC suggestion saves nothing for me. In FL heat, with an efficient but two-level home, it costs me so close to the same to maintain 76 as it does 80 that the extra time the system runs cooling back down from 80 to 76 makes it a wash in power cost, and a negative considering the extra hours it puts on my AC system.

      • Voldenuit
      • 1 year ago

      Charging more at the meter disproportionately hits low income families. These families can’t afford to shop around for providers, they can’t afford higher bills, and they can’t afford to upgrade their devices to “save money” at the pump.

      Granted, they’re probably also not going to be able to afford newer, more power-efficient devices, but the reality is that the consumer is not necessarily the driver for innovation or change.

      If a government wants to promote a more power efficient society, sometimes you have to incentivize the big corporations to actually make power-efficient devices, and disincentivize (such as with a carbon or energy tax) wasteful producers.

        • blastdoor
        • 1 year ago

        I agree that addressing this issue through pricing is the most efficient thing to do. I also agree that something would need to be done to address the adverse effect on lower income families. These are both good points and it’s possible (in theory) to address both through policy. For example, use part of the proceeds from a carbon tax to provide income supports to low-income families.

        But there are some challenges to getting the optimal bundle of policies. One challenge is that voters can easily be demagogued on these things and there’s no shortage of demagogues. Another (related) challenge is that you really need to implement a coherent bundle of policies, not bits and pieces.

        If the R party was made up entirely of people like Romney and the D party were made up entirely of people like Obama, I think we’d get much better policy. Alas…. that’s not what we’ve got.

        • Anonymous Coward
        • 1 year ago

        It might hit low-income families harder, but misrepresenting the true cost of electricity (or any other source of energy) won't turn out well for anyone either. So push the manufacturers [b<]and[/b<] push the poor; everyone has to move towards lower power usage.

      • bthylafh
      • 1 year ago

      Sounds great as long as you’re not so poor you can’t afford to buy something decent.

      • homerdog
      • 1 year ago

      Or they could build more power plants.

        • Pwnstar
        • 1 year ago

        Exactly. Here’s hoping fusion is a hit but fission works, too.

    • blastdoor
    • 1 year ago

    I guess the good news for Intel here is that they are fully utilizing their capacity! That’s a problem many others would like to have.

      • Eversor
      • 1 year ago

      Don't read too much into it. They may be taking part of their capacity offline to transition to 10nm, akin to what happened in the 2D-to-3D NAND transition for most manufacturers.

      • HERETIC
      • 1 year ago

      Except that they probably have fabs tied up making broken CPUs with terrible yields, just so they can say, "10nm is in manufacturing."

    • chuckula
    • 1 year ago

    It’s great to see how Intel has a choice of fabs for its chips!

    This is a clear advantage over AMD that only really has TSMC as a viable option.

    #GuessWhoGetsFirstDibsAtTSMCFanboys

      • blastdoor
      • 1 year ago

      [quote<]#GuessWhoGetsFirstDibsAtTSMCFanboys[/quote<] I'll give you a hint -- it's not Intel or AMD. It's a big company.

        • chuckula
        • 1 year ago

        Nvidia doesn’t count, these aren’t GPUs!

        Or did you mean Qualcomm?

        I sure can’t think of any other company that you always post about who makes fruity phones that would be a potential TSMC customer!

          • ronch
          • 1 year ago

          Which fruity phones?? Oh, Blackberry of course!! 😀

      • tipoo
      • 1 year ago

      [quote<]#GuessWhoGetsFirstDibsAtTSMCFanboys[/quote<] So far as things look right now, Apple?

      • Spunjji
      • 1 year ago

      I don’t imagine that Intel slinging some minor chipset orders to TSMC will persuade the latter to give the former preference over AMD. They’re not really priority starts; they strike me more as the sort of thing Intel would want to have done at the lowest possible cost.

        • blastdoor
        • 1 year ago

        I’m sure that if Intel asked TSMC to use their 7nm process to make Xeons then TSMC might give Intel preference. But that’s probably not going to happen… yet.

          • chuckula
          • 1 year ago

          From the latest rumors about how Rome is going to get to 64 cores*, I’m not sure Intel would *want* to use TSMC’s 7nm process for the types of chips it wants to make.

          * Those of you expecting 4 dice with 16 cores each better change those numbers to get to 64 if the rumors are to be believed.

            • blastdoor
            • 1 year ago

            I hadn’t heard those rumors, but that sounds interesting — do you have any links or more details that you could share?

            • chuckula
            • 1 year ago

            Scroll down the thread in [url=https://twitter.com/juanrga/status/1038013646666907648<]this link[/url<] and it shows up amongst other topics. This is still definitely in rumor land, but most of the posters are pretty well established and it's currently in the retweet box on the front page of Anandtech. So not what I would call... CONFIRMED... but not completely out of the question either.

            • blastdoor
            • 1 year ago

            Interesting…

            Somehow this rings true to me. But I posit that the reason AMD went this route isn't a clear-cut case of TSMC not being able to make the bigger chip. Instead, it could be because TSMC is offering them a way to package multiple dies together that is much more efficient than the current EPYC implementation, both in terms of space and power. Better packaging could change the calculation about yields and chip size, pushing towards higher yields on smaller chips.

            It could be that beyond 14nm, Intel ends up also going to a multi-die solution for the big Xeons (presumably you are better able to speak to that than me, though!)

            • chuckula
            • 1 year ago

            Intel is certainly going multi-die in the future and it’s no secret. They have years-old patents in chiplet technology and EMIB is an extremely versatile interconnect that’s way way better than running copper traces through a PCB and cheaper than a huge silicon interposer.

            The real question is how much you are willing to sacrifice in the name of MOAR COAR instead of other advances. AMD had tendered an offer for 7nm Rome — not the older Epyc — for a supercomputer contract and flat-out lost to Cascade Lake... not even some 10nm Intel chip that isn't ready yet. [url<]https://www.nextplatform.com/2018/08/29/cascade-lake-heart-of-2019-tacc-supercomputer/[/url<]

            • blastdoor
            • 1 year ago

            Thanks for the link. But it’s not clear to me that it supports the contention that there’s anything wrong with AMD’s design choices with Rome. They specifically reference “schedules” and the need to make a decision “right now.” I read that as “Rome sounds great, but AMD is a flaky company and we don’t trust them to deliver on time.” Very reasonable, I’d say!

            There is also a reference to AVX-512, which I imagine is what you're focused on. I can easily imagine that there are cases (and maybe this is one of them, but I don't think it's clear) in which spending transistors on AVX-512 instead of on more general-purpose cores might be more appealing. But that's not always going to be true.

            • Goty
            • 1 year ago

            I’d believe AMD rumors with you being the only source before I’d believe them from juanrga…

            • the
            • 1 year ago

            With so many dies, it'd make sense to just go the chiplet route and hack off the IO into its own die.

            The bonus is that the IO scales based on socket (Epyc, TR, and AM4 need 4, 2, and 1 IO dies respectively). Cores would be on their own type of die and scale arbitrarily by n. Communication between CCXes would still need to be addressed, but that is something AMD has already hinted they're looking at for Zen 2.

            The consumer side would also benefit, as the CPU die could be placed on an interposer with a GPU die and HBM for a very fast setup.

            The downside is the cost of the interposers to pull this off.

      • Pancake
      • 1 year ago

      With volumes for mobile phones going up and up, their demand for the best nodes, and the world seemingly happy to spend $1000 on a new phone, it'll be interesting to see what's left for AMD and the smaller low-margin players in years to come.

      The little guys will be fighting for the scraps of fab capacity, and TSMC would be only too happy with that situation, as they can charge whatever the market will bear – as they should.

      In this scenario, Intel can flick a bit of their cash mountain at TSMC to alter the market dynamics. They're still getting monstrous margins on – well – basically everything they make north of an Atom, so they've got room to move. Putting the squeeze on AMD to pay more to play. Or get delayed access to the best node, or pushed to the back of the queue. Blue-balled, so to speak.

      Should be fun to watch.
