Intel to premiere 3D transistors in 22-nm fab process

In downtown San Francisco this morning, Intel gathered journalists for its "most significant technology announcement of the year." The news? Intel's 22-nm manufacturing process will be the first to employ 3D tri-gate transistors, resulting in substantial power savings and performance increases. Not only that, but we'll be seeing the fruits of Intel's labor next year with Ivy Bridge, the successor to Sandy Bridge, which will be based on the new transistor type. (Ivy Bridge was demoed running inside three different PCs at the event.)

You don't need uncomfortable goggles to see Intel's 3D transistors. As you can see above, planar transistors run current along a flat conducting channel beneath the gate, while 3D tri-gate transistors wrap the gate around three sides of a tall, narrow silicon "fin." The fin offers a larger conducting channel with greater surface area in a smaller footprint. Also, the fins poke through the oxide layer separating the gate from the silicon substrate, which leads to what Intel calls "fully depleted operation." From what I could gather, that means less current leakage and the ability for transistors to operate at lower voltages.
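To see why the fin buys extra conducting width, note that a tri-gate device conducts along both sidewalls and the top of the fin, so its effective channel width is roughly twice the fin height plus the fin width. Here's a minimal sketch of that geometry in Python, with made-up dimensions rather than Intel's actual ones:

    # Effective conducting width: planar vs. tri-gate (fin) transistor.
    # Dimensions below are illustrative only, not Intel's real geometry.

    def planar_width(layout_width_nm: float) -> float:
        """Planar device: conducting width equals the silicon footprint."""
        return layout_width_nm

    def trigate_width(fin_width_nm: float, fin_height_nm: float) -> float:
        """Tri-gate device: the gate wraps both sidewalls plus the top."""
        return 2 * fin_height_nm + fin_width_nm

    print(planar_width(60))       # 60nm of channel from a 60nm footprint
    print(trigate_width(10, 35))  # 80nm of channel from a 10nm footprint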

If you’re still scratching your head, the video below dumbs things down:

The point is, thanks to these 3D transistors, Intel says its 22-nm fab process can trim power consumption by more than 50% at the same performance level compared to the company's existing 32-nm technology. Alternatively, at low operating voltages, the new process can boost performance by up to 37%. The chipmaker mentioned a two-fold increase in transistor density, too, but credit for that doesn't belong entirely to the new transistor type.
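Back of the envelope, the 50% figure is consistent with simple dynamic-power scaling, where switching power goes roughly as capacitance times voltage squared times frequency. A quick sketch with illustrative numbers, not Intel's actual operating points:

    # CMOS dynamic switching power: P ~ C * V^2 * f. Dropping the supply
    # voltage from 1.0V to 0.7V at the same clock roughly halves the power.

    def dynamic_power(c_farads: float, v_volts: float, f_hertz: float) -> float:
        """Approximate dynamic switching power of a CMOS circuit."""
        return c_farads * v_volts ** 2 * f_hertz

    p_old = dynamic_power(1e-9, 1.0, 3e9)  # hypothetical 32nm operating point
    p_new = dynamic_power(1e-9, 0.7, 3e9)  # hypothetical 22nm at lower voltage
    print(f"Power ratio: {p_new / p_old:.2f}")  # ~0.49, a cut of just over 50%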

According to Intel, 3D transistors will help keep Moore's Law alive and kicking through the 22-nm and 14-nm process nodes. Beyond that, new inventions will be required to keep transistors shrinking and increasing in number without costs getting out of control. In the meantime, the 22-nm process ought to give Intel a competitive advantage—not just in desktops, servers, and laptops, but also in tablets and smartphones, where competition with ARM chips produced by other foundries is likely to get more heated.

Comments closed
    • abw
    • 9 years ago

    Wonder if the gullible average joe will conclude that 3D on screen
    will be better thanks to those 3D transistors…

    • jcw122
    • 9 years ago

    I wonder if this will create any sort of cooling issues

      • NeelyCam
      • 9 years ago

      Hard to say… the heat-generating channel is now more isolated from the heat-conducting substrate… on the other hand, generating less heat (due to running at lower power) helps with any cooling issues.

      Your guess is as good as mine.

    • sonofsanta
    • 9 years ago

    And thus we see the main reason why Apple have switched to Intel for their chip production, rather than just a petty move to spite Samsung.

    • NeelyCam
    • 9 years ago

    I see my invisible nemesis is busy again, thumbing down everything I say indiscriminately.

    Good to see some things never change – it’s like having a warm cuddly safety blanket.

      • Meadows
      • 9 years ago

      Yeah, and it seems I have at least one partner, usually getting here before I do.

      • maxxcool
      • 9 years ago

      or herpes …. just keeps flaring up 😡

    • pedro
    • 9 years ago

    Do we still need 2 kW PSUs?

      • PRIME1
      • 9 years ago

      We need 2

    • eitje
    • 9 years ago

    [quote<]where competition with ARM chips produced by other foundries is likely to get more heated.[/quote<]

    ba-dum-SCHHH!

    • DLHM
    • 9 years ago

    Get perpendicular.

    [url<]http://www.youtube.com/watch?v=EIzS8XV3Ol4[/url<]

      • crabjokeman
      • 9 years ago

      Exactly. As someone who grew up with video games, lots of weed, and a fairly short attention span, this Intel video bored me. I mean, shrinking a boring looking and sounding guy just makes for a smaller boring looking and sounding guy. Their video should have had 300% more colorful, trippy stuff (oh, and music).

        • MadManOriginal
        • 9 years ago

        Don’t worry, you have a place in this too as a consumer.

    • thesmileman
    • 9 years ago

    If you stack them on top they are called Tri-gates so…

    if you stack them diagonally would they be star-gates?

    🙂

      • derFunkenstein
      • 9 years ago

      Carrier has arrived.

        • maxxcool
        • 9 years ago

        Awesome reply!

    • Spotpuff
    • 9 years ago

    Mark T Bohr – worst name ever for someone about to explain scientificky stuff.

      • DancinJack
      • 9 years ago

      I respectfully [url=http://en.wikipedia.org/wiki/Niels_Bohr<]disagree[/url<].

      • NeelyCam
      • 9 years ago

      What, doesn't sound Danish enough for you?

      • dpaus
      • 9 years ago

      I imagine Neils Bohr would disagree too.

        • NeelyCam
        • 9 years ago

        Niels Bohr.

          • dpaus
          • 9 years ago

          My finger slipped. Only a real Bohr would notice.

            • NeelyCam
            • 9 years ago

            You flatter me.

            • sweatshopking
            • 9 years ago

            No, I flatter you. He’s moving in on my territory…. 🙁

            • NeelyCam
            • 9 years ago

            It’s called “competition”

        • UberGerbil
        • 9 years ago

        Niels and Werner would take him for a little walk around Copenhagen to open a shell of atomic whupass on him. Or maybe not. Hard to be certain of anything when Werner’s involved.

    • sschaem
    • 9 years ago

    The Atom Z600 is a 45nm chip, right?
    21 milliwatts idle
    1.2 watts during HD video playback / web browsing.

    Are we talking about a possible 4x reduction in power usage and a 2-3x speedup if Intel uses this tri-gate 22nm process with their SOC?

    So Intel x86 SOCs will be faster and use less power than ARM chips by the end of the year??

    If it wasn't for Apple, ARM would be gone by the end of 2012…

      • OneArmedScissor
      • 9 years ago

      Wow. Where to begin…

      Intel would actually have to have a fleshed out SoC to shrink it. Apple is not even close to the driving force behind ARM adoption. Your math is quite a bit off for your estimates of somehow massively reducing power and increasing speed, which kind of flies in the face of having to more or less pick one or the other with a new node. And nothing is stopping ARM SoCs from being made on more advanced nodes than Intel uses, as theirs are always reserved for their higher power, higher margin chips.

      Translation: Intel has accomplished a standard die shrink of an existing chip.

      There is not really anything super duper special about this. Yes, they had to do something whacky to keep Moore’s Law moving, but much as was the case with switching to hafnium gates at 45nm, that’s all they accomplished. A major change every few nodes to reach the same end result is par for the course now. At whatever comes after 14nm, they’ll have to do something else.

        • indeego
        • 9 years ago

        [quote<]There is not really anything super duper special about this[/quote<]

        Just 10 years of Intel's brightest minds working around the clock, is all. I was tri-gating well before it was popular.

          • NeelyCam
          • 9 years ago

          Are you implying you are one of those Intel’s brightest minds?

            • BiffStroganoffsky
            • 9 years ago

            Tri-gating – …or a three-way gay-thing.

            • NeelyCam
            • 9 years ago

            I don’t judge other people’s preferences.

            • BiffStroganoffsky
            • 9 years ago

            It is hard not to when they boast about it so publicly like NeelyCam did.

            • NeelyCam
            • 9 years ago

            (S)He did no such thing.

        • sschaem
        • 9 years ago

        The Z600 doesn't seem that bad: [url<]https://techreport.com/articles.x/18866[/url<]

        Other websites also show pretty good power usage across the board, and 1.5 to 3x the power of equivalent ARM chips. That's using 45nm single-gate... the next-gen Atom might jump to 22nm tri-gate + a new core design (CPU and GPU).

        I'm not saying ARM and their partners will sit idle in the next 12-24 months, but we are seeing convergence.

      • NeelyCam
      • 9 years ago

      Ignore OneArmedScissor – he has his Intel hate on, and doesn’t actually understand what’s going on here.

      Answers to your questions/comments: 3-4x active power reduction at the same performance might be realistic vs. a 45nm part. Idle power could be much lower.

      x86 SOCs won't make it to 22nm by the end of the year (at least I haven't heard any rumors). Nobody knows if the 32nm Medfield is more power efficient than ARM Cortex A9 dual-core chips. A while back, Intel claimed that Medfield was supposed to be faster and lower power than ARM, but later they started saying that they'll beat ARM at the 22nm node.

      And ARM wasn’t going anywhere, but Apple really kickstarted their growth.

        • OneArmedScissor
        • 9 years ago

        50% power reduction = expected and best case scenario result at a new node.

        50% power reduction ≠ dramatically reduced idle power, where leakage is minimal.

        This is not magic pixie dust that they can just sprinkle on Atom. Everyone else will be working on similar things and it has just about zero implications as far as Intel’s ability to beat anyone or not, as this will only serve as a mainstream CPU shrink for quite some time, and they’ll still be playing on the same field as everyone else down the road.

        The only way it would matter is if an Atom SoC was their first 22nm chip, but it's not, so that's the end of it. This is, "We beat everyone to 32nm by a year!" all over again. And here we are, a year later, only just now seeing 32nm actually being put to serious use…right alongside other foundries that manufacture for their competitors.

        And therefore, I apologize. I didn’t realize that sticking to the extremely limited facts instead of just wildly making things up is “hating,” and stopping to consider that the end result is the exact same end result there’s been the last over 9,000 means I “don’t understand.”

        You’ve still lost me with the Apple thing.

          • tfp
          • 9 years ago

          It seems the thing you are missing is that 22nm is two new nodes from 45nm. So, best case, it gets a 50% power reduction twice under load, assuming the same performance and clock speed (shrink only). So 1.2W (for HD video) would be ~0.6W at 32nm and ~0.3W at 22nm.

          Now they are quoting some amount of performance increase too, so who knows. I'm sure the chip they release at 22nm will not have ~0.3W power usage under load, but that's because they will add better graphics and probably improve a few things about the CPU itself, including clock speed. It wouldn't be unreasonable to guess double the performance and ~0.6W power usage when playing HD video on a next-gen Atom at 22nm. That should be competitive.
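          That compounding arithmetic is easy to check; here's a minimal sketch, assuming the idealized 50%-per-node reduction and the 1.2W Z600 playback figure quoted above (illustrative best case only):

          # Hypothetical compounding of ~50% power reduction per full node
          # shrink at constant performance -- an idealized best case.
          def power_after_shrinks(p_watts, nodes, reduction_per_node=0.5):
              """Power after a number of full node shrinks, all else equal."""
              return p_watts * (1 - reduction_per_node) ** nodes

          p_45nm = 1.2  # watts: Atom Z600 during HD video playback, as quoted
          print(f"32nm: ~{power_after_shrinks(p_45nm, 1):.2f} W")  # ~0.60 W
          print(f"22nm: ~{power_after_shrinks(p_45nm, 2):.2f} W")  # ~0.30 W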

          • NeelyCam
          • 9 years ago

          Dramatically reduced idle power is possible – it depends on how the transistors are tweaked. Check out “Table 1” and “Figure 2” of this abstract:

          [url<]http://www.intel.com/technology/architecture-silicon/32nm/IEDM_09_32nm_SOC_abstract.htm[/url<]

          Table 1 shows two slightly different transistors in the same process node, with very different "Ioff" currents: 100nA vs. 1nA. Notice that the "Idsat" currents aren't that different: 1.53mA vs. 1.08mA. As a result, sacrificing 30% of the performance could yield a 100x reduction in leakage.

          Nothing has been published on Intel's 22nm technology yet, but I would imagine it has a similar performance vs. leakage trade-off. The whole "37% better performance" could be sacrificed to get drastically lower leakage (reducing idle power by a huge amount). Meanwhile, load power efficiency is more a function of that Idsat. I would guess Intel is aiming to strike a balance between idle power reduction and load power efficiency improvements, so where it'll land is anybody's guess. All we know is that it'll be quite a bit better than 32nm (or 45nm).

          [quote<]This is not magic pixie dust that they can just sprinkle on Atom. Everyone else will be working on similar things and it has just about zero implications as far as Intel's ability to beat anyone or not, as this will only serve as a mainstream CPU shrink for quite some time, and they'll still be playing on the same field as everyone else down the road.[/quote<]

          Based on the commentary on this release ("low voltage", "50% power reduction", etc.), I'm not sure it'll follow the same two-year delay as in the past... it seems that everyone + dog thinks Intel is worthless if they can't break into the cell phone business. Not getting there drives the stock price down, making executives sad.

          By the way, I have a feeling that you don't truly grasp the importance of the manufacturing process lead. In a way this [b<]is[/b<] magic dust that can be sprinkled on Atom, making it far more competitive than it has a right to be. GloFo 32nm SOI products aren't out yet – compared to GloFo, Intel has at least a 1.5-year advantage (maybe more – we don't know how stable GloFo's 32nm yields are compared to what Intel had in January 2010). Compared to TSMC's 28nm (performance-wise a bit worse than Intel's 32nm), Intel has a solid two-year lead. Sure, everyone else is working on "similar" things, but they are 1.5-2 years behind and haven't proven that they can do it yet (while Intel's 22nm looks to be ready, or close to it). Also, remember how TSMC canned their 32nm because they couldn't do it? It's not a given that everyone manages to follow Intel.

          To finish this WaltC-long post: with the ARM/Apple comment I meant that ARM's business really exploded because of smartphones. Sure, you could argue that they were in every cell phone before that, but the real money started rolling in with the more expensive chips (and royalty % increases... you know, they can do that since they have a [i<]monopoly[/i<]). Apple is the singular reason why smartphones are so popular now.
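          In code form, the 30%-for-100x trade-off falls straight out of those Table 1 numbers; a quick illustrative sketch, with values copied from the abstract cited above:

          # Two transistor flavors on the same 32nm node (Table 1 of the
          # abstract linked above): fast-but-leaky vs. slow-but-quiet.
          high_perf = {"idsat_mA": 1.53, "ioff_nA": 100.0}
          low_leak  = {"idsat_mA": 1.08, "ioff_nA": 1.0}

          drive_loss   = 1 - low_leak["idsat_mA"] / high_perf["idsat_mA"]
          leakage_gain = high_perf["ioff_nA"] / low_leak["ioff_nA"]

          print(f"Drive current sacrificed: ~{drive_loss:.0%}")  # ~29%
          print(f"Leakage reduction: ~{leakage_gain:.0f}x")      # 100x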

            • OneArmedScissor
            • 9 years ago

            I’m aware of the implications for leakage, but those theoretical gains never seem to be realized due to Intel’s nature – they will not give up performance if it means someone else is just as fast. Atom may improve considerably in SoC form, but it will have to contend with multi-core ARM SoCs of architectures that aren’t even available yet.

            In general, this process does actually benefit Atom more than anything because it has a greater than typical advantage at sub-0.8v, where it would likely run at that node.

            However, that doesn’t change the underlying issue – Intel is incredibly unlikely to use this for Atom for at least a year and a half. It’s a year out for Ivy Bridge EX, and at least we know Sandy Bridge EX is coming. The first Atom SoC is still MIA. Yet again, their “lead” is effectively a year long “public beta” before we see what it can actually do.

            That begs the question – is it really a “lead” if it isn’t used to bring about something new before anyone else?

            Despite Intel releasing the first 32nm CPU a year and a half ago, AMD is now only, what, two or three months behind doing the same new things at 32nm? But Bulldozer comes out at the same time, on the same node. Intel is never that aggressive, but has still had quite a few issues as of late, implying they're not exactly taking their sweet time.

            This has become a habit of theirs and it comes across as a PR attempt to hide that not only their lead, but their iron grip of enforcing their own standards, is eroding in the face of increasingly collaborative competitors who want to go in completely different directions. Consider all the things they’ve swept under the rug in just the last two years:

            -Larrabee due in 2009 – cancelled
            -45nm dual-core Nehalem w/ IGP – cancelled
            -32nm Nehalem shrinks – cancelled
            -First to 32nm…with 32nm dual-cores and 45nm off-die memory controllers
            -Bazillion Westmere Xeon SKUs, mostly with cores disabled
            -One year later, real 32nm Sandy Bridge!
            …and ensuing weird southbridge recall
            …and rushed chipsets with screwy feature sets
            …and still no Z68 after five more months
            -Lightpeak – copper, but fiber optic “coming soon,” still no announcement…
            -25nm flash…not first, late, and after all that, no new controller
            -Sandy Bridge E – $1,000 and…dude, where’s my cores?
            -Atom Soc?!?!?

            With Windows 8 supporting ARM, more companies going fabless, and more money concentrated in fewer and rapidly advancing manufacturers, the competition is turning up another few notches. They can’t really afford to keep playing the, “We’re Intel – BUY IT ANYWAYS!” card, but they seem hell bent on it.

            • NeelyCam
            • 9 years ago

            You know, you’re right about the Atoms. The wait for a 32nm Atom has been a long one… long enough that everybody looking at that segment is going with Brazos. I don’t know what the deal is with the “Atom SOC” – Charlie @ Semiaccurate seems to believe even Moorestown was pretty hot and Medfield is even hotter (no pun intended, seriously), but nobody’s seen them in the wild. They might surprise everyone with Medfield or its successor (anybody know the code name yet?).. or be late to the party again. Who knows… I would happily welcome some competition to the cell phone chip market (and not the kind where everybody’s making clones and undercutting each other by a few cents).

            Your list of "things" is pretty long, but I have a differing view on most of them:

            [quote<]Larrabee due in 2009 - cancelled[/quote<]

            Yeah – that was a bag of fail.

            [quote<]-45nm dual-core Nehalem w/ IGP - cancelled[/quote<]

            Not "cancelled," but replaced with a 32nm dual-core Nehalem shrink w/ IGP (Westmere), whose schedule was pulled in because of great 32nm yields.

            [quote<]32nm Nehalem shrinks - cancelled[/quote<]

            Westmere came out. It was dual-core only, but still.

            [quote<]First to 32nm...with 32nm dual-cores and 45nm off-die memory controllers[/quote<]

            What was the problem with this one?

            [quote<]-One year later, real 32nm Sandy Bridge! ...and ensuing weird southbridge recall ...and rushed chipsets with screwy feature sets ...and still no Z68 after five more months[/quote<]

            The chipset issue was a bug and was fixed rapidly. A small delay – not due to their (un)willingness to use the new process. What "screwy feature set"? Lack of "native" USB3? "Only" two 6Gbps SATA ports? Z68 was never announced – just "rumored." It's still "rumored," except that it's already on the new iMacs.

            [quote<]-Lightpeak - copper, but fiber optic "coming soon," still no announcement...[/quote<]

            I don't know why people have a problem with Thunderbolt. I don't care if it's optical or not if it's fast. I wasn't gonna run 100m optical fibers around the house anyway. If anything, I think I'd prefer electrical if it comes with cheaper cables. BTW, I did see some announcement about a TB successor – 50Gbps, optical. I don't remember where, but I'm sure Google will find it.

            [quote<]25nm flash...not first, late, and after all that, no new controller[/quote<]

            Define "late"..? The last-gen controller and lack of 6Gbps SATA were a disappointment, though.

            [quote<]Sandy Bridge E - $1,000 and...dude, where's my cores?[/quote<]

            ?? You mean the engineering sample on Ebay?

            Regarding the late wide-range adoption of each new process, I completely understand why Intel's been doing it. The competition was weak, so their own last-gen chips were perfectly competitive. Making them in fully or mostly amortized fabs extends the cumulative profits from their previous investments. And fully populating the field with the latest-gen stuff would cannibalize the sales of older chips.

            I'm thinking with 22nm it'll be different. Ivy Bridge would certainly eat up SB sales, so maybe some crippled versions will be released initially (e.g., dual-cores like with Clarkdale). But since there's no "Atom SOC" on the market to cannibalize, they might pull together all the resources to push those out on 22nm... They would certainly benefit from the process advantage in order to break into that market, and Intel execs are not stupid – they know it is a lucrative growth opportunity for the company. All the announcements make it seem like they really want this. They've made changes to their approach before when it was apparent the old plan wasn't working (like Netburst). I'm hoping a 22nm SOC push is such a change... PC CPUs are fast enough now, but I wouldn't mind having a faster cell phone with longer battery life.

      • maxxcool
      • 9 years ago

      Such yummy bait… must resist.

    • ssidbroadcast
    • 9 years ago

    Proof positive that waffle fries > normal fries.

      • WillBach
      • 9 years ago

      Yeah, I saw the first picture and thought, “I need to break for lunch. It’s waffle time.” Good job, Intel. I don’t think a CPU picture has ever made me hungry before…

      • dpaus
      • 9 years ago

      Actually, 22nm < 32nm. But I know what you mean; chips are good, and waffle chips take much less power to cook, and energy efficiency is, like, you know, good, eh?

    • jstern
    • 9 years ago

    Cool. I just wish they didn't use 3D in the name. 3D makes it sound like a gimmick.

    If perpendicular hard drives came out now, they probably would have been called 3D hard drives.

      • NeelyCam
      • 9 years ago

      They also call them “Tri-Gate transistors”

        • sweatshopking
        • 9 years ago

        Hey neely. INTEL SUCKS :p

          • NeelyCam
          • 9 years ago

          yeah, Intel sucks the life out of AMD’s plans.

            • sweatshopking
            • 9 years ago

            lol pretty much. with arm rising, intel will have less and less reason to babysit AMD. I like amd, and usually buy their products, but they really won't be able to compete. Unless intel licenses this tech, or somehow they make their own, i don't see amd surviving the next 5-10 years

            • NeelyCam
            • 9 years ago

            Zero chance Intel will ‘license’ tri-gate technology to anybody… unless you meant the foundry thingy with Apple? But no – AMD won’t get this until maybe 15nm (then again, GloFo might be going with something else).

            I attended an ISSCC panel about "22nm and beyond" – the panelists were top process guys from Intel, TSMC, GloFo, Samsung, etc. They were bombarded with questions about what the companies are doing at 22nm, and nobody wanted to say anything conclusive, except that most agreed FinFETs (=tri-gates) won't be coming at 22nm. Mark Bohr didn't say anything to that – I guess now we know why.

            This means Intel has tri-gates for about three years before others. [i<]Major[/i<] advantage. Maybe that's why Intel is investing heavily in 22nm fabs... they're aiming to leverage this advantage to the max.

            • sweatshopking
            • 9 years ago

            exactly. another 5 years of no real competition.

            • BaronMatrix
            • 9 years ago

            aaah the beauty of monopoly profits…keep feeding the beast. next time our economy won’t be so lucky…

            • NeelyCam
            • 9 years ago

            Um… somehow your comment seems a bit misplaced.
