IBM, GlobalFoundries, and Samsung offer a glimpse of chipmaking’s future

It’s not every day you get a detailed peek into the future of computing from the people who are building it. Last week, I had just such a chance: I attended the Common Platform Technology Forum, the annual get-together hosted by IBM, Samsung, and GlobalFoundries. These firms are the members of the Common Platform Alliance, an unusual consortium of chipmakers.

Co-opetition: IBM, Samsung, and GlobalFoundries together

The Alliance member firms banded together over a decade ago in the face of growing challenges in the quest to achieve ever-higher densities in semiconductor production. Somewhat amazingly, this collaborative partnership has held together over time even though the firms involved sometimes compete directly against one another. The collaborators have seen their share of change, of course, most notably as AMD spun off its manufacturing arm into GlobalFoundries and, not long after that, GloFo acquired fellow Alliance member firm Chartered Semiconductor. However, the remaining entities appear to be as committed to working together as ever, with a clear sense that shared, common interests are at stake.

From left to right: Mike Cadigan (IBM), Dr. Gary Patton (IBM),

Dr. K.H. Kim (Samsung), Mike Noonen (GlobalFoundries)

I’m not aware of anything else quite like the Common Platform Alliance in the tech industry. The members include the #2 and #3 foundries in the world, Samsung and GlobalFoundries, firms that manufacture chips for other companies on contract. (TSMC holds the #1 spot.) Odds are that Samsung manufactured the SoC processor driving the smartphone in your pocket, whether it’s a Galaxy S3 or an iPhone, and if your PC has an AMD processor inside of it, that chip was likely produced by GlobalFoundries. With the rise of mobile devices and the ballooning costs of building chip factories, the foundry business has grown in market share and importance in recent years. Outside of Intel, many of the most prominent names in the chip business, from AMD and Nvidia to Apple and Qualcomm, are “fabless” semiconductor companies that must rely on outside foundries to produce the silicon they design.

IBM’s role in the Alliance is a bit different from the other two firms’. It takes the lead in the research and development of new fabrication techniques, looking ahead a decade or more to determine which avenues might provide the best path to the next reduction in transistor size at near-atomic levels. Much of this research takes place in upstate New York, at the Albany Nanotech Center, where researchers from IBM and academia partner with representatives from the foundries and equipment suppliers to develop upcoming generations of chipmaking tech. GloFo and Samsung then take IBM’s basic process technology, modify it to fit their customers’ needs, and bring it into high-volume production.

If all of that sounds complicated, that’s because it is. This collaborative effort is a mash-up of very large corporations, smaller tools providers, and academics, along with very significant involvement from multiple governments, most notably the State of New York. It is a huge, sprawling enterprise that involves a substantial chunk of the overall semiconductor manufacturing industry. Although Intel typically has been a year or two ahead of the Alliance partners in reaching new, smaller process nodes, the work done by this collaborative effort has contributed in a multitude of ways to the regular drumbeat of Moore’s Law, the founding constant of chipmaking that says the number of transistors jammed into a given space will double roughly every two years. This drumbeat has brought us a precipitous downward trend in the cost of computing, along with steady leaps in performance and reductions in power consumption.
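That two-year doubling compounds quickly, which is why its effects on cost and capability are so dramatic. A quick back-of-the-envelope sketch makes the point (the starting transistor count here is hypothetical, not a figure from the Forum):

```python
# Moore's Law sketch: transistor budgets double roughly every two years.
def transistors(start_count, years, doubling_period=2.0):
    """Project a transistor budget forward under a fixed doubling cadence."""
    return start_count * 2 ** (years / doubling_period)

# A hypothetical 1-billion-transistor chip, projected a decade out:
future = transistors(1e9, years=10)
print(f"{future / 1e9:.0f} billion transistors")  # prints "32 billion transistors"
```

Five doublings in ten years is a 32x budget, which is roughly the gap between a chip of 2003 and one of today.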

Among other benefits, Moore’s Law has given us affordable, low-power system-on-a-chip (SoC) processors that have enabled the burgeoning markets for smartphones and tablet computers. Although all three Alliance firms have stakes in other markets, most of the talk at the Tech Forum this year focused on SoCs for mobile applications, the richest trove of potential customers for the foundries. ARM had a conspicuous presence at the Forum, with an emphasis on its partnership with GloFo, while AMD was conspicuous mostly by its absence.

The presentations at the Forum offered glimpses into several stages of upcoming chip technology, from the now-imminent generation beyond today’s 32- and 28-nm silicon to truly wondrous innovations that could be a decade away from making it into real products. The prospects for the continuation of Moore’s Law have been a popular source of concern in recent years, but gloom didn’t dominate the talk at the Forum. What I saw there was both worrying, because of how much hard work remains to be done, and encouraging, given the wealth of resources being thrown at the problems and the astounding potential of technologies now being explored.

What’s next: a new transistor structure at 14 nm

Most immediately, the members of the Common Platform Alliance are working to deliver their next-gen process technologies, whose smallest feature sizes will be 20 and then 14 nm. I should say that each partner is working to deliver its piece of the puzzle, because they tend to diverge when it comes to final implementations. Several years ago, the Alliance articulated a vision of true uniformity and coordination that would allow any product to be manufactured at multiple sites—either at, say, different GloFo fabs in Dresden and New York, or perhaps even at two fabs belonging to different member companies. This year, the Alliance members were frank in admitting that the market instead asked for something else: customization, the ability to tune a solution to a customer’s specific needs. As a result, the sort of strict uniformity that would allow fab-to-fab portability is no longer an important goal for the Alliance.

Below is a slide showing GlobalFoundries’ process tech roadmap for the next little while. Mike Noonen, EVP of Sales, Marketing, Quality and Design at GlobalFoundries, presented this slide during the summit keynote. GloFo was much more forthcoming than Samsung, generally speaking, about its current status and future plans.

Source: GlobalFoundries

As you can see, the road ahead for GlobalFoundries involves a vastly simplified product offering. The firm offers a host of different process types for different market segments in the 28-nm node, but going forward, each step downward in geometry size includes only one process type. GloFo says each process can be tweaked to meet the needs of different types of devices, but the change is still notable. In the past, for instance, AMD Opterons have been fabricated on a super-high-performance process specifically tailored to achieve the high switching speeds needed for multi-gigahertz operation, using techniques like silicon-on-insulator (SOI). GloFo’s 20-nm process, dubbed 20LPM, jettisons SOI in favor of traditional bulk silicon and will serve everything from low-power mobile SoCs to high-performance CPUs. 20LPM is in testing now and is slated to be in production this year.

We are approaching the limits of conventional photolithography, the basis of modern chip fabrication, in which light is directed through a mask and onto a light-sensitive layer of material to etch the patterns that will become circuits. Making the transition to smaller geometries will require some new techniques. Most notably, the 20LPM process employs some double patterning, on the finest metal layers, in order to work around the current resolution limits of photolithography. Double patterning “cheats” by using two different masks, offset slightly, and two light exposures, to achieve a higher effective resolution—a little like interlacing on an old TV screen, only without the resulting flicker.
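To see why two offset exposures buy resolution, picture features drawn at the minimum pitch a single exposure can resolve; interleaving a second exposure shifted by half that pitch yields a combined pattern at half the pitch. A toy sketch, with purely hypothetical dimensions:

```python
# Toy model of double patterning: interleave two exposures, each at the
# minimum pitch a single exposure can resolve, offset by half that pitch.
pitch = 80  # nm, single-exposure minimum pitch (hypothetical value)
mask_a = [i * pitch for i in range(4)]      # features at 0, 80, 160, 240 nm
mask_b = [x + pitch // 2 for x in mask_a]   # second mask, offset by 40 nm
combined = sorted(mask_a + mask_b)          # 0, 40, 80, ... on the wafer
effective_pitch = combined[1] - combined[0]
print(effective_pitch)  # prints 40: the effective pitch is halved
```

The price, of course, is two masks and two exposure steps where one used to suffice, which is part of why each new node costs more to run.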

Since its beginnings, the semiconductor industry has used a mostly flat or “planar” transistor structure that has served it well, but the push to near-atomic-scale devices has led to increases in wasted power, or leakage, that threaten to wipe out the efficiency gains associated with a process shrink. In order to surmount this problem, chipmakers are turning to a transistor structure in which a thin silicon fin protrudes vertically so that it’s surrounded by the gate on three sides, offering more conductive surface area—and thus better efficiency. Intel adopted such a transistor structure, which it calls a “tri-gate transistor,” for its 22-nm process. Most of the rest of the industry, including the Alliance members, calls this sort of structure a FinFET. GlobalFoundries and the Alliance will use conventional planar transistors for their 20-nm process, and then they’ll make the transition to FinFETs at 14 nm.
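The efficiency argument for the fin comes down to geometry: a planar gate contacts only the top face of the channel, while a FinFET’s gate wraps both sidewalls and the top of the fin, multiplying the effective gate width within the same footprint. A rough sketch, using made-up fin dimensions:

```python
# Rough geometry behind the FinFET's advantage: a planar gate touches only
# the channel's top face, while a FinFET gate wraps two sidewalls plus the
# top of the fin. Fin dimensions below are invented for illustration.
fin_width, fin_height = 8, 30  # nm (hypothetical)
planar_gate_width = fin_width                   # top face only
finfet_gate_width = 2 * fin_height + fin_width  # two sidewalls + top
print(finfet_gate_width / planar_gate_width)    # 8.5x effective gate width
```

More gate area per unit of footprint means better control over the channel, and thus less leakage.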

Although the move to FinFETs is something of a landmark transition, it comes in the form of an incremental step. GloFo’s 14XM process will incorporate elements of 20LPM, building on it rather than supplanting it entirely. Like 20LPM, the 14XM process will include some double patterning, and it will serve a range of devices from low-power mobile to high-performance computing. GloFo expects 14XM to go into production in the first half of 2014. Right now, the roadmap calls for another new process, 10XM, to come online the very next year, in 2015. The Alliance’s work at 10 nm employs FinFETs and “second-generation” double patterning.

One thing we don’t know is how GlobalFoundries’ highest-profile customer will take advantage of this new process tech. With the changes in leadership at AMD has come a more conservative approach to both roadmap disclosures and process technology transitions. All we know at present is that the next-generation “Kaveri” APU, successor to Trinity and Richland and competitor to Ivy Bridge, is slated for 28-nm production late in 2013. Several of AMD’s other products, including the low-power “Temash” APU and the “Sea Islands” graphics chips, will be manufactured by TSMC at 28 nm. We’d expect future Opteron and FX processors to make the transition to 20LPM or 14XM at GlobalFoundries, but neither AMD nor GloFo is willing to talk about specifics.

Rather than discussing its cooperation with AMD, GlobalFoundries chose to highlight its partnerships with various important players in the mobile SoC ecosystem. Chief among them is ARM, whose CPU architectures are licensed by the vast majority of SoC makers. GlobalFoundries announced an expanded partnership with ARM in 2012, and at the Forum, it revealed some of the fruits of that collaboration.

Source: GlobalFoundries

Most notably, GloFo used ARM’s Cortex-A9 CPU core as a test vehicle for its FinFET-enabled 14XM fab process. Compared to the current super-low-power 28-nm process, the Cortex-A9 test chip built on 14XM exhibited considerable improvement, as the slide above illustrates. Those gains can translate into a 62% reduction in power consumption, a 61% increase in operating frequency, or some combination of the two. That’s a very healthy generational improvement, suggesting that the move to FinFETs can keep up the Moore’s Law-style cadence for at least one more generation.
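The power-versus-frequency trade follows from the classic dynamic-power relation, P ≈ C·V²·f: a better transistor can hit a target frequency at a lower supply voltage, and power falls with the square of that voltage. As a purely illustrative sketch, with hypothetical numbers that are not GlobalFoundries’ figures:

```python
# Classic dynamic CMOS power model: P = C * V^2 * f. A better transistor
# can reach the same frequency at a lower supply voltage, and power falls
# with the square of the voltage. All values below are hypothetical.
def dynamic_power(cap, voltage, freq):
    return cap * voltage ** 2 * freq

baseline = dynamic_power(cap=1.0, voltage=1.0, freq=1.0e9)
improved = dynamic_power(cap=1.0, voltage=0.7, freq=1.0e9)
print(f"power reduction: {1 - improved / baseline:.0%}")  # prints "power reduction: 51%"
```

Alternatively, a designer can hold voltage steady and spend the same transistor improvement on clock speed instead, which is why the 14XM gains are quoted as a range of trade-offs rather than a single number.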

Other GlobalFoundries partners that Noonen highlighted included Cyclos Semiconductor, whose resonant clocking technology was first deployed in AMD’s Piledriver core and contributed to its substantial power savings over the Bulldozer core made on the same process. Cyclos and GloFo are now working together to integrate resonant clocking into the ARM Cortex-A15 at 28 nm, and GlobalFoundries will make this power-saving core available to customers later this year.

Noonen also singled out Adapteva, a small processor startup that has built a simple, dual-issue RISC CPU core with integrated memory and multicore networking, intended to be implemented in massively parallel fashion. GloFo and Adapteva have built a chip at 28 nm that houses 64 CPU cores and achieves a claimed 100 GFLOPS of throughput—at only 2W of power draw and 10 mm² of die space. The two firms have partnered up to market this technology to potential foundry customers, for integration into their SoC solutions.
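Some quick arithmetic on the claimed figures puts those numbers in perspective:

```python
# Arithmetic on the claimed Adapteva figures from the article:
# 64 cores, 100 GFLOPS, 2 W, and 10 mm^2 of die area at 28 nm.
gflops, watts, area_mm2, cores = 100, 2, 10, 64
print(gflops / watts)     # 50.0 GFLOPS per watt
print(gflops / area_mm2)  # 10.0 GFLOPS per mm^2
print(gflops / cores)     # 1.5625 GFLOPS per core
```

Fifty GFLOPS per watt is the kind of efficiency that makes a tiny, simple core interesting as an SoC building block, even if each individual core is modest.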

Partnerships like these obviously move well beyond the traditional contract manufacturing model, and they are a big part of GlobalFoundries’ approach to attracting new customers.

Future process tech—the possibilities and pitfalls

The whole industry is worried, to one degree or another, about whether Moore’s Law can be maintained, along with its attendant cost, power, and speed benefits. At the end of his opening keynote speech at the Forum, Dr. Gary Patton, IBM vice president and head of the Semiconductor Research & Development Center, said straight up, “I believe CMOS scaling will continue.” Beyond that confident proclamation, though, all sorts of questions remain.

One of the big issues has to do with costs. Although it may be possible to continue pushing to smaller process geometries, doing so is becoming ever more complicated, which means more money must be poured into R&D while manufacturing methods are becoming more elaborate. The resulting higher per-transistor costs may begin to offset the gains won by cramming more transistors into a smaller area. If that effect becomes pronounced, computing could stop becoming cheaper over time at the rate we’ve come to expect. In fact, in a “fireside chat” during the Forum’s afternoon session, Dr. Handel Jones, CEO and owner of consulting firm IBS, said we’re facing this problem even at 20 and 14 nm. He suggested several possible remedies, most notably a renewed emphasis on efficient chip designs that ensure better utilization of the transistors on a die. Jones acknowledged that such efforts could mean chips take longer to design.
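The concern can be made concrete with a toy model: a shrink keeps doubling the transistors per wafer, but if the cost of processing each wafer rises nearly as fast, cost per transistor stops falling. The numbers below are hypothetical, chosen only to illustrate the squeeze Jones described:

```python
# Toy cost model: a shrink doubles transistors per wafer, but extra masks
# and double patterning push wafer-processing cost up. All numbers here
# are hypothetical, not industry figures.
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old_node = cost_per_transistor(wafer_cost=5000, transistors_per_wafer=1e12)
new_node = cost_per_transistor(wafer_cost=5000 * 1.9, transistors_per_wafer=2e12)
print(f"cost per transistor: {new_node / old_node - 1:+.0%}")  # only about -5%
```

In past generations, that per-transistor figure dropped by a large fraction with every shrink; when it flattens out, the economic engine behind cheaper computing sputters.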

Another potential means of improving the economics of chipmaking is the adoption of larger wafers. Right now, state-of-the-art fabs produce chips on round silicon wafers that are 300 mm across. The industry has been looking to 450-mm wafers as the next logical step, and Intel recently showed the first fully patterned 450-mm wafer. However, during a Q&A at the Forum, one of the IBM representatives said 450-mm wafers are still some years off, probably arriving near the end of this decade.
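The appeal of the 450-mm transition is mostly a matter of area, since each wafer pass yields proportionally more chips:

```python
import math

# Bigger wafers amortize per-wafer processing cost over more dies:
# a wafer's usable area scales with the square of its diameter.
def wafer_area(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

ratio = wafer_area(450) / wafer_area(300)
print(f"{ratio:.2f}x the area of a 300-mm wafer")  # prints "2.25x ..."
```

Two and a quarter times the area per wafer pass is a substantial cost lever, which is why the industry keeps eyeing the transition despite the enormous retooling expense.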

Semiconductor scaling faces other hurdles before the end of the decade, too. The Alliance members have articulated a fairly clear path to the 10-nm process node via double-patterning and FinFETs, but beyond that, the road could get bumpy. Dr. Patton told the crowd that moving to 7 nm with conventional lithography would require triple or quadruple patterning, which he characterized as “very expensive.”

Many folks have considered the obvious next step forward for lithography to be the use of shorter-wavelength extreme ultraviolet (EUV) light. IBM appears to be working diligently on developing EUV technology, but Patton threw cold water on the notion that EUV is a foregone conclusion. He explained that the process involves dropping molten tin at a speed of about 150 MPH inside of a tool, zapping it with a laser to broaden it, and hitting it with a “real CO₂ laser” to generate plasma; the resulting light then bounces off of about six mirrors, each with an efficiency of 6%. Patton called EUV “the biggest change in the history of the industry” and outright disputed the notion that making it feasible is now only a matter of “hard engineering work.” Instead, he said, there are still “real physics problems we have to solve.” As if those words hadn’t raised the uncertainty quotient enough, Dr. Patton then mentioned the possibility that the masks used for EUV lithography could themselves have flaws in them, and he compared finding a 30-nm defect on such a mask to searching over 10% of California’s surface area in order to find a golf ball.

Once the crowd was sufficiently terrified, Patton listed a host of other possibilities for extending semiconductor scaling, some of them involving exotic new materials and techniques. Part of his intent, I think, was to illustrate that the way forward is by no means clear, and that IBM’s research arm is exploring a host of possibilities in hopes of finding the best possible options.

Two of the most intriguing possibilities were further explored by IBM researchers in the afternoon sessions.

Dr. Mukesh Khare, Director of the Semiconductor Alliance at IBM, outlined his group’s research into silicon nanowires. Nanowires are long, thin silicon structures between about three and 20 nm in diameter, and they could serve as the building blocks for future transistors. In fact, the basic transistor structure doesn’t look too terribly different from the FinFETs of today; the nanowire takes the place currently occupied by a silicon fin. Nanowires are created by etching a long, relatively thick bar of silicon using conventional lithography and then annealing the silicon with hydrogen. The annealing process leaves behind a thinner, rounder silicon “wire” suspended above the substrate layer below it. After that, gate material is deposited all around the nanowire, even beneath it, exposing even more conductive surface area than a FinFET’s three-sided contact. There are still hurdles to overcome in making nanowires a viable manufacturing technology, but the potential is undeniable, especially since one can easily imagine how nanowire creation could be integrated into current manufacturing methods.

If they do become viable, silicon nanowires may be the last gasp of silicon-based semiconductors. According to Dr. Supratik Guha, Director of the Physical Sciences Department at IBM Research, silicon ceases to be a good material “when you come to atomic dimensions.” Seven nanometers is about as small as silicon can go; beyond that point, other materials might be superior. For the past couple of years, his group at IBM Research has been exploring the most promising alternative material: carbon—specifically, the rolled lattice structures known as carbon nanotubes.

Since their discovery, carbon nanotubes have been the subject of intensive study, in part because they can act as semiconductors. You can imagine these tiny tubes, typically about one nanometer in diameter, taking the place of a fin or nanowire in a transistor layout. Dr. Guha explained that the first carbon nanotube transistors were created around 2001, and some time later, researchers figured out how to encase a tube with gate material. In 2007, the first carbon nanotube circuit was demonstrated.

Part of the appeal here is that, as Guha put it, the “short-channel characteristics for carbon nanotubes are very, very good.” To put it simply, that means carbon nanotube-based chips could in theory have relatively low power leakage, and leakage is the #1 problem to be managed in today’s silicon chips. Guha said IBM did a simulation of a hypothetical carbon nanotube-based chip, comparing it to FinFET silicon at nodes as small as 5 nm, with promising results. At the same power density, carbon nanotubes could offer three times the performance of silicon FinFETs—or at equivalent performance, they could require one-third the power.
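Those two framings of the simulation result are really one claim, as a line of arithmetic shows: both reduce to energy per operation at a third of silicon’s (the unit values below are arbitrary placeholders, not IBM’s data):

```python
# IBM's simulated claim is self-consistent: "3x the performance at the same
# power" and "1/3 the power at the same performance" both reduce to a single
# number: energy per operation at one-third of silicon's. The baseline
# values here are arbitrary normalized units, not real measurements.
silicon_perf, silicon_power = 1.0, 1.0

energy_same_power = silicon_power / (3 * silicon_perf)  # 3x performance case
energy_same_perf = (silicon_power / 3) / silicon_perf   # 1/3 power case
print(energy_same_power == energy_same_perf)  # prints True
```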

Although carbon nanotubes hold promise, incorporating them successfully into some variant of today’s chip manufacturing methods will be no small feat. Carbon nanotubes are “grown” in a lab using a chemical process, and they exit that process with impurities—some portion of the resulting nanotubes are metallic conductors. The metallic nanotubes must be culled from the rest, so that a pure batch of semiconducting nanotubes remains. The goal then is to deposit the nanotubes in a layer atop a traditional silicon wafer and then, somehow, to align them into a precise, regular layout, so the wafer can be patterned using lithography.

Amazingly, researchers have made tremendous progress with each of these challenges. Dr. Guha said the IBM team can now sort nanotubes well enough to achieve 99.9% purity using an automated, parallel, electrical sorting method. There’s still work to be done to achieve the “four or five nines” of purity needed for production, but Guha believes the purity challenge will be solved.

The next challenge seems even more daunting: somehow, to arrange the nanotubes in a regular, predictable fashion on top of a silicon wafer. The answer IBM researchers are pursuing uses a bit of dark magic known as directed self-assembly (DSA), in which nano-scale materials are coaxed into forming ordered structures. Already, Guha reported, they are able to align about 10 nanotubes per micron with consistency. That ability has led to another breakthrough: the chip-scale fabrication of positioned carbon nanotube-based devices. Last year, IBM researchers demonstrated a chip with 10,000 carbon nanotube devices onboard, produced using techniques similar to silicon processing. As I mentioned above, IBM has developed a process for encasing carbon nanotubes with gate materials, much as they do with silicon nanowires. The result is an all-around carbon nanotube FET in which the gate contact is self-aligned properly to the source and drain.

So yes, researchers are making considerable progress toward using carbon nanotubes in chips. What they have now, though, is only a beginning. The test chips give them the ability to run statistical analysis, so the difficult work of increasing alignment precision and reducing defects can commence. Dr. Guha expects to see contributions from multiple disciplines coming into the effort, helping to sort out problems of chemistry, the physics of quantized systems, and the behavior of materials at atomic scales. He expressed confidence that the Common Platform team “has the horsepower to make it happen.” If the effort succeeds, we could see workable carbon nanotube-based chip fabrication technology somewhere in the 2019-2022 time frame.

If chips can be produced with CNTFETs, they’ll face challenges on other fronts, most notably in scaling the interconnects used to move data around. The metal wires used now may not scale down much further without developing serious problems with performance and reliability. We’ve known about this issue for a while, of course; even full-scale networks now use optical links for the highest transfer rates. Dr. Patton offered a bit of hope in his keynote by pointing to an emerging technology, nanophotonics, as a potential chip-level interconnect solution. He showed an example of a nanophotonic waveguide integrated into a CMOS logic circuit. Patton claimed the device can transfer 25 GB/s and is “very cost effective.”

Ultimately, Patton envisions chips with multiple layers stacked on top of one another in 3D: a photonics plane, a memory plane, and a logic plane. A chip built in this fashion, he said, could have 300 cores, 30GB of embedded DRAM, and “incredible bandwidth” to tie it all together. One gets the impression that he intends to help make it happen.

If that isn’t an antidote to the gloom coming from some quarters, I don’t know what is.

Comments closed
    • shank15217
    • 8 years ago

    Basic research is awesome.. pushing all boundaries, it was a delight just to read this article. Its funny how you hear about how Apple will take over the world with their idevices but one read of this articles shows just where the frontiers of sciences are.. IBM Research develops CNTFETs and Apple patents the rounded square. I’m not trying to insult Apple however its pretty obvious that there are movers and shakers in this industry that really push boundaries with a lot of basic knowledge that’s NOT patented and people need to be educated.

    • CBHvi7t
    • 8 years ago

    “We are approaching the limits of conventional photolithography, the basis of modern chip fabrication, in which light is directed through a mask and onto a light-sensitive layer of material to etch the patterns that will become circuits. Making the transition to smaller geometries will require some new techniques. Most notably, the 20LPM process employs some double patterning, on the finest metal layers, in order to work around the current resolution limits of photolithography.”

    That makes it sound as if this is new. Multi-patterning, phase-shift-mask and other “tricks” have been used for years. The 90nm (~2004) node was the last node that did not make use of very complicated tricks.

    • jessterman21
    • 8 years ago

    The more I look at those guys on those stools, the more I laugh.

    Completely unrelated, but check out the shout out to Scott on TH! [url]http://www.tomshardware.com/reviews/gaming-processor-frame-rate-performance,3427-2.html[/url]

    • echo_seven
    • 8 years ago

    Does anyone know how they prevent transistors from getting damaged or displaced (by jostling or vibration from physical handling) when the feature sizes are so small? Might be a dumb question (since it would be true for previous processes as well).

      • MadManOriginal
      • 8 years ago

      Hmm…they don’t I guess? Because it’s a solid piece of silicon, the transistors can’t really move once created.

      • Anonymous Hamster
      • 8 years ago

      During manufacturing, the wafers are very sensitive, and handling is typically by robots under very tightly controlled conditions. As mentioned, though, chip manufacturing is overall an additive process, where layers of material are deposited over existing layers, with the top layer having only metal contacts and insulator left exposed (typically). The result is a solid package. There are various techniques for depositing the added layers, though, so they certainly must consider the proper technique to use for what is being covered with what material.

        • cjb110
        • 8 years ago

        Also the fabs are some of the cleanest places on the planet, far cleaner than any operating room…as obviously a stray spec of dust (which is about 100-1000 times larger than transistors) would balls things up!

      • willmore
      • 8 years ago

      The last step in processing an IC is called ‘passivation’. That process covers the whole chip with a thick layer (except for the I/O pads, of course) of SiO2–think quartz crystal. At the nano-scale proportions of these devices, it’s very very strong. The chip as a whole may crack, but individual transistors (and other features) are safely encapsulated.

        • echo_seven
        • 8 years ago

        Ah, I see.

    • MNRHT
    • 8 years ago

    Great article

    • link626
    • 8 years ago

    That guy is giant.

    makes the korean guy look miniscule.

      • Anonymous Hamster
      • 8 years ago

      Part of it is that his chair is a bit closer to the camera.

    • willg
    • 8 years ago

    Piledriver shipping in Trinity/Vishera doesn’t have resonant clock mesh technology from Cyclos. Richland might, Kaveri will from what I’ve read.

      • NeelyCam
      • 8 years ago

      Trinity had a resonant clock mesh from [i]somebody[/i], right...? AMD talked about it in ISSCC last year

        • willg
        • 8 years ago

        ” Originally, this story contained a page exploring resonant clock mesh technology, from Cyclos Semiconductor, which was expected to surface in AMD’s Piledriver-based SoCs. Upon discussing this in greater depth with the company, however, “the timing of the products and the implementation of resonant clock mesh caused [the technology] to not be productized with “Piledriver” based processors.” As such, we’ve removed that page to avoid any confusion.”

        [url]http://www.tomshardware.com/reviews/fx-8350-vishera-review,3328.html[/url]

          • NeelyCam
          • 8 years ago

          Interesting – I didn’t know. Thanks for the info. So, they were blowing smoke in ISSCC, then..

    • TO11MTM
    • 8 years ago

    Everything is new under the sun; I see this easily going about as well as AIM did in the 90s…

    • Sam125
    • 8 years ago

    Thanks for the good morning reading, Scott. It’s always cool to stay abreast of technology that makes its way from academia to the corporate lab into high volume production and finally to the consumer. It makes me feel all warm and fuzzy inside knowing that such bright minds are being put to such good and productive use! Well, maybe not warm and fuzzy but decidedly optimistic that tomorrow will be a much better place to exist than today.

    • Firestarter
    • 8 years ago

    So, AMD and GloFo expect the 20LPM and 14XM processes to scale up into the 4ghz range, as well as down to ~1watt?

      • CBHvi7t
      • 8 years ago

      not in the same product obviously.

    • ronch
    • 8 years ago

    In the picture, the GF marketing guy must be thinking to himself, “Damn it, I wish I took something tech-oriented instead of marketing so I could talk to these guys… I feel so out of place here.”

      • willmore
      • 8 years ago

      You might want to check out his background before you think he’s some MBA in a suit. He’s actually an engineer with a number of patents to his name. He started his career as an FAE, which isn’t for the weak.

    • ronch
    • 8 years ago

    It’s interesting to see all these industry giants banding together to try and bring some competition to Intel – IBM in particular – and yet still… Intel has been in the lead since who knows when. Also, all these research projects on what they can use after silicon (e.g. quantum computing) seem promising, but they really don’t have much to show for it. I mean, we keep reading about new stuff such as new materials, but really, how many of them actually end up in our hands? Quantum computing, for instance, has been hyped all the way to Mars for years, but to this date there aren’t really any consumer-grade products out there that use the technology.

    Back in the 90’s we had so many different CPU makers in the x86 bandwagon, as well as other ISAs, and in a lot of ways that was the Golden Age of computing. The thing is, if that was the Golden Age, then this age must be Platinum. CPUs are very cheap today and offer computing performance way beyond what we thought we can put inside a desktop PC case back then. We have Moore’s Law to thank for that as well as the incredible efforts of those involved in the chipmaking and design business. But as the article says, when the cost of advanced manufacturing becomes prohibitive enough to offset any savings achieved by using finer processes, then the fun stops. Stop shrinking and you stop being able to cram in more computing resources (or cores, if you will, since CPU designers seem to just give us more cores anyway instead of still being able to improve their cores in big ways), or perhaps designers need to learn how to design processors more efficiently (stop using automated design tools, perhaps?).

      • Flying Fox
      • 8 years ago

      It is probably not a surprise that the alliance was able to hold it together for so long: the common threat of the 2 other 800-pound gorillas – Intel and TSMC (or just Intel? Does TSMC do their own R&D?).

      • Third_Eye
      • 8 years ago

      Just to give you a background, the IBM Technology Alliance(ITA) has been there since early to mid 2000s. All of them have the following in common.
      a) IBM would based on its patent rich history and heritage lead the research side
      b) The other partners would place their Engineers and process experts at IBM research facility
      c) They will decide on the common approaches like Gate First vs. Gate Last, FinFET vs. Planar at a particular node.
      d) The partners will implement and commercialize the technology.
      e) The implementation is different is each of the partner’s fabs, but the common research binds them together.
      f) Even though IBM is the head honcho in RnD base, its other partners actually are the ones that commercialize the technology and develop products before IBM.

      The terms of membership, licensing fees, etc. are not public.

      Earlier, the consortium was sub-divided into two camps, and not surprisingly they would put out different press releases on technology progression.

      [b<]IBM SOI Option Camp[/b<] (ISOC): [i<]IBM, AMD[/i<]

      The chips produced by this camp were to be made on SOI wafers. AMD actually drank the IBM Kool-Aid and oriented itself to produce mass-market products on SOI technology. There were a lot of "commercialization" issues that AMD had to fight through itself. In a direct fight with Intel, AMD had to scurry its SOI node up to the latest and greatest Intel had in bulk to stay competitive. IBM had no such compulsion, as it used the process only to fab its Power server MPUs and, in later stages, the console chips.

      [b<]IBM Bulk Option Camp[/b<] (IBOC): [i<]IBM, Samsung, ST Micro, Chartered Semi, Renesas, Toshiba, Freescale[/i<]

      The chips produced by this camp were to be made on bulk wafers. Since they had more members and minds, in theory they could resolve any issues quicker, and that is what it slowly turned out to be. Once AMD spun off its fabs into The Foundry Company (TFC), the number one thing TFC did was become a member of the IBOC. Later, with the acquisition of Chartered Semi, GlobalFoundries, as it was by then known, sealed the IBOC deal. In 2012, UMC joined this alliance for 20nm and below so as to facilitate Qualcomm investment. Effectively, all the members now had bulk, while SOI remained an option for just two of them.

      The ITA then started to take common decisions over the last couple of years:
      a) They would choose which kind of option to use at a particular node.
      b) Like 28nm, 20nm will be bulk HKMG-based. No equivalent SOI processes for those from the alliance.
      c) So for some time, the best ISOC process would be the 32nm PD-SOI process at GloFo Dresden.
      d) SOI will be re-visited at 16/14nm with FinFETs, etc.

      While the consortium came up with the recommendations, the individual partners could go ahead and do something on their own. Unable to convince the consortium to have an SOI roadmap at 28nm, ST Micro went its own way and came up with its 28nm FD-SOI technology.

      While STM is very much a part of the ITA, its FD-SOI is not. And since STM does not have the money or fab capacity to monetize its research, it is desperately trying to rope in the current SOI leader, GlobalFoundries (with its deep pockets from the Arab SWF), to do the commercialization.

      But there was something else new in the 2013 forum that Scott broke here, and it actually explains my question all along. IBM, Samsung, and GlobalFoundries, besides being the premier ITA members, also tried to form an IBM Common Platform (ICP). So they would not only share the same technology, but also standardize their process and manufacturing to a common level so that [b<]a client of one of the 3 can easily, with minimal changes, utilize the other two's fabs due to similar process steps[/b<] if demand were to exceed supply. So I had thought that AMD would have at least 3 fabs to choose from (GF, Samsung, and IBM, though IBM is not the latest and greatest with fabs) once it chose a standard process technology from the ICP. Effectively, that common process standardization is no longer the objective. That partially explains why AMD keeps signing contracts with GF in spite of their roller-coaster relationship.

      To see just how little influence IBM Micro has in manufacturing: last year, Samsung and GloFo put their feet down strongly enough to convince IBM to do a 180 against its own thinking and take a gate-last approach at 22/20nm.

        • NeelyCam
        • 8 years ago

        +1. A bit long, but I’m glad I read it – a great summary with bonus info.

    • MadManOriginal
    • 8 years ago

    [quote<]...the 20XM process employs some double patterning...[/quote<] 20LPM?

    • chuckula
    • 8 years ago

    [quote<]Noonen also singled out Adapteva, a small processor startup that has built a simple, dual-issue RISC CPU core with integrated memory and multicore networking, intended to be implemented in massively parallel fashion. GloFo and Adapteva have built a chip at 28 nm that houses 64 CPU cores and achieves a claimed 100 GFLOPS of throughput—at only 2W of power draw and 10 mm² of die space. The two firms have partnered up to market this technology to potential foundry customers, for integration into their SoC solutions.[/quote<] This is nowhere near as impressive as it sounds, considering Intel had the Terascale Polaris test chip running 1 teraflop in a 62-watt power envelope back in 2007, on a 65 nm process (http://en.wikipedia.org/wiki/Teraflops_Research_Chip#Energy_efficiency)
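For reference, the efficiency figures quoted above work out like this (a quick back-of-the-envelope sketch in Python; all numbers are the ones claimed in the article and the Polaris citation, not independently verified):

```python
# Performance-per-watt comparison using the figures quoted above.
adapteva_gflops = 100.0  # claimed throughput of the 64-core 28 nm chip
adapteva_watts = 2.0     # claimed power draw

polaris_gflops = 1000.0  # Intel Terascale/Polaris: ~1 teraflop in 2007 (65 nm)
polaris_watts = 62.0     # power envelope cited above

adapteva_eff = adapteva_gflops / adapteva_watts  # GFLOPS per watt
polaris_eff = polaris_gflops / polaris_watts

print(f"Adapteva: {adapteva_eff:.1f} GFLOPS/W")          # 50.0
print(f"Polaris:  {polaris_eff:.1f} GFLOPS/W")           # 16.1
print(f"Ratio:    {adapteva_eff / polaris_eff:.1f}x")    # 3.1
```

On paper the newer chip is roughly 3x more efficient, but that gap spans two-plus process generations, which is part of chuckula's point.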

    You ever wonder why Intel didn’t just drop that chip on the market before GPGPU got to where it is today? It’s because raw computing power isn’t the biggest issue and hasn’t been the biggest issue in over 10 years.

    The bigger issues are:
    1. Keeping all of those cores fed with useful code & data instead of just spinning while the memory subsystem tries to catch up (guess where at least 80% of the engineering time & effort on the Xeon Phi went, hint: it wasn’t put into making the vector units, those are comparatively easy).

    2. Getting software code that can actually take advantage of massively parallel cores to do something other than embarrassingly parallel benchmarks that look great on powerpoint slides but don’t buy you much in the real world.
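Issue #1 above is essentially an arithmetic-intensity argument, often drawn as a "roofline": peak FLOPS only matter if the memory subsystem can feed the cores. A minimal sketch, using entirely hypothetical numbers (none of these figures come from the article):

```python
# Simple roofline-style check: is a kernel compute-bound or memory-bound?
# All numbers are hypothetical, chosen only to illustrate the idea.
peak_gflops = 100.0  # peak compute throughput of the chip (GFLOPS)
mem_bw_gbs = 10.0    # sustained memory bandwidth (GB/s)

def attainable_gflops(flops_per_byte):
    """Attainable throughput for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# A streaming kernel doing ~0.25 FLOP per byte (e.g. a vector add) is starved:
print(attainable_gflops(0.25))  # 2.5 GFLOPS: memory-bound, 2.5% of peak
# A dense-math kernel at ~16 FLOP/byte can actually reach the peak:
print(attainable_gflops(16.0))  # 100.0 GFLOPS: compute-bound
```

The gap between those two cases is why so much engineering effort goes into caches, interconnects, and prefetching rather than into the vector units themselves.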

      • NeelyCam
      • 8 years ago

      [quote<]You ever wonder why Intel didn't just drop that chip on the market before GPGPU got to where it is today?[/quote<] Research != product. It probably takes 5-10 years for the research folks to convince product folks that their stuff is something worth pursuing. Intel had a 20Gb/s electrical I/O in ISSCC 2006 (at 90nm), yet in 2013 products are still in the 10Gb/s regime (ThunderBolt).. [url<]http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1696056[/url<]

      • WillBach
      • 8 years ago

      It is exactly as impressive as the product it ships in. One deci-Polaris reduced to a two-watt IP block lacks the whiz-bang of Polaris expanded to a 230-watt general-purpose add-in card, but it could still be significant.

      • MNRHT
      • 8 years ago

      I think they’re both impressive; no need to turn it into a one-winner game. Let’s see adoption of both and then benchmark them — that’s the only true test. Interesting technical points — I’d be curious to see how you arrived at them, especially your bigger issue #1.

      The thing that’s disappointing right now is that the Xeon Phi isn’t being commercially released. I know there are a lot of enthusiasts (me, for one) who would love to get their hands on one. I’m sure there’s probably a reason for this, limited yields, limited availability, etc., but still, it would be nice to hear some of that reasoning.

      As far as the software side goes, it’s not even a question of taking advantage of massive parallelism; software still doesn’t use even basic parallelism. Even worse, most of the primary user applications now, like Google, don’t use any of the onboard computing power. It’s a little frustrating to get your hands on a new ultrabook and have nothing mainstream, other than games, that can actually tax the system. There are huge opportunities in software right now that people just aren’t taking advantage of.

      • WaltC
      • 8 years ago

      [quote<]You ever wonder why Intel didn't just drop that chip on the market before GPGPU got to where it is today? It's because raw computing power isn't the biggest issue and hasn't been the biggest issue in over 10 years.[/quote<] You're forgetting the most important thing of all: if you cannot mass-produce a test chip at acceptable yields, you are left with a PR chip and not much else. I.e., if you can't *make* the thing in quantity, you cannot bring it to market. Test chips not brought to market rate a 0 in my book...;)

    • NeelyCam
    • 8 years ago

    [quote<]I believe the Platform's 20-nm node will be the first widespread use of double-patterning in an advanced process. Intel has been making 22-nm chips for over a year now, but it hasn't yet resorted to double-patterning.[/quote<] Intel has been using double patterning for a while now: [url<]http://en.wikipedia.org/wiki/Multiple_patterning#Intel[/url<] [quote<]"Intel has been using double patterning in its 45 nm as well as its 65 nm technology."[/quote<] Also, [quote<]Those gains can translate into a 62% reduction in power consumption, a 61% increase in operating frequency, or some combination of the two. That's a very healthy generational improvement[/quote<] That's two generations [i<]and[/i<] going from planar to FinFET.

      • Damage
      • 8 years ago

      Interesting about Intel and double patterning. Hadn’t realized. I nixed that bit. 🙂

      And I consider switching to FinFETs a *part* of the generational change. Also, seems like 20/14nm fit together like 32/28nm. Not sure you could call it two full generations between them.

        • NeelyCam
        • 8 years ago

        Usually, a “generation” brings roughly a 50% area reduction. For instance, going from 28nm to 20nm is a “one-node” process shrink, as is going from 20nm to 14nm: (20/28)^2=(14/20)^2=0.5 (approximately).

        32nm-to-28nm is closer to what they used to call a “half-node” shrink, but yeah – those two are somewhat equivalent. And it’s all semantics anyway – a 28nm transistor doesn’t necessarily have a 28nm gate length, the contacted poly pitch might not shrink as much as the ‘node name’ seems to promise, etc.
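        The area math behind that full-node vs. half-node distinction is straightforward (a quick sketch; as noted above, the node names are labels, so this is the idealized scaling, not what any real process delivers):

```python
# Ideal area scaling between process nodes: linear dimensions shrink by
# new/old, so area (and thus transistor density) scales by (new/old)^2.
def area_scale(old_nm, new_nm):
    return (new_nm / old_nm) ** 2

print(round(area_scale(28, 20), 2))  # 0.51: ~half the area, a full node
print(round(area_scale(20, 14), 2))  # 0.49: again ~half, another full node
print(round(area_scale(32, 28), 2))  # 0.77: only ~23% smaller, a half-node
```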

          • Damage
          • 8 years ago

          Right. 🙂

    • NeelyCam
    • 8 years ago

    [quote<]From left to right: Mike Cadigan (IBM), Dr. Gary Patton (IBM), Dr. K.H. Kim (Samsung), Mike Noonen (GlobalFoundries)[/quote<] Soo... three semiconductor experts and one marketing guy. Who doesn't belong? Shouldn't GloFo get their process figured out first before they start sending marketing guys promoting it? Couldn't they send an engineer to a forum like this? Or are all the engineers fired now, and only marketing guys remain?

      • ronch
      • 8 years ago

      Even when separated, GF and AMD still have a few things in common. They always bring out the marketers.

      • keltor
      • 8 years ago

      Mike Noonen was an FAE and he’s got a number of patents to his name; he’s not some clueless marketing droid.

        • NeelyCam
        • 8 years ago

        He was an FAE for three years (1985-1988), right after graduating with a B.Sc. He’s been in sales/marketing/business ever since. He got his name on four patents, each with a bunch of co-inventors, during the last years of his tenure as a Vice President of Business Development at 8×8 (ten years after being an FAE)… which makes me wonder if his name is on them because he was the inventor, or because he was the boss of the inventor.

        Regardless, his semiconductor experience is nowhere near that of the other three in the forum. He’s with GloFo mainly because of his sales/marketing/business experience, and that’s fine, but in this forum he’s the marketing guy – not the semiconductor expert.

          • Sam125
          • 8 years ago

          This might come as a surprise to you, but there are more than a few executives and CEOs at tech companies who started off as hardcore engineers or working in the lab. It’s just that being a numbers person [i<]and[/i<] good with people is kind of a valuable talent, which is why some techies make the career switch early on.

            • NeelyCam
            • 8 years ago

            Sure, I agree with that. All I’m saying is that this guy has been a sales/marketing/business guy for 25 years, while the others spent time in hard-core semiconductor engineering/research. If this forum was meant to be focused on technology, he’s simply out-of-place.

            Note also that when he was an engineer, his work had pretty much nothing to do with semiconductors. When these people are discussing the pros and cons of gate-first vs. gate-last, FinFET vs. bulk, low leakage vs. high performance, etc., Mike Noonen can’t have much to bring to the table. As is also clear from the article, his role there was that of a sales/marketing guy: selling GloFo’s technology, talking about partners and the features they offer…

            I’m not saying he’s not a smart guy – you kind of have to be to climb the ladder that high (unless, of course, you’re [b<]pure evil[/b<]).
