Future process tech—the possibilities and pitfalls
The whole industry is worried, to one degree or another, about whether Moore's Law can be maintained, along with its attendant cost, power, and speed benefits. At the end of his opening keynote speech at the Forum, IBM vice president and head of the Semiconductor Research & Development Center, Dr. Gary Patton, said straight up, "I believe CMOS scaling will continue." Beyond that confident proclamation, though, all sorts of questions remain.

One of the big issues has to do with costs. Although it may be possible to continue pushing to smaller process geometries, doing so is becoming ever more complicated, which means more money must be poured into R&D while manufacturing methods grow more elaborate. The resulting higher per-transistor costs may begin to offset the gains won by cramming more transistors into a smaller area. If that effect becomes pronounced, computing could stop getting cheaper at the rate we've come to expect. In fact, in a "fireside chat" during the Forum's afternoon session, Dr. Handel Jones, CEO and owner of consulting firm IBS, said we're facing this problem even at 20 and 14 nm. He suggested several possible remedies, most notably a renewed emphasis on efficient chip designs that ensure better utilization of the transistors on a die. Jones acknowledged that such efforts could mean chips take longer to design.

Another potential means of improving the economics of chipmaking is the adoption of larger wafers. Right now, state-of-the-art fabs produce chips on round silicon wafers that are 300 mm across. The industry has been looking to 450-mm wafers as the next logical step, and Intel recently showed the first fully patterned 450-mm wafer. However, during a Q&A at the Forum, one of the IBM representatives said 450-mm wafers are still some years off, probably arriving near the end of this decade.
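To put a rough number on why bigger wafers help, here's a quick back-of-the-envelope sketch in Python. It compares only raw wafer area; it ignores edge losses, per-wafer processing costs, and everything else that makes the real economics more complicated.

    import math

    def wafer_area(diameter_mm):
        """Area of a round wafer, in square millimeters."""
        return math.pi * (diameter_mm / 2) ** 2

    # Die candidates scale roughly with wafer area, so the jump from 300 mm
    # to 450 mm offers about 2.25x the real estate per wafer processed.
    ratio = wafer_area(450) / wafer_area(300)
    print(f"450-mm vs. 300-mm wafer area: {ratio:.2f}x")   # prints 2.25x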

Semiconductor scaling faces other hurdles before the end of the decade, too. The Alliance members have articulated a fairly clear path to the 10-nm process node via double-patterning and FinFETs, but beyond that, the road could get bumpy. Dr. Patton told the crowd that moving to 7 nm with conventional lithography would require triple or quadruple patterning, which he characterized as "very expensive."

Many folks have considered the obvious next step forward for lithography to be the use of shorter-wavelength extreme ultraviolet (EUV) light. IBM appears to be working diligently on developing EUV technology, but Patton threw cold water on the notion that EUV is a foregone conclusion. He explained that the process involves dropping molten tin at a speed of about 150 MPH inside a tool, zapping each droplet with a laser to broaden it, and hitting it with a "real CO₂ laser" to generate a plasma, whose light then bounces off of about six mirrors, each with an efficiency of 6%. Patton called EUV "the biggest change in the history of the industry" and outright disputed the notion that making it feasible is now only a matter of "hard engineering work." Instead, he said, there are still "real physics problems we have to solve." As if those words hadn't raised the uncertainty quotient enough, Dr. Patton then mentioned the possibility that the masks used for EUV lithography could themselves have flaws in them, and he compared finding a 30-nm defect on such a mask to searching over 10% of California's surface area in order to find a golf ball.
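Part of what makes the light budget so brutal is that those mirror losses compound multiplicatively. Here's a minimal sketch that simply multiplies out a six-bounce optical path for a few illustrative per-mirror efficiencies, including the figure quoted above; the real optics are considerably more involved than this.

    # Illustration only: light that must reflect off N mirrors in series
    # arrives attenuated by (per-mirror efficiency) ** N.
    mirrors = 6
    for efficiency in (0.06, 0.30, 0.70):    # illustrative values
        throughput = efficiency ** mirrors
        print(f"per-mirror {efficiency:.0%} -> overall throughput {throughput:.2e}")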

Once the crowd was sufficiently terrified, Patton listed a host of other possibilities for extending semiconductor scaling, some of them involving exotic new materials and techniques. Part of his intent, I think, was to illustrate that the way forward is by no means clear, and that IBM's research arm is exploring a host of possibilities in hopes of finding the best possible options.

Two of the most intriguing possibilities were further explored by IBM researchers in the afternoon sessions.

Dr. Mukesh Khare, Director of the Semiconductor Alliance at IBM, outlined his group's research into silicon nanowires. Nanowires are long, thin silicon structures between about 3 and 20 nm in diameter, and they could serve as the building blocks for future transistors. In fact, the basic transistor structure doesn't look too terribly different from the FinFETs of today; the nanowire simply takes the place currently occupied by a silicon fin. Nanowires are created by etching a long, relatively thick bar of silicon using conventional lithography and then annealing the silicon in hydrogen. The annealing process leaves behind a thinner, rounder silicon "wire" suspended above the substrate layer below it. After that, gate material is deposited all around the nanowire, even beneath it, wrapping more of the channel's surface than a FinFET's three-sided fin contact. There are still hurdles to overcome in making nanowires a viable manufacturing technology, but the potential is undeniable, especially since one can easily imagine how nanowire creation could be integrated into current manufacturing methods.
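To illustrate that wrap-around advantage, here's a toy comparison in Python with made-up dimensions (the numbers are mine, purely for illustration): a gate-all-around nanowire covers the channel's entire cross-sectional perimeter, while a FinFET's gate leaves the fin's base uncovered.

    import math

    # Gate-all-around nanowire: the gate encircles the full circumference.
    d = 10.0                          # assumed nanowire diameter, nm
    wire_perimeter = math.pi * d
    wire_gated = wire_perimeter       # the whole surface is gated

    # FinFET: the gate covers the top and two sides, but not the fin's base.
    w, h = 10.0, 30.0                 # assumed fin width and height, nm
    fin_perimeter = 2 * w + 2 * h
    fin_gated = w + 2 * h

    print(f"nanowire gate coverage: {wire_gated / wire_perimeter:.0%}")   # 100%
    print(f"FinFET gate coverage:   {fin_gated / fin_perimeter:.0%}")     # ~88%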

If they do become viable, silicon nanowires may be the last gasp of silicon-based semiconductors. According to Dr. Supratik Guha, Director of the Physical Sciences Department at IBM Research, silicon ceases to be a good material "when you come to atomic dimensions." Seven nanometers is about as small as silicon can go; beyond that point, other materials might be superior. For the past couple of years, his group at IBM Research has been exploring the most promising alternative material: carbon—specifically, the rolled lattice structures known as carbon nanotubes.

Since their discovery, carbon nanotubes have been the subject of intensive study, in part because they can act as semiconductors. You can imagine these tiny tubes, typically about one nanometer in diameter, taking the place of a fin or nanowire in a transistor layout. Dr. Guha explained that the first carbon nanotube transistors were created around 2001, and some time later, researchers figured out how to encase a tube with gate material. In 2007, the first carbon nanotube circuit was demonstrated.

Part of the appeal here is that, as Guha put it, the "short-channel characteristics for carbon nanotubes are very, very good." To put it simply, that means carbon nanotube-based chips could in theory have relatively low power leakage, and leakage is the #1 problem to be managed in today's silicon chips. Guha said IBM did a simulation of a hypothetical carbon nanotube-based chip, comparing it to FinFET silicon at nodes as small as 5 nm, with promising results. At the same power density, carbon nanotubes could offer three times the performance of silicon FinFETs—or at equivalent performance, they could require one-third the power.
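Note that those two framings are really the same efficiency win spent in different ways. A quick sketch with a normalized silicon baseline (only the 3x and one-third figures come from the talk) shows that either way, you end up at roughly triple the performance per watt:

    # Normalize a hypothetical silicon FinFET design to 1 unit of performance
    # at 1 unit of power, then apply the quoted CNT advantage both ways.
    si_perf, si_power = 1.0, 1.0

    cnt_perf_iso_power = 3.0 * si_perf      # same power density, ~3x the speed
    cnt_power_iso_perf = si_power / 3.0     # same performance, ~1/3 the power

    print(f"perf per watt, iso-power: {cnt_perf_iso_power / si_power:.1f}")   # 3.0
    print(f"perf per watt, iso-perf:  {si_perf / cnt_power_iso_perf:.1f}")    # 3.0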

Although carbon nanotubes hold promise, incorporating them successfully into some variant of today's chip manufacturing methods will be no small feat. Carbon nanotubes are "grown" in a lab using a chemical process, and they exit that process with impurities—some portion of the resulting nanotubes are metallic conductors. The metallic nanotubes must be culled from the rest, so that a pure batch of semiconducting nanotubes remains. The goal then is to deposit the nanotubes in a layer atop a traditional silicon wafer and then, somehow, to align them into a precise, regular layout, so the wafer can be patterned using lithography.

Amazingly, researchers have made tremendous progress with each of these challenges. Dr. Guha said the IBM team can now sort nanotubes well enough to achieve 99.9% purity using an automated, parallel, electrical sorting method. There's still work to be done to achieve the "four or five nines" of purity needed for production, but Guha believes the purity challenge will be solved.
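A little arithmetic shows why those extra nines matter. Treating each device as a single tube (a simplification on my part) and borrowing the 10,000-device scale of the test chip described below, the expected number of stray metallic tubes per chip falls off like this:

    # Expected metallic (defective) tubes per chip at various purity levels,
    # assuming one tube per device -- an illustrative simplification.
    devices = 10_000    # roughly the scale of IBM's demonstration chip

    for purity in (0.999, 0.9999, 0.99999):     # three, four, and five nines
        expected_metallic = devices * (1 - purity)
        print(f"purity {purity:.5f}: ~{expected_metallic:g} metallic tubes per chip")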

The next challenge seems even more daunting: arranging the nanotubes in a regular, predictable fashion on top of a silicon wafer. The answer IBM researchers are pursuing uses a bit of dark magic known as directed self-assembly (DSA), in which nano-scale materials are coaxed into forming ordered structures. Already, Guha reported, they are able to align about 10 nanotubes per micron with consistency. That ability has led to another breakthrough: the chip-scale fabrication of positioned carbon nanotube-based devices. Last year, IBM researchers demonstrated a chip with 10,000 carbon nanotube devices onboard, produced using techniques similar to silicon processing. As I mentioned above, IBM has developed a process for encasing carbon nanotubes with gate materials, much as they do with silicon nanowires. The result is a gate-all-around carbon nanotube FET in which the gate contact is properly self-aligned to the source and drain.
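For a sense of scale, that reported alignment density of about 10 nanotubes per micron works out to an average center-to-center pitch of roughly 100 nm; the conversion is simple:

    # 10 aligned nanotubes per micron implies an average center-to-center
    # pitch of about 100 nm (1 micron = 1000 nm).
    tubes_per_micron = 10
    pitch_nm = 1000 / tubes_per_micron
    print(f"average nanotube pitch: {pitch_nm:.0f} nm")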

So yes, researchers are making considerable progress toward using carbon nanotubes in chips. What they have now, though, is only a beginning. The test chips give them the ability to run statistical analysis, so the difficult work of increasing alignment precision and reducing defects can commence. Dr. Guha expects to see contributions from multiple disciplines coming into the effort, helping to sort out problems of chemistry, the physics of quantized systems, and the behavior of materials at atomic scales. He expressed confidence that the Common Platform team "has the horsepower to make it happen." If the effort succeeds, we could see workable carbon nanotube-based chip fabrication technology somewhere in the 2019-2022 time frame.

If chips can be produced with CNTFETs, they'll face challenges on other fronts, most notably in the interconnects used to move data around. The metal wires used now may not scale down much further without developing serious problems with performance and reliability. We've known about this issue for a while, of course; at larger scales, networks already rely on optical links for the highest transfer rates. Dr. Patton offered a bit of hope in his keynote by pointing to an emerging technology, nanophotonics, as a potential chip-level interconnect solution. He showed an example of a nanophotonic waveguide integrated into a CMOS logic circuit. Patton claimed the device can transfer 25Gbps and is "very cost effective."

Ultimately, Patton envisions chips with multiple layers stacked on top of one another in 3D: a photonics plane, a memory plane, and a logic plane. A chip built in this fashion, he said, could have 300 cores, 30GB of embedded DRAM, and "incredible bandwidth" to tie it all together. One gets the impression that he intends to help make it happen.

If that isn't an antidote to the gloom coming from some quarters, I don't know what is.
