Future process tech—the possibilities and pitfalls
The whole industry is worried, to one degree or another, about whether Moore's Law can be maintained, along with its attendant cost, power, and speed benefits. At the end of his opening keynote speech at the Forum, IBM vice president and head of the Semiconductor Research & Development Center, Dr. Gary Patton, said straight up, "I believe CMOS scaling will continue." Beyond that confident proclamation, though, all sorts of questions remain.
One of the big issues has to do with costs. Although it may be possible to continue pushing to smaller process geometries, doing so is becoming ever more complicated, which means more money must be poured into R&D while manufacturing methods are becoming more elaborate. The resulting higher per-transistor costs may begin to offset the gains won by cramming more transistors into a smaller area. If that effect becomes pronounced, computing could stop becoming cheaper over time at the rate we've come to expect. In fact, in a "fireside chat" during the Forum's afternoon session, Dr. Handel Jones, CEO and owner of consulting firm IBS, said we're facing this problem even at 20 and 14 nm. He suggested several possible remedies, most notably a renewed emphasis on efficient chip designs that ensure better utilization of the transistors on a die. Jones acknowledged that such efforts could mean chips take longer to design.
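The economics Jones described can be captured in a back-of-the-envelope model. Every number below is a hypothetical illustration, not a figure from the Forum; the point is only to show how pricier wafers and lower early yields can cancel out a density gain.

```python
# Rough cost-per-transistor model. All inputs are hypothetical
# illustrations chosen to show the shape of the problem.

def cost_per_transistor(wafer_cost, dies_per_wafer, yield_rate,
                        transistors_per_die):
    """Cost of one good transistor, in the same units as wafer_cost."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost / (good_dies * transistors_per_die)

# Mature node: cheaper wafers, fewer transistors per die, high yield.
old = cost_per_transistor(5_000, 500, 0.90, 1_000_000_000)

# New node: double the density, but pricier wafers (extra patterning
# steps) and lower initial yield.
new = cost_per_transistor(9_000, 500, 0.75, 2_000_000_000)

print(f"mature node: {old:.2e} $/transistor")
print(f"new node:    {new:.2e} $/transistor")
```

With these made-up inputs the newer node actually costs slightly *more* per transistor, which is exactly the scenario Jones warned about: the shrink stops paying for itself.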
Another potential means of improving the economics of chipmaking is the adoption of larger wafers. Right now, state-of-the-art fabs produce chips on round silicon wafers that are 300 mm across. The industry has been looking to 450-mm wafers as the next logical step, and Intel recently showed the first fully patterned 450-mm wafer. However, during a Q&A at the Forum, one of the IBM representatives said 450-mm wafers are still some years off, probably arriving near the end of this decade.
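The appeal of the bigger wafer is simple geometry: a 450-mm wafer has 2.25 times the area of a 300-mm one, and the advantage is slightly better than that in practice, because rectangular dies waste proportionally less of a larger circle's edge. A quick sketch, using a common die-per-wafer approximation and a hypothetical 100-mm² die:

```python
import math

# Naive upper bound: ratio of wafer areas.
area_ratio = (450 / 300) ** 2
print(area_ratio)  # 2.25

# A common approximation that subtracts an edge-loss term:
# dies ~= (wafer area / die area) - (wafer circumference term).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Hypothetical 100 mm^2 die on each wafer size.
print(dies_per_wafer(450, 100) / dies_per_wafer(300, 100))
```

The die-count ratio comes out above 2.25, since edge loss is a smaller fraction of the bigger wafer. Whether that translates into cheaper chips depends on how much the 450-mm tooling costs, which is part of why the transition keeps slipping.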
Semiconductor scaling faces other hurdles before the end of the decade, too. The Alliance members have articulated a fairly clear path to the 10-nm process node via double-patterning and FinFETs, but beyond that, the road could get bumpy. Dr. Patton told the crowd that moving to 7 nm with conventional lithography would require triple or quadruple patterning, which he characterized as "very expensive."
Many folks have considered the obvious next step forward for lithography to be the use of shorter-wavelength extreme ultraviolet (EUV) light. IBM appears to be working diligently on developing EUV technology, but Patton threw cold water on the notion that EUV is a foregone conclusion. He explained that the process involves dropping molten tin at a speed of about 150 MPH inside of a tool, zapping it with a laser to broaden it, hitting it with a "real CO₂ laser" to generate plasma, while bouncing the resulting light off of about six mirrors, each with an efficiency of roughly 70%. Patton called EUV "the biggest change in the history of the industry" and outright disputed the notion that making it feasible is now only a matter of "hard engineering work." Instead, he said, there are still "real physics problems we have to solve." As if those words hadn't raised the uncertainty quotient enough, Dr. Patton then mentioned the possibility that the masks used for EUV lithography could themselves have flaws in them, and he compared finding a 30-nm defect on such a mask to searching over 10% of California's surface area in order to find a golf ball.
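Patton's golf-ball analogy checks out, at least to an order of magnitude. The mask dimensions below are assumptions (a standard 6-inch photomask with a roughly 132 × 104 mm patterned area), not numbers given at the Forum:

```python
import math

# Fraction of the search field occupied by the target, for the
# mask-defect case and the golf-ball case.
def area_fraction(diameter_m, field_m2):
    return math.pi * (diameter_m / 2) ** 2 / field_m2

defect_d = 30e-9                 # 30 nm defect diameter
mask_area = 0.132 * 0.104        # assumed patterned mask area, m^2
golf_d = 42.67e-3                # regulation golf ball diameter, m
search_area = 0.10 * 423_970e6   # 10% of California, m^2

frac_defect = area_fraction(defect_d, mask_area)
frac_golf = area_fraction(golf_d, search_area)
print(frac_defect)  # ~5e-14
print(frac_golf)    # ~3e-14
```

Both fractions land in the 10⁻¹⁴ range, so the comparison is fair: a 30-nm mask defect really is a needle in a state-sized haystack.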
Once the crowd was sufficiently terrified, Patton listed a host of other possibilities for extending semiconductor scaling, some of them involving exotic new materials and techniques. Part of his intent, I think, was to illustrate that the way forward is by no means clear, and that IBM's research arm is exploring a host of possibilities in hopes of finding the best possible options.
Two of the most intriguing possibilities were further explored by IBM researchers in the afternoon sessions.
Dr. Mukesh Khare, Director of the Semiconductor Alliance at IBM, outlined his group's research into silicon nanowires. Nanowires are long, thin silicon structures between about 3 and 20 nm in diameter. These wires could serve as the building block for transistors in the future. In fact, the basic transistor structure doesn't look too terribly different from the FinFETs of today; the nanowire takes the place currently occupied by a silicon fin. Nanowires are created by etching a long and relatively thick bar of silicon using conventional lithography and then using hydrogen to anneal the silicon. The annealing process leaves behind a thinner, rounder silicon "wire" structure that is suspended above the substrate layer below it. After that, gate material is deposited all around the nanowire, even beneath it, yielding even more gate contact area than a FinFET's three-sided wrap. There are still hurdles to overcome in making nanowires a viable manufacturing technology, but the potential is undeniable, especially since one can easily imagine how nanowire creation could be integrated into current manufacturing methods.
If they do become viable, silicon nanowires may be the last gasp of silicon-based semiconductors. According to Dr. Supratik Guha, Director of the Physical Sciences Department at IBM Research, silicon ceases to be a good material "when you come to atomic dimensions." Seven nanometers is about as small as silicon can go; beyond that point, other materials might be superior. For the past couple of years, his group at IBM Research has been exploring the most promising alternative material: carbon—specifically, the rolled lattice structures known as carbon nanotubes.
Since their discovery, carbon nanotubes have been the subject of intensive study, in part because they can act as semiconductors. You can imagine these tiny tubes, typically about one nanometer in diameter, taking the place of a fin or nanowire in a transistor layout. Dr. Guha explained that the first carbon nanotube transistors were created around 2001, and some time later, researchers figured out how to encase a tube with gate material. In 2007, the first carbon nanotube circuit was demonstrated.
Part of the appeal here is that, as Guha put it, the "short-channel characteristics for carbon nanotubes are very, very good." To put it simply, that means carbon nanotube-based chips could in theory have relatively low power leakage, and leakage is the #1 problem to be managed in today's silicon chips. Guha said IBM did a simulation of a hypothetical carbon nanotube-based chip, comparing it to FinFET silicon at nodes as small as 5 nm, with promising results. At the same power density, carbon nanotubes could offer three times the performance of silicon FinFETs—or at equivalent performance, they could require one-third the power.
Although carbon nanotubes hold promise, incorporating them successfully into some variant of today's chip manufacturing methods will be no small feat. Carbon nanotubes are "grown" in a lab using a chemical process, and they exit that process with impurities—some portion of the resulting nanotubes are metallic conductors. The metallic nanotubes must be culled from the rest, so that a pure batch of semiconducting nanotubes remains. The goal then is to deposit the nanotubes in a layer atop a traditional silicon wafer and then, somehow, to align them into a precise, regular layout, so the wafer can be patterned using lithography.
Amazingly, researchers have made tremendous progress with each of these challenges. Dr. Guha said the IBM team can now sort nanotubes well enough to achieve 99.9% purity using an automated, parallel, electrical sorting method. There's still work to be done to achieve the "four or five nines" of purity needed for production, but Guha believes the purity challenge will be solved.
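A quick calculation shows why 99.9% purity still falls short for production. If a transistor contains several nanotubes in parallel, a single metallic tube shorts it, so the failures compound across a chip. The device and tube counts below are hypothetical illustrations:

```python
# Why "three nines" of purity isn't enough at chip scale.
# Assumes a device is bad if any one of its tubes is metallic.

def expected_bad_devices(purity, tubes_per_device, devices_per_chip):
    """Expected number of devices containing at least one metallic tube."""
    p_all_good = purity ** tubes_per_device
    return devices_per_chip * (1 - p_all_good)

# Hypothetical chip: one million devices, six tubes per device.
for purity in (0.999, 0.9999, 0.99999):
    bad = expected_bad_devices(purity, 6, 1_000_000)
    print(f"{purity:.5f} purity -> ~{bad:,.0f} bad devices per million")
```

At 99.9% purity, thousands of devices per million come out broken; only around the "four or five nines" Guha mentioned does the defect count fall to something redundancy and repair schemes might absorb.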
The next challenge seems even more daunting: somehow, to arrange the nanotubes in a regular, predictable fashion on top of a silicon wafer. The answer IBM researchers are pursuing uses a bit of dark magic known as directed self-assembly (DSA), in which nano-scale materials are coaxed into forming ordered structures. Already, Guha reported, they are able to align about 10 nanotubes per micron with consistency. That ability has led to another breakthrough: the chip-scale fabrication of positioned carbon nanotube-based devices. Last year, IBM researchers demonstrated a chip with 10,000 carbon nanotube devices onboard, produced using techniques similar to silicon processing. As I mentioned above, IBM has developed a process for encasing carbon nanotubes with gate materials, much as they do with silicon nanowires. The result is an all-around carbon nanotube FET in which the gate contact is self-aligned properly to the source and drain.
So yes, researchers are making considerable progress toward using carbon nanotubes in chips. What they have now, though, is only a beginning. The test chips give them the ability to run statistical analysis, so the difficult work of increasing alignment precision and reducing defects can commence. Dr. Guha expects to see contributions from multiple disciplines coming into the effort, helping to sort out problems of chemistry, the physics of quantized systems, and the behavior of materials at atomic scales. He expressed confidence that the Common Platform team "has the horsepower to make it happen." If the effort succeeds, we could see workable carbon nanotube-based chip fabrication technology somewhere in the 2019-2022 time frame.
If chips can be produced with CNTFETs, they'll face challenges on other fronts, most notably in scaling the interconnects used to move data around. The metal wires used now may not scale down much further without developing serious problems with performance and reliability. We've known about this issue for a while, of course; even full-scale networks now use optical links for the highest transfer rates. Dr. Patton offered a bit of hope in his keynote by pointing to an emerging technology, nanophotonics, as a potential chip-level interconnect solution. He showed an example of a nanophotonic waveguide integrated into a CMOS logic circuit. Patton claimed the device can transfer 25Gbps and is "very cost effective."
Ultimately, Patton envisions chips with multiple layers stacked on top of one another in 3D: a photonics plane, a memory plane, and a logic plane. A chip built in this fashion, he said, could have 300 cores, 30GB of embedded DRAM, and "incredible bandwidth" to tie it all together. One gets the impression that he intends to help make it happen.
If that isn't an antidote to the gloom coming from some quarters, I don't know what is.