
Semiconductors from idea to product


The story of how chips are made
— 8:47 PM on April 22, 2015

Rys Sommefeldt works for Imagination Technologies and runs Beyond3D. He took us inside Nvidia's Fermi architecture a couple of years ago, and now he's back with a breakdown of how modern semiconductors like CPUs, GPUs, and SoCs are made.

Disclaimer: what you're about to read is not an exact description of how my employer, Imagination Technologies, and its customers take semiconductor IP from idea to end user product. It draws on how they do it, but that's it.

This essay is designed to be a guide to understanding how any semiconductor device is made, regardless of whether it's purely an in-house design, licensed IP, or something in between. I'll touch on chips for consumer devices, since that's what I work on most, but the process applies almost universally to any chip in any device.

I've never read a really great top-to-bottom description of the process, and it's something I'd have loved to have read years before I joined a semiconductor IP company. I hope this helps others in the same position. If you're at all interested in chip manufacturing and how chips are made and selected for consumer products, this should hopefully be a great read.


Chips on a 45-nm wafer. Source: AMD

The idea
It all starts with an idea, you see. Not quite at the level of "I want to build a smartphone," although understanding that the smartphone might be a target application for the idea will help the idea take shape. No, we're going to talk about things a little bit further down, at the level of the silicon (but not for long!) chips that do all the computing in modern devices, be they smartphones or otherwise.

All of the chips I can think of, even the tiniest and most specialized chips that perform just a few functions, are made up of much smaller building blocks underneath. If you want to perform any non-trivial amount of computation, even just on a single input, you're going to need a design that builds on top of foundational blocks.

So whether the idea is "let's build the high-performance GPU that'll go in the chips that go into smartphones," or something that's much simpler, the idea (almost) never gets built in its entirety as one monolithic piece of technology. It usually must be built from smaller building blocks. The primary reason, especially these days, is that it's incredibly rare that one single person can hold the entire design for a chip in her or his head, in order to build it from start to finish and make sure it works. Modern chips are complex, usually consisting of at least a couple hundred million transistors in most consumer products and often much, much more. The main processor in a modern desktop or laptop is well over a billion transistors. There's maybe over a billion transistors in your pocket, in the main chip in your phone.

So you overwhelmingly can't build the idea as a monolithic thing, because humans just don't work that way. Instead, the idea must be broken down into blocks. Maybe a single person can design, build, assemble, and test all of the blocks themselves, but blocks are a must. I'll talk a lot about blocks, so apologies if the word offends somehow, or if it means "I hate your cat" in your native language. I definitely love your cat.

Timescales
For simplicity's sake, I'm going to talk about most common processors these days, which all take at least a year to make. Nothing in the semiconductor business happens really quickly. It really does normally take years to go from an idea about a chip all the way through the design, build, validation, integration, testing, sampling, possible rework, and mass production. All that happens before the product can be sold and you hold it in your hands, put it under your TV, drive it, fly it, use it to read books, or whatever else the chip finds itself in these days.


The lifetime of a new chip is therefore never short. There are some macro views of the semiconductor industry that might make you think otherwise. For example, a modern smartphone system-on-chip (SoC) vendor might be able to go from project start to chip mass production in a matter of months, but that's because all they're doing is integrating the already designed, built, validated, and tested building blocks that other people have made. Tens of thousands of man years went into all of the constituent building blocks before the chip vendor got hold of them and turned them into the full SoC.

It takes years—not months, weeks, days, or anything silly like that, at least for the main chips performing complex computation in modern consumer electronics and related industries.

Knowing what you need
Chip development taking years means there's a certain amount of hopefully accurate prediction to be done. Smart chip designers are data-driven folks who don't trust instinct or read tea leaves. They don't make decisions based on whether the headline in today's paper started with the third letter of the name of the second dog they had in their first house as a kid. Knowing what to design is almost pure data analysis.

Data inputs come in to the chip designer from everywhere: marketing teams, sales people, existing customers, potential new customers, product planners, project managers, and competitive and performance analysis folks like me. Then there's the data they get from experience, because they built something similar last time and they know how well it worked (or not).

The chip designer's first job is to filter all of that data and use it as the foundation of the model of what they're going to build. They need to know as much as possible about the contextual life of the chip when it finally comes into existence. What kind of products is it going to go into eventually? What does the customer expect as a jump over the last thing someone sold them? Is there a minimal bar for new performance or a requirement for some new features? Are trends in battery life, materials science, or the manufacturing of chips by the foundry changing?

What about costs? Costs play an enormous role in things. There's no point designing something that costs $20 if your competitor can sell a functionally similar, comparably performing equivalent for $10. Knowing your cost structure for any chip is probably the thing that shapes a chip designer's top-level bounds the most. Every choice you make has a cost, direct or indirect.

Say your chip needs Widget A, which is 20 square millimeters in area on the process technology of your foundry. Your total chip cost lets you design something that's 80 mm², because every square millimeter costs you 20 cents and your customer won't pay more than $20 for the full chip, and because you really need that 25% gross margin on the manufacturing to pay for the next chip. Widgets B through Z only have 60 mm² left, and really a bit less than 60 because it's incredibly hard to lay out everything on the chip so there are no gaps. Sometimes you even want gaps, for power or heat reasons. I'll come back to that theme later.

There's both a direct (your chip can't cost more than $16 to fab) and an indirect (choosing Widget A affects your further choices of Widgets B through Z) set of costs to model.
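That back-of-the-envelope arithmetic can be sketched in a few lines. This is a minimal model using the example numbers above (20 cents per mm², a $20 selling price, a 25% gross margin on manufacturing cost); the variable names are mine, not any real tool's.

```python
# Toy die-cost model, using the article's example figures.
COST_PER_MM2 = 0.20     # dollars per mm^2 on this process technology
CUSTOMER_PRICE = 20.00  # most the customer will pay for the full chip
TARGET_MARGIN = 0.25    # gross margin needed on the manufacturing cost

# The highest fab cost that still leaves a 25% margin on cost:
max_fab_cost = CUSTOMER_PRICE / (1 + TARGET_MARGIN)  # -> 16.0 dollars
# Which translates into a total area budget:
max_area_mm2 = max_fab_cost / COST_PER_MM2           # -> 80.0 mm^2

# The direct cost of choosing Widget A constrains everything after it:
widget_a_mm2 = 20.0
remaining_mm2 = max_area_mm2 - widget_a_mm2          # -> 60.0 mm^2 for B..Z
```

That's where the $16 fab ceiling in the paragraph above comes from: $20 divided by 1.25.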

So the chip designer takes all of those inputs and feeds them into her or his models (there's usually a lot of spreadsheet gymnastics here, more than you might think). The designer decides what Widgets they need for their chip, intercepting all of the top-level context about the chip, when it will be made, and when it will come into the world, to make sure they take advantage of everything known about its design and manufacturing.
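Those spreadsheet gymnastics boil down to something like the toy model below: pick the must-have blocks first, then fit optional blocks into whatever area budget remains. The block names, their areas, and the 10% layout-slack figure are all invented for illustration, not real data.

```python
# Hypothetical candidate blocks: name -> (area in mm^2, must-have?)
# All names and numbers here are made up for the sketch.
candidates = {
    "cpu_cluster": (30.0, True),
    "gpu":         (20.0, True),
    "video_enc":   (8.0,  False),
    "display":     (6.0,  False),
    "isp":         (10.0, False),
}

# 80 mm^2 total, minus ~10% because perfect gap-free layout is impossible.
budget = 80.0 * 0.9

chosen, used = [], 0.0
# Must-have blocks first (sort is stable, so ties keep their order),
# then optional blocks only while they still fit in the budget.
for name, (area, must_have) in sorted(candidates.items(),
                                      key=lambda kv: not kv[1][1]):
    if must_have or used + area <= budget:
        chosen.append(name)
        used += area
```

With these numbers, the ISP gets cut: after the must-haves and the smaller optional blocks, its 10 mm² no longer fits. Real decisions weigh power, performance, and customer requirements too, but the shape of the trade-off is the same.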

We now know that the designer needs some building blocks for their chip, and that they've made the hard decisions about what they believe they need. Where do those blocks come from these days?

Buy it in or build it yourself
If you're a semiconductor behemoth like Intel, where you literally have the ability not just to design the chip yourself, but also to manufacture it because you also own the chip fabrication machinery, you invariably build the blocks yourself. Say you're the lead designer for the next-generation Core i8-6789K xPro Extreme Edition Hyper Fighting. These days a product like that is not just the CPU like it used to be, where everything else in the system lies on the other end of a connected bus. Chips like the Core i7-4790K are a CPU, memory controller and internal fabric, GPU, big last level cache, video encoder, display controller, PCI Express root complex, and more. So let's assume the i8-6789K is probably at least all of those things.

As lead designer of something like the i8-6789K, there's probably almost nothing on the i7-4790K chip that its designer bought from outside Intel, or that you'll now buy from a third party as a building block. I'd like to think there's at least one block that Intel didn't design, but I wouldn't be surprised if someone told me there were zero third-party pieces.


Intel's Core i7-4790K (left) and i7-5960X (right)

Intel do make chips where they get the blocks from outside of the company, but the vast majority of their revenue comes from sales of chips that are almost completely their own.

So where are you going to get building blocks from? Intel obviously has design teams for each and every block of the chip. It's incredibly expensive, but the competitive advantages are enormous. Knowing that all of your block designs are coming from your own company, on timescales you (hopefully) control, where your competitors have no idea what you're building, and where you have full design-level control over every part that results in a flip-flop to be flip-flopped, is really compelling. That vertical integration is overwhelmingly an excellent idea if you can afford it, because it lets you put economies of scale to work amortizing the incredibly expensive capital expenditure required.

You can see that build-it-yourself mentality elsewhere in the chip industry. Qualcomm do as much as they can. Nvidia are trying their very best. Apple are beating the rest of the consumer device world to death with their ability to vertically integrate as much as they can. Lots of that is built on Apple doing the work themselves, at the chip's block level.

At the other end of the scale in consumer devices like phones and tablets, you have vendors that are master integrators but design none of the blocks themselves. They go shopping, get the blueprints for the blocks from other suppliers, connect them up, and ship the result, often very quickly. It's comparatively cheap and easy for them to take this approach. And, primarily because it's also cheap and easy for someone else to follow suit, they're in a horrible, slow, squeezing, cost-down race to the bottom that only a few will survive unless they can differentiate.

Choosing between buying it in or building it yourself is largely a matter of capital expenditure, expertise, and supporting shipping volume. Those are the big factors, but there's still incredible extra nuance depending on the company making the chip. Some vendors will take a block design in-house where they previously bought it, not because doing so will make them any more money directly, but simply because it'll increase the size of the smile on the customer's face when they use the final product.

Now we know where the blocks tend to come from. If you're rich and your customers love your stuff so much that your competition matters less, if anyone can even compete with you at all, and if you ship loads of whatever it is you make, you can go ahead and try to do as much of block design as you can yourself. If your cost structure and competitive environment means things are tighter, you need to go shopping. I've also written about how you should go shopping, if you want to nip off and read about that too.

Regardless, someone needs to design the blocks.