The meaning of an "SoC-style" approach
Our understanding of AMD's newly adopted approach to creating products came into sharper focus when we had the chance to participate in a "fireside chat" with CTO Mark Papermaster and a small group of journalists last Friday.

Papermaster opened by explaining his role at AMD; he is wearing two hats, acting as the Chief Technology Officer who sets the firm's long-term technology direction and also running the development team. He told us he's taken on both roles since it's so important for AMD to execute well on its plans. Throughout the conversation, although he was willing to talk pretty freely about technology and ideas, Papermaster kept returning to the theme of solid execution as his top priority.

Hand in hand with the talk of consistent execution, Papermaster sounded several themes to describe AMD's goals, including agility, flexibility, and architectures that are "ambidextrous." At the heart of it all is a different approach to building chips, one that is borrowed from the world of low-power and embedded system-on-a-chip (SoC) products that are becoming nearly ubiquitous in smartphones, tablets, consumer routers, embedded systems, and a whole host of other devices.

SoCs are often assembled from blocks of custom logic—referred to as IP or intellectual property—whose basic design is licensed from a third-party provider. Think of a smartphone chip that incorporates CPU cores from ARM, graphics from Imagination Tech, baseband communications tech from another provider, and so on. Many different chip companies combine these basic IP building blocks into various configurations tailored for certain requirements. The IP blocks can be mixed and matched with relative ease because they all share a common, industry-standard communications interconnect.
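
To make the mix-and-match idea a little more concrete, here's a rough software sketch of the concept. The block names and the Interconnect class are purely illustrative inventions of ours, not anything AMD or ARM actually ships; the point is simply that blocks agreeing on one interface can be rearranged into different chips without reworking the fabric itself.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Common contract every licensed block implements; the interconnect
// knows nothing about a block beyond this interface.
struct IpBlock {
    virtual ~IpBlock() = default;
    virtual std::string name() const = 0;
};

struct CpuCluster : IpBlock { std::string name() const override { return "CPU cluster"; } };
struct Gpu        : IpBlock { std::string name() const override { return "GPU"; } };
struct Baseband   : IpBlock { std::string name() const override { return "baseband modem"; } };

// The "fabric": it can host any mix of blocks because they all speak
// the same interface.
class Interconnect {
    std::vector<std::unique_ptr<IpBlock>> blocks_;
public:
    void attach(std::unique_ptr<IpBlock> block) {
        std::cout << "attached " << block->name() << '\n';
        blocks_.push_back(std::move(block));
    }
};

int main() {
    Interconnect phone_soc;                           // one configuration...
    phone_soc.attach(std::make_unique<CpuCluster>());
    phone_soc.attach(std::make_unique<Gpu>());
    phone_soc.attach(std::make_unique<Baseband>());

    Interconnect set_top_soc;                         // ...and another, reusing the same blocks
    set_top_soc.attach(std::make_unique<CpuCluster>());
    set_top_soc.attach(std::make_unique<Gpu>());
}
```

The hardware reality is vastly messier, of course, but the payoff is the same: the interconnect only has to be designed once, and a new chip configuration becomes largely an exercise in composition.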

We've seen aspects of the IP-based SoC approach in one part of the PC market over time: core-logic chipsets, where specific I/O blocks are often licensed from third parties and incorporated into support chips. Generally speaking, though, PC processors have been proprietary affairs. AMD's Llano, for instance, combines Phenom-class x86 CPU cores with Radeon graphics, an in-house north bridge, and AMD's own memory controller. Sandy Bridge is built largely from Intel's proprietary tech, as well. PC chip designs have become increasingly modular in recent years, but that modularity is relatively limited. Papermaster explained that AMD's current chips are not built from IP blocks that have been expressly tailored for re-use.

Going forward, Papermaster envisions a common interconnect that AMD can deploy across its entire lineup. This interconnect will be high-speed, low-power, and capable of sustaining memory coherency across multiple logic blocks. The interconnect will act as glue for AMD's various types of IP, whether it's graphics, CPU cores, video encoders, or what have you. The idea is to allow the firm to mix and match its assets, easing the creation of chips based on its core technologies.
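
Memory coherency is the piece that matters most when CPU and GPU blocks share work. As a loose software analogy only (real hardware coherency protocols are far more involved than this), think of two agents sharing one consistent view of memory, so a value written by one side is simply visible to the other without an explicit copy step:

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    std::atomic<int>  shared_value{0};
    std::atomic<bool> ready{false};

    // Stands in for a CPU core producing a result in shared memory.
    std::thread producer([&] {
        shared_value.store(42, std::memory_order_relaxed);
        ready.store(true, std::memory_order_release);
    });

    // Stands in for a GPU block consuming that result directly, with no copy.
    std::thread consumer([&] {
        while (!ready.load(std::memory_order_acquire)) {
            std::this_thread::yield();
        }
        std::cout << "consumer sees " << shared_value.load(std::memory_order_relaxed) << '\n';
    });

    producer.join();
    consumer.join();
}
```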

Although this interconnect will necessarily have to be proprietary in order to feed AMD's high-performance CPU and GPU cores, Papermaster said it will have a bridge to the AMBA interconnect created by ARM and used by other SoC providers. That fact opens up all sorts of intriguing possibilities, including the incorporation of third-party IP into AMD silicon, either as a means of adding new features or, more likely, as part of an effort to build a chip tailored for a specific customer.

Even with some modularity in its current chips, the move to an SoC-style approach appears to involve a fairly noteworthy change in the company's operations. Papermaster told us he has restructured his organization to fit this strategy. In a separate conversation, Graphics CTO Eric Demers also asserted that the change requires a true shift in mentality compared to AMD's prior methods. As an example, Papermaster said AMD's product validation efforts have, in the past, largely focused on testing an entire chip. Now, he told us AMD is "investing very heavily in emulation technology" in order to perform validation on the various IP blocks it has in development. The idea is to move bug discovery earlier in the process, before the whole chip comes together.
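
The shift is roughly analogous to testing software modules in isolation before integrating them. Here's a hypothetical sketch of the idea, with stand-in models and checks we've made up purely for illustration; it isn't AMD's actual validation flow:

```cpp
#include <cassert>
#include <iostream>

// Hypothetical behavioral models standing in for emulated IP blocks.
int video_decoder_model(int frame_id) { return frame_id * 2; }     // pretend transform
int memory_ctrl_model(int address)    { return address + 0x1000; } // pretend remapping

// Block-level checks, run in isolation long before any full-chip model exists.
void validate_video_decoder() { assert(video_decoder_model(3) == 6); }
void validate_memory_ctrl()   { assert(memory_ctrl_model(0x10) == 0x1010); }

// Only once each block passes on its own are they exercised together.
void integration_check() {
    int decoded = video_decoder_model(4);
    assert(memory_ctrl_model(decoded) == decoded + 0x1000);
}

int main() {
    validate_video_decoder();
    validate_memory_ctrl();
    integration_check();
    std::cout << "block-level and integration checks passed\n";
}
```

Catching a defect at the block stage is far cheaper than finding it during full-system bring-up, which is the whole point of pulling validation forward.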

First and foremost, the new executive team expects this modified method of building chips to make it easier for AMD to deliver on its product roadmap commitments. Beyond that, it may also enable new combinations of AMD IP and open up new business opportunities.

That's especially true for AMD's server products, where "workload-optimized" processors are a big part of its future plans. Rather than competing directly with Intel's formidable Xeon processors in every case, AMD hopes to win business by building more varied processors targeted at specific types of workloads. With two x86 CPU core development tracks (high-performance and low-power) and its increasingly compute-capable GPUs, one can envision many possible combinations. One possibility is a future server chip with a modest contingent of Opteron x86 cores for integer math and a boatload of FLOPS supplied by a host of GPU compute units. Another option Papermaster mentioned specifically is an ultra-dense server processor made up of a large number of low-power Brazos cores. Either of those processors might be better suited to a specific application than a stock Opteron or Xeon. This sort of targeting looks like it could make quite a bit of sense given the way server-class workloads have diverged in recent years. Segments like HPC hinge almost entirely on FLOPS and memory bandwidth, while others, like cloud providers, prize energy efficiency and scalable performance across lots of integer-focused threads.

When pushed for specific, non-theoretical examples of how AMD might incorporate third-party IP into future chips, Papermaster offered one scenario related to a "smart TV" product. Such products need some compute power, good display technology, and video codec hardware. They also have the very attractive property of being high-volume parts, so they could offer the economies of scale needed to make a chip business workable. The relationship between AMD and a customer, say a big consumer electronics firm, might start with AMD supplying a discrete GPU that the customer would pair with its own applications processor. Later, these components might be integrated into a single chip, provided by AMD, where Radeon graphics and video processor tech share die space with third-party IP.

AMD is now open to the possibility of such integration, whereas in the past it probably wouldn't have been (although it does have some history of making custom GPUs for game consoles). Papermaster was careful to explain that such relationships are likely to be few; he said there won't be "hundreds of customers," not even "dozens." But AMD appears to be working toward some new types of relationships with select customers, made possible by a newfound willingness to combine its own technologies with those invented elsewhere.

Yes, such a relationship could mean an ARM CPU core being combined on the same silicon with, say, Radeon graphics. AMD clearly opened the door to that possibility and talked about "ISA flexibility" as part of its new strategy. Still, the firm's public roadmap mentions only x86-compatible processors for the time being, and we don't know of any specific plans for AMD to produce an ARM-based SoC to compete with the likes of Nvidia's Tegra lineup. All we really know is that AMD's new leadership is expressing an openness to new types of products and business relationships. We don't have many specifics so far, and we're unsure how many of the ideas being kicked around will turn into products. If they do, they'll most likely become visible once AMD lays out a public roadmap for 2014 and beyond.