
A new competitive landscape
Seeing Intel push so aggressively to play in this emerging CPU space is a bit surreal, and understanding it all takes some calibration. Processors in this new class are often sold on the basis of being low-power devices—and they are that, with power envelopes typically ranging from 5-20W—but modest power budgets are perhaps too often conflated with power efficiency, which isn't entirely the same thing. For instance, it's quite possible a larger Xeon chip would expend less energy per instruction when executing a given workload. That efficiency advantage could well extend to the rack level. There's much to be said for the deep voodoo built into those big CPU cores, at the end of the day.
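The distinction between a low power envelope and good power efficiency can be made concrete with a toy back-of-the-envelope calculation. All of the figures below are invented for illustration; they don't describe any real Atom or Xeon part.

```python
# Toy comparison: power envelope vs. energy per instruction.
# Every number here is hypothetical, chosen only to show that a chip
# with a much higher power budget can still do more work per joule.

def energy_per_instruction(power_watts, throughput_gips):
    """Joules per instruction = watts / (instructions per second)."""
    return power_watts / (throughput_gips * 1e9)

# A hypothetical low-power SoC: 15 W, 10 billion instructions/s.
soc = energy_per_instruction(15, 10)

# A hypothetical big server chip: 90 W, 80 billion instructions/s.
xeon = energy_per_instruction(90, 80)

print(f"SoC:  {soc:.3e} J/instruction")
print(f"Xeon: {xeon:.3e} J/instruction")
```

With these made-up numbers, the big chip burns six times the power but retires eight times the instructions per second, so it spends less energy per instruction. The power envelope alone tells you nothing about which chip is more efficient at a given workload.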

However, the pitch for low-power SoCs in the data center often focuses on the end-to-end efficiency of a solution. The performance of, say, a big storage appliance might be entirely gated by the speed of its disks and network uplink. A low-power SoC might be a more appropriate choice to drive that device than a big Xeon.
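The storage-appliance argument is easy to sketch with numbers. Again, the figures below are hypothetical, picked only to show where the bottleneck lands.

```python
# Toy bottleneck check for a storage appliance.
# All figures are hypothetical, for illustration only.

DISK_COUNT = 12
DISK_MB_S = 180        # assumed sequential throughput per disk
UPLINK_MB_S = 1250     # one 10GbE uplink is roughly 1250 MB/s

aggregate_disk_mb_s = DISK_COUNT * DISK_MB_S
ceiling_mb_s = min(aggregate_disk_mb_s, UPLINK_MB_S)

print(f"Disks can supply:  {aggregate_disk_mb_s} MB/s")
print(f"Throughput ceiling: {ceiling_mb_s} MB/s")
```

In this sketch, the network uplink caps the whole box at 1250 MB/s no matter what CPU drives it. Any processor fast enough to checksum and shuffle about 1.25 GB/s is fast enough; a bigger core raises cost and idle power without raising the ceiling, which is exactly why a low-power SoC can be the better fit.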

That logic is taken to its most jarring conclusion in the case of microservers, where a bunch of lightweight user sessions might be served more efficiently by a collection of low-power CPU nodes. Microservers have been all the rage lately, but I'm still not sure that handling server-class workloads with a large collection of relatively weak CPU cores makes sense outside of a few fairly unusual players like Facebook. Heck, once that Broadwell SoC hits the market, the case for microservers based on smaller cores like Silvermont or the A57 may look awfully shaky, even from a power-efficiency standpoint.

Still, the focus on power sometimes obscures another advantage of these small SoCs: low costs. These tiny chips are cheap to make and cheap to buy, which is why Intel continues to pursue multiple microarchitectural development tracks, even when its big cores can squeeze down into single-digit power envelopes. Chips like Avoton can serve cost-sensitive markets that Xeons cannot.

Once you've seen the details, Intel's new strategy comes into sharper focus. Yes, Intel's expanded vision for its role in the data center is about potential growth. For 2016, the company estimates the market for public cloud servers will be $15 billion, and it cites analyst estimates that place the value of the distributed storage market at $21 billion and software-defined networking at $5.5 billion. But this new direction is also about denying opportunity to ARM and its partners. We all know the ARM ecosystem has been gearing up for a push into the server space. One can imagine Intel realizing that ARM and its partners have credibility in the data center—and resources—since they're already shipping inside of all sorts of routers, switches, storage appliances, firewalls, and the like. Rather than simply polishing up its Xeon processors and bracing for impact, Intel has elected to contend against ARM and its licensees directly, denying them uncontested market share and revenue.

As we noted in our recent look at ARM, the virtue of ARM's IP licensing regime is the variety of solutions the resulting ecosystem produces. Intel is facing that challenge by expanding the range of its own offerings into ARM's traditional areas of strength, including low-cost, low-power SoCs. In order to compete well against the wide variety of custom solutions out there, though, Intel will likely have to offer some products tailored to the requirements of large customers. Waxman touched on this point, mentioning that Intel has already built custom solutions for eBay and Facebook—although those evidently weren't anything too fancy, just CPUs with custom policies for dynamic CPU frequencies. He cited a "50% frequency variation depending on workload" in the eBay offering.

Custom SoCs more often integrate specific technology in order to address a particular workload efficiently. Intel traditionally hasn't built custom IP into its own silicon for partners, but Waxman offered a solution that could serve the same purpose. Intel offers two different versions of its Haswell processor, the SoC package and the GT3e variant, that include two chips on one package. Waxman noted that Intel could use this ability to build multi-chip modules (MCMs) as a means of incorporating custom IP into its SoCs in the future. One could imagine an Avoton die sharing a module with a custom video compression chip in order to better serve as the engine for, say, a video-sharing service like YouTube. Waxman didn't share details of any such projects currently in the works, but he made clear that Intel might use this MCM capability in order to win some business.

What's remarkable is how far Intel has already progressed in pursuing this strategy for re-architecting the data center, not just in terms of the chips and roadmaps, but also the platforms and tools. Waxman showed off an Intel reference platform for high-density microserver deployments that incorporates 30 Avoton compute nodes, situated on plug-in cards, into a 2U rack enclosure. The compute nodes can be upgraded by swapping in new cards.

He also displayed a reference design that crams three 2P Xeon servers into two rack units in something of a resource remix. The three servers can share PCIe and Ethernet connectivity between them, and there's no power supply unit in the box. Instead, 12V power is delivered to the box from a rack-level PSU with n+1 redundancy. Snaking out of the box is a fiber cable; that connection is driven by Intel's in-development silicon photonics technology. According to Waxman, the solution can drive four 25-gigabit connections simultaneously.

Perhaps the most radical statement of all was the reference design for a network switch chassis, front face honeycombed with Ethernet ports, that uses Xeon processors for the control plane. The box's Ethernet NICs and switch silicon are from Intel, as well. This design exists to enable Intel partners to build reconfigurable network switches based on standards like OpenStack and OpenFlow. Some of the ports on the front of such a system could be dedicated to routing or switching, while other ports could be allocated to software-defined functions like firewall protection or intrusion detection—all running on virtual machines inside the same box.

With investments in reference designs like these, Intel is obviously moving beyond the traditional server box. What happens next, as the company and its partners work to make inroads into markets where Intel hasn't played before, should be fascinating to see.
