
A new competitive landscape
Seeing Intel push so aggressively to play in this emerging CPU space is a bit surreal, and understanding it all takes some calibration. Processors in this new class are often sold on the basis of being low-power devices—and they are that, with power envelopes typically ranging from 5W to 20W—but modest power budgets are perhaps too often conflated with power efficiency, which isn't entirely the same thing. For instance, it's quite possible a larger Xeon chip would expend less energy per instruction when executing a given workload. That efficiency advantage could well extend to the rack level. There's much to be said for the deep voodoo built into those big CPU cores, at the end of the day.
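
To make the distinction concrete, here's a back-of-the-envelope sketch. The power and throughput figures are invented purely for illustration—the point is only that a chip with a bigger power envelope can still come out ahead on energy per instruction if it gets through the work that much faster.

```python
# Hypothetical comparison: a low power envelope does not automatically mean
# better energy per instruction. All figures are illustrative assumptions,
# not measured numbers for any real Xeon or low-power SoC.

def energy_per_instruction(power_watts, instructions_per_second):
    """Joules spent per instruction = power draw divided by throughput."""
    return power_watts / instructions_per_second

# Assumed figures: a 95W server chip that retires instructions much faster
# than a 20W SoC ends up spending less energy per unit of work.
big_core = energy_per_instruction(power_watts=95, instructions_per_second=200e9)
small_soc = energy_per_instruction(power_watts=20, instructions_per_second=30e9)

print(f"big core:  {big_core:.2e} J per instruction")   # ~4.8e-10 J
print(f"small SoC: {small_soc:.2e} J per instruction")  # ~6.7e-10 J
```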

However, the pitch for low-power SoCs in the data center often focuses on the end-to-end efficiency of a solution. The performance of, say, a big storage appliance might be entirely gated by the speed of its disks and network uplink. A low-power SoC might be a more appropriate choice to drive that device than a big Xeon.

That logic is taken to its most jarring conclusion in the case of microservers, where a bunch of lightweight user sessions might be served more efficiently by a collection of low-power CPU nodes. Microservers have been all the rage lately, but I'm still not sure that handling server-class workloads with a large collection of relatively weak CPU cores makes sense outside of a few fairly unusual players like Facebook. Heck, once that Broadwell SoC hits the market, the case for microservers based on smaller cores like Silvermont or the A57 may look awfully shaky, even from a power-efficiency standpoint.

Still, the focus on power sometimes obscures another advantage of these small SoCs: low cost. These tiny chips are cheap to make and cheap to buy, which is why Intel continues to pursue multiple microarchitectural development tracks, even when its big cores can squeeze down into single-digit power envelopes. Chips like Avoton can serve cost-sensitive markets that Xeons cannot.

Once you've seen the details, Intel's new strategy comes into sharper focus. Yes, Intel's expanded vision for its role in the data center is about potential growth. For 2016, the company estimates the market for public cloud servers will be $15 billion, and it cites analyst estimates that place the value of the distributed storage market at $21 billion and software-defined networking at $5.5 billion. But this new direction is also about denying opportunity to ARM and its partners. We all know the ARM ecosystem has been gearing up for a push into the server space. One can imagine Intel realizing that ARM and its partners have credibility in the data center—and resources—since they're already shipping inside of all sorts of routers, switches, storage appliances, firewalls, and the like. Rather than simply polishing up its Xeon processors and bracing for impact, Intel has elected to contend against ARM and its licensees directly, denying them uncontested market share and revenue.

As we noted in our recent look at ARM, the virtue of ARM's IP licensing regime is the variety of solutions the resulting ecosystem produces. Intel is facing that challenge by expanding the range of its own offerings into ARM's traditional areas of strength, including low-cost, low-power SoCs. In order to compete well against the wide variety of custom solutions out there, though, Intel will likely have to offer some products tailored to the requirements of large customers. Waxman touched on this point, mentioning that Intel has already built custom solutions for eBay and Facebook—although those evidently weren't anything too fancy, just CPUs with custom policies for dynamic CPU frequencies. He cited a "50% frequency variation depending on workload" in the eBay offering.
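
For a sense of what a custom dynamic-frequency policy might look like, here's a minimal sketch. The base clock, the linear scaling rule, and the utilization metric are all assumptions made for illustration; the only detail taken from Waxman's comments is the 50% frequency swing.

```python
# Toy governor that scales the clock across a 50% range with load.
# BASE_GHZ and the linear mapping are invented for illustration; this is not
# the actual policy Intel built for eBay.

BASE_GHZ = 2.0               # assumed nominal clock
MIN_GHZ = 0.5 * BASE_GHZ     # floor set 50% below nominal

def target_frequency(utilization):
    """Map a 0.0-1.0 utilization figure onto the allowed frequency range."""
    utilization = max(0.0, min(1.0, utilization))
    return MIN_GHZ + (BASE_GHZ - MIN_GHZ) * utilization

for load in (0.1, 0.5, 0.9):
    print(f"load {load:.0%}: target {target_frequency(load):.2f} GHz")
```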

Custom SoCs more often integrate specific technology in order to address a particular workload efficiently. Intel traditionally hasn't built custom IP into its own silicon for partners, but Waxman offered a solution that could serve the same purpose. Intel offers two different versions of its Haswell processor, the SoC package and the GT3e variant, that include two chips on one package. Waxman noted that Intel could use this ability to build multi-chip modules (MCMs) as a means of incorporating custom IP into its SoCs in the future. One could imagine an Avoton die sharing a module with a custom video compression chip in order to better serve as the engine for, say, a video sharing service like YouTube. Waxman didn't share details of any such projects currently in the works, but he made clear Intel might use this MCM capability in order to win some business.

What's remarkable is how far Intel has already progressed in pursuing this strategy for re-architecting the data center, not just in terms of the chips and roadmaps, but also the platforms and tools. Waxman showed off an Intel reference platform for high-density microserver deployments that incorporates 30 Avoton compute nodes, situated on plug-in cards, into a 2U rack enclosure. The compute nodes can be upgraded by swapping in new cards.

He also displayed a reference design that crams three 2P Xeon servers into two rack units in something of a resource remix. The three servers can share PCIe and Ethernet connectivity between them, and there's no power supply unit in the box. Instead, 12V power is delivered to the box from a rack-level PSU with n+1 redundancy. Snaking out of the box is a fiber cable; that connection is driven by Intel's in-development silicon photonics technology. According to Waxman, the solution can drive four 25-gigabit connections simultaneously.

Perhaps the most radical statement of all was the reference design for a network switch chassis, its front face honeycombed with Ethernet ports, that uses Xeon processors for the control plane. The box's Ethernet NICs and switch silicon are from Intel, as well. This design exists to enable Intel partners to build reconfigurable network switches based on standards like OpenStack and OpenFlow. Some of the ports on the front of such a system could be dedicated to routing or switching, while other ports could be allocated to software-defined functions like firewall protection or intrusion detection—all running on virtual machines inside the same box.
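
As a rough illustration of that port-carving idea, the sketch below maps front-panel ports to the function handling their traffic. The port ranges, role names, and dispatch logic are hypothetical; in a real system this allocation would be expressed as flow rules pushed down by an SDN controller rather than a hard-coded table.

```python
# Hypothetical port-to-function map for a reconfigurable switch. The port
# ranges and role names are invented for illustration only.

PORT_ROLES = {}
PORT_ROLES.update({p: "l2_switching" for p in range(1, 25)})   # ports 1-24: plain switching
PORT_ROLES.update({p: "firewall_vm" for p in range(25, 29)})   # ports 25-28: steered through a firewall VM
PORT_ROLES.update({p: "ids_vm" for p in range(29, 33)})        # ports 29-32: mirrored to intrusion detection

def dispatch(ingress_port):
    """Return the function that should handle traffic arriving on a port."""
    return PORT_ROLES.get(ingress_port, "drop")

print(dispatch(3))    # l2_switching
print(dispatch(27))   # firewall_vm
```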

By investing in reference designs like these, Intel is obviously moving beyond the traditional server box. What happens next, as the company and its partners work to make inroads into markets where Intel hasn't played before, should be fascinating to see.
