
Intel aims to reinvent the data center

Scott Wasson

Last week, Intel hosted an event for press and analysts where it provided some updates on the state of its data center business. Such events are usually staid affairs where the world’s largest chipmaker offers some details about the latest incremental updates to its Xeon processors and perhaps a scrap or two about Itanium, just for comic relief. After all, servers are very serious business, and one wouldn’t want to project disruptive intent—especially not Intel, the firm that has dominated the traditional server business in recent years, reaping handsome profits in the process.

Yet Intel did precisely the opposite of what you might expect. Diane Bryant, Senior VP and GM of the Datacenter and Connected Systems Group, used her opening keynote to project a somewhat radical vision of the data center of the future. In this vision, nearly every component of the data center—from servers to switches to storage to network appliances—will be re-architected to offer more flexibility and configurability.

Today’s networks, with manual provisioning and distributed control of resources, will give way to software-defined networks that can be controlled via a single, centralized interface. Those networks, from core switches to cellular base stations, will run in sophisticated software on high-volume Intel hardware. In storage, expensive SANs will be replaced by multi-tier storage services. Those services will allocate data to the appropriate tier in the network—from high-cost, high-speed SSDs through efficient cold-storage systems populated by spun-down hard disk drives—automatically, in response to application request patterns. Servers won’t be spared in the transformation, either. Instead of today’s arrangement, where discrete servers are carved up into virtual machines, the rack itself will become the basic unit of data center computing. The various pools of compute, memory, and storage capacity in each rack will be provisioned in software-defined servers, according to application needs.
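The storage-tiering idea is simple to sketch. Purely as an illustration (the tier names, thresholds, and function here are invented for this sketch, not anything Intel described), a policy might place data on a tier according to how often it has been accessed recently:

```python
# Illustrative only: a toy tiering policy that assigns objects to a
# storage tier based on recent read frequency. Tier names and
# thresholds are invented for this sketch.

def choose_tier(reads_per_day: float) -> str:
    """Map an object's recent access rate to a storage tier."""
    if reads_per_day >= 100:
        return "ssd"        # hot data: low-latency flash
    if reads_per_day >= 1:
        return "hdd"        # warm data: spinning disks
    return "cold"           # cold data: spun-down drives

objects = {"thumbnails": 5000.0, "logs-2013-06": 3.0, "backup-2012": 0.01}
placement = {name: choose_tier(rate) for name, rate in objects.items()}
print(placement)
# {'thumbnails': 'ssd', 'logs-2013-06': 'hdd', 'backup-2012': 'cold'}
```

A real service would, of course, track request patterns continuously and migrate data between tiers rather than classifying it once, but the automatic, access-driven placement is the core of the idea.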

As you might have gathered, this future data center is much better instrumented. That is, it’s aware of how applications are using the available resources. Combine that awareness with the ability to reallocate those resources dynamically, and admins should be better able to optimize the data center, eliminating bottlenecks while making fuller use of each server than in the past.

Sounds good, doesn’t it? But how will Intel make it happen? Is it developing some sort of comprehensive solution for the automated data center of tomorrow?

Not exactly. Intel’s Jason Waxman, GM and VP of the Cloud Platforms Group, spoke next, and he laid out some of the details of Intel’s plans. As he did so, it became clear that Intel will continue to be itself: a supplier of the chips, platforms, and tools. The firm will enable a host of different partners to build the sorts of solutions Bryant described.

Still, Intel’s intention to attack other areas of the data center, beyond the center of the rack where the servers sit, is a major change in strategy—and the chips, platforms, and tools to make it happen are deep into development already, with some key products due to ship in the second half of this year.

Those products include Avoton and Rangeley, a pair of low-power systems-on-a-chip (SoCs) based on the brand-new Silvermont architecture Intel announced a couple of months ago. Silvermont is the next incarnation of the microarchitecture that underpins Intel’s Atom processors. The Avoton and Rangeley SoCs are targeted specifically at data center deployments, including networking, storage, and the emerging category of microservers for cloud service providers.

Surprisingly, Waxman revealed quite a bit about Avoton and Rangeley in his presentation, including the slide above that shows the basic chip setup. These 22-nm chips each have four dual-core Silvermont modules for a total of eight cores, and they include 16 lanes of second-generation PCI Express connectivity. These are true SoCs with a full complement of interconnects on board, including SATA, USB, Gigabit Ethernet, and legacy PC I/O. The Rangeley variant includes an additional hardware block for the acceleration of cryptography, a necessary feature for its expected role in communications solutions.

Avoton and Rangeley are clearly not just phone and tablet SoCs repurposed to serve niche roles in the data center. They support true 64-bit memory addressing, protection of memory via ECC, and Intel’s ISA extensions for virtualization (up to the level of Westmere-based Xeons). With up to 64GB of capacity—via dual channels of DDR3-1600 memory—they also support quite a bit of physical RAM per node.
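Dual channels of DDR3-1600 imply a healthy amount of memory bandwidth for a low-power SoC. The figure below is standard back-of-the-envelope math from the published DDR3 spec, not a number Intel quoted:

```python
# Peak theoretical bandwidth for dual-channel DDR3-1600
# (standard calculation from the memory spec, not an Intel figure).
transfers_per_sec = 1600e6   # DDR3-1600: 1600 million transfers/s
bytes_per_transfer = 8       # one 64-bit channel moves 8 bytes/transfer
channels = 2
peak_gb_per_sec = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(peak_gb_per_sec)  # 25.6
```

That works out to a peak of 25.6 GB/s, several times what a phone-class memory subsystem of the era could deliver.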

Intel says Avoton and Rangeley have been sampling to customers for months, and final products are expected to ship later in 2013. Although some ARM partners like Calxeda already have products in the market, Avoton and Rangeley should be beefier than the current low-power server SoCs in terms of core counts, connectivity, and true 64-bit memory addressing.

Intel is claiming big gains in performance and power efficiency for the Avoton/Rangeley-based Atom C2000 series versus the prior-generation Atom S1200 series. Some of those improvements come courtesy of the more potent Silvermont CPU microarchitecture, but I believe higher core counts, faster networking, and larger memory capacity also play a role in those claims.

In addition to pulling back the curtain on a lot of details about Rangeley and Avoton, Waxman announced that Intel is developing two more SoCs for release next year. As part of Intel’s new tick-tock development cycle for the Atom, a chip code-named “Denverton” will succeed Avoton at some point in 2014. Denverton will employ the same basic CPU architecture, but it will be built on Intel’s 14-nm fabrication process with the firm’s second generation of 3D transistors. (It’s a bit jarring to think how far ahead of the competition Intel may be at that time. For instance, AMD’s Seattle chip based on ARM Cortex-A57 cores is slated for the second half of 2014, and it’s expected to be built on a 28-nm fabrication process that uses a planar transistor structure.)

That’s only one of the SoCs. The other one will be a server-focused SoC based on Broadwell, the 14-nm follow-on to Intel’s Haswell processor. As you may know, Haswell has been tuned extensively for low-power operation in ultrabooks. Broadwell should extend that trajectory further, which makes it potentially a very nice fit for those spots where an Avoton might do but a little more computing power would be preferable. We don’t know much yet about this Broadwell-based SoC, but Waxman confirmed that it will be a single chip with integrated I/O, storage, and networking. It will likely target many of the same segments as Avoton and Rangeley, including microservers, networking, and the like. Waxman didn’t offer any specifics about power envelopes, but Haswell already stretches from under 5W to over 80W. I’d expect the Broadwell SoC to play at least in the lower half of that range.

A new competitive landscape
Seeing Intel push so aggressively to play in this emerging CPU space is a bit surreal, and understanding it all takes some calibration. Processors in this new class are often sold on the basis of being low-power devices—and they are that, with power envelopes typically ranging from 5W to 20W—but modest power budgets are perhaps too often conflated with power efficiency, which isn’t entirely the same thing. For instance, it’s quite possible a larger Xeon chip would expend less energy per instruction when executing a given workload. That efficiency advantage could well extend to the rack level. There’s much to be said for the deep voodoo built into those big CPU cores, at the end of the day.
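The distinction between power and efficiency is worth making concrete. With entirely hypothetical numbers (the wattages and runtimes below are invented for illustration), a high-power chip that finishes a job quickly can use less total energy than a low-power chip that grinds away at it:

```python
# Illustrative numbers only: power draw is not the same as energy per
# unit of work. Energy = power x time, so a faster, hotter chip can
# finish a fixed job having consumed fewer joules overall.

def energy_joules(power_watts: float, runtime_sec: float) -> float:
    """Total energy consumed to complete a job."""
    return power_watts * runtime_sec

big_core_energy   = energy_joules(80.0, 1.0)   # fast, power-hungry: 80 J
small_core_energy = energy_joules(10.0, 20.0)  # slow, frugal: 200 J
print(big_core_energy < small_core_energy)     # True
```

In this made-up scenario, the 80W part is the more efficient one for the job, despite its far larger power envelope—which is exactly why per-task energy, not TDP, is the number that matters at the rack level.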

However, the pitch for low-power SoCs in the data center often focuses on the end-to-end efficiency of a solution. The performance of, say, a big storage appliance might be entirely gated by the speed of its disks and network uplink. A low-power SoC might be a more appropriate choice to drive that device than a big Xeon.

That logic is taken to its most jarring conclusion in the case of microservers, where a bunch of lightweight user sessions might be served more efficiently by a collection of low-power CPU nodes. Microservers have been all the rage lately, but I’m still not sure that handling server-class workloads with a large collection of relatively weak CPU cores makes sense outside of a few fairly unusual players like Facebook. Heck, once that Broadwell SoC hits the market, the case for microservers based on smaller cores like Silvermont or the A57 may look awfully shaky, even from a power-efficiency standpoint.

Still, the focus on power sometimes obscures another advantage of these small SoCs: low costs. These tiny chips are cheap to make and cheap to buy, which is why Intel continues to pursue multiple microarchitectural development tracks, even when its big cores can squeeze down into single-digit power envelopes. Chips like Avoton can serve cost-sensitive markets that Xeons cannot.

Once you’ve seen the details, Intel’s new strategy comes into sharper focus. Yes, Intel’s expanded vision for its role in the data center is about potential growth. For 2016, the company estimates the market for public cloud servers will be $15 billion, and it cites analyst estimates that place the value of the distributed storage market at $21 billion and software-defined networking at $5.5 billion. But this new direction is also about denying opportunity to ARM and its partners. We all know the ARM ecosystem has been gearing up for a push into the server space. One can imagine Intel realizing that ARM and its partners have credibility in the data center—and resources—since they’re already shipping inside of all sorts of routers, switches, storage appliances, firewalls, and the like. Rather than simply polishing up its Xeon processors and bracing for impact, Intel has elected to contend against ARM and its licensees directly, denying them uncontested market share and revenue.

As we noted in our recent look at ARM, the virtue of ARM’s IP licensing regime is the variety of solutions the resulting ecosystem produces. Intel is facing that challenge by expanding the range of its own offerings into ARM’s traditional areas of strength, including low-cost, low-power SoCs. In order to compete well against the wide variety of custom solutions out there, though, Intel will likely have to offer some products tailored to the requirements of large customers. Waxman touched on this point, mentioning that Intel has already built custom solutions for eBay and Facebook—although those evidently weren’t anything too fancy, just CPUs with custom policies for dynamic CPU frequencies. He cited a “50% frequency variation depending on workload” in the eBay offering.

Custom SoCs more often integrate specific technology in order to address a particular workload efficiently. Intel traditionally hasn’t built custom IP into its own silicon for partners, but Waxman offered a solution that could serve the same purpose. Intel offers two different versions of its Haswell processor, the SoC package and the GT3e variant, that include two chips on one package. Waxman noted that Intel could use this ability to build multi-chip modules (MCMs) as a means of incorporating custom IP into its SoCs in the future. One could imagine an Avoton die sharing a module with a custom video compression chip in order to better serve as the engine for, say, a video sharing service like YouTube. Waxman didn’t share any details of such projects currently in the works, but he made clear Intel might use this MCM capability in order to win some business.

What’s remarkable is how far Intel has already progressed in pursuing this strategy for re-architecting the data center, not just in terms of the chips and roadmaps, but also the platforms and tools. Waxman showed off an Intel reference platform for high-density microserver deployments that incorporates 30 Avoton compute nodes, situated on plug-in cards, into a 2U rack enclosure. The compute nodes can be upgraded by swapping in new cards.

He also displayed a reference design that crams three 2P Xeon servers into two rack units in something of a resource remix. The three servers can share PCIe and Ethernet connectivity between them, and there’s no power supply unit in the box. Instead, 12V power is delivered to the box from a rack-level PSU with n+1 redundancy. Snaking out of the box is a fiber cable; that connection is driven by Intel’s in-development silicon photonics technology. According to Waxman, the solution can drive four 25-gigabit connections simultaneously.

Perhaps the most radical statement of all was the reference design for a network switch chassis, front face honeycombed with Ethernet ports, that uses Xeon processors for the control plane. The box’s Ethernet NICs and switch silicon are from Intel, as well. This design exists to enable Intel partners to build reconfigurable network switches based on standards like OpenStack and OpenFlow. Some of the ports on the front of such a system could be dedicated to routing or switching, while other ports could be allocated to software-defined functions like firewall protection or intrusion detection—all running on virtual machines inside the same box.
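The match-action model at the heart of that kind of reconfigurable switch is easy to illustrate. The sketch below is a conceptual toy, not the actual OpenFlow protocol—the rule fields and action names are invented—but it shows the basic idea: a software-controlled table decides, per flow, whether traffic is switched directly or handed to a virtualized function:

```python
# A toy match-action flow table in the spirit of OpenFlow-style
# switching. Conceptual sketch only: field names and actions are
# invented, and real switches match on many more header fields.

flow_table = [
    ({"dst_port": 22}, "send_to_intrusion_detection_vm"),
    ({"dst_port": 80}, "forward_fast_path"),
    ({},               "send_to_firewall_vm"),  # empty match = wildcard default
]

def classify(packet: dict) -> str:
    """Return the action for the first rule whose fields all match."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(classify({"dst_port": 80}))   # forward_fast_path
print(classify({"dst_port": 443}))  # send_to_firewall_vm
```

The point of the centralized-control model is that an administrator can repopulate a table like this from one place, instantly changing which ports do plain switching and which feed software-defined services.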

When it’s investing in building reference designs like this one, Intel is obviously moving beyond the traditional server box. What happens next, as the company and its partners work to make inroads into markets where Intel hasn’t played before, should be fascinating to see.
