With the broad success of ARM-based devices in the mobile market and the introduction of a 64-bit version of the ARM instruction set, a host of companies are rumored to be developing ARM-based server products. x86 stalwart AMD is among them. Most of these server products are likely to be "microserver"-style systems, which concentrate large numbers of small, low-power CPU cores into a single enclosure. Microservers have been touted as very efficient, particularly in terms of energy use, for certain types of workloads, including cloud services and virtualization.
Can microservers really carve out a sizable portion of the lucrative server market? Doing so isn't easy, as AMD itself can attest, with its Opteron products struggling for a foothold against Intel's dominant Sandy Bridge-EP processors. For one thing, Amdahl's Law stands as a stubborn roadblock to the magical formula of simply adding more cores and threads: overall speedup is ultimately limited by the serial portions of a workload, no matter how many cores are thrown at the parallel parts.
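The limit Amdahl's Law imposes is easy to see with a quick back-of-the-envelope calculation. Here's a minimal sketch; the 10% serial fraction is an illustrative assumption, not a figure from Kanter's article:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's Law: overall speedup on n cores when a fixed
    fraction of the work can only run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With just 10% of a workload stuck in serial code (an assumed figure),
# piling on cores hits sharply diminishing returns:
for n in (4, 16, 64, 256):
    print(f"{n:>3} cores -> {amdahl_speedup(0.10, n):.1f}x speedup")
# Even infinite cores cap out at 1/0.10 = 10x.
```

That 10x ceiling, reached asymptotically no matter how many small cores are added, is why a many-core microserver can't afford to ignore per-thread performance entirely.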
So what are the prospects? Over at RealWorldTech, David Kanter comes at the problem from an unusual angle by considering the amount of die area that past and present server processor designs dedicate to three main components: cores, cache, and system infrastructure. Without giving everything away, he finds that the choices made by architects of server processors haven't varied terribly much, in terms of proportional die area. "Microserver"-style designs like Sun's Niagara have simply traded a handful of complex cores for a larger number of simpler ones while keeping similar overall ratios of compute, cache, and system infrastructure. Those ratios were likely dictated by the demands of server-class workloads, which means there's perhaps less architectural leeway than one might suppose when building a general-purpose server-class CPU.
Kanter doesn’t see any substantial opportunity for ARM-based solutions to distinguish themselves with a more efficient instruction set, either. That should be no great surprise to those who have watched x86 chips clobber purportedly "simpler" and "more efficient" competition over the past 20 years.
He reckons that leaves the makers of microservers facing a tradeoff. They can probably win some specific types of customers by biasing their architectures in certain ways—say, providing much more compute and less cache to target specific workloads—but in doing so, they’ll limit the size of their potential markets. That approach may work to some degree. (Flexibility and quick time-to-market have been the two hallmarks of ARM-based SoCs, in fact.) But economies of scale are crucial to profitable chipmaking, and the market for tailored server solutions generally doesn’t involve the same sort of large potential customer bases as smartphones and consumer electronics.
I really won't give away everything; you'll have to read the article to see how much room Kanter thinks there is for would-be makers of microserver solutions to succeed. Given the issues he's outlined, though, the path to success looks rather less easy than the initial hype would suggest.