AMD throws some Epyc shade on Xeon-powered servers


— 11:54 AM on May 17, 2017

During a portion of its 2017 Financial Analyst Day, AMD made a splash by officially showing its 32-core Epyc (née Naples) server chips. Each of those chips comprises four eight-core Zen dies, connects to eight memory channels, and offers 128 lanes of PCIe 3.0 connectivity.

Those specs are enticing enough on their own, but the first questions on every datacenter administrator's mind are how these CPUs actually perform and how AMD intends to fit them into rack units. As it turns out, AMD isn't trying to aim cannonballs at top-end Xeon servers. Instead, the company wants to take a slice from the meaty middle of the two-socket (2S) server market, and it's positioning its Epyc offerings as single-socket alternatives that should deliver the same performance from much smaller boards, thereby potentially lowering power and cooling requirements.

To illustrate the point, AMD offered a demonstration of a 2S Epyc system going up against a pair of high-end Xeon E5-2699A v4 CPUs, each with 22 cores and 55 MB of cache. AMD's machine had 256 GB of RAM thanks to its eight memory channels, while the Intel-powered server was rolling with 128 GB. The Epyc box finished a Linux kernel compile in 15.7 seconds, versus 22.5 seconds for the Intel system, a throughput advantage of around 43%.

Those figures may raise a couple of eyebrows on their own, but AMD proceeded to point out that not a whole lot of systems ship with the mighty Xeon E5-2699A v4 onboard. According to the company, servers powered by Intel's Xeon E5-262x through E5-265x series make up the bulk of shipments, and the E5-264x series is the best seller of that bunch. AMD then offered another demonstration: a single-socket Epyc system with 128 GB of RAM going up against a 2S box with Xeon E5-2650 v4 CPUs (12 cores each). For a Linux kernel compile, the Epyc system took 33.7 seconds, while the Xeon box did its work in 37.2 seconds. While this is but a single benchmark, it's nonetheless impressive, and it lends some credence to AMD's plan of going after two-socket systems with single-socket Epyc offerings.
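For the curious, here's the arithmetic behind those comparisons. This is just a sanity check on the compile times quoted above; the little helper function is ours, not part of any benchmark suite AMD used.

```python
# Quick check on the speedups implied by AMD's demo numbers.
# The compile times come from the demos described above; everything else is plain arithmetic.

def speedup_pct(baseline_s: float, contender_s: float) -> float:
    """Percent throughput advantage of the contender over the baseline system."""
    return (baseline_s / contender_s - 1.0) * 100.0

# Demo 1: 2S Epyc (15.7 s) vs. 2S Xeon E5-2699A v4 (22.5 s)
print(f"2S Epyc vs. 2S E5-2699A v4: {speedup_pct(22.5, 15.7):.0f}% faster")  # ~43%

# Demo 2: 1S Epyc (33.7 s) vs. 2S Xeon E5-2650 v4 (37.2 s)
print(f"1S Epyc vs. 2S E5-2650 v4:  {speedup_pct(37.2, 33.7):.0f}% faster")  # ~10%
```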

AMD offered some more figures to support its case. The company says that a 1S Epyc system has "significantly" lower power consumption than a comparable 2S Intel-powered machine and can offer a 30% lower total cost of ownership (TCO). Taking a direct jab at Intel's world-famous and world-hated product segmentation, AMD says that all Epyc CPUs are "unrestrained," meaning that the entire lineup, from the lowest-end model to the fanciest CPU, will offer the same PCIe connectivity, all eight memory channels, the same security stack, and the same platform feature set.

Finally, the company played up Epyc's I/O (namely those 128 PCIe lanes) as an advantage for machine learning applications (you thought you'd read an entire post without those words?). As a sample case study, AMD showed what it takes for an Intel system to drive six compute accelerator cards: two CPUs, a storage controller, two sets of eight DIMMs, and two PCIe switches. In comparison, an Epyc system would need just a single CPU connected directly to the drives, 16 DIMM slots, and all six compute cards at once.
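To put those 128 lanes in perspective, here's a rough, back-of-the-envelope lane budget for the single-socket case. The x16 and x4 link widths and the drive count are typical assumptions on our part, not figures from AMD's slide.

```python
# Back-of-the-envelope PCIe lane budget for a hypothetical single-socket Epyc box.
# Link widths and device counts below are illustrative assumptions, not AMD-provided data.

EPYC_LANES = 128           # PCIe 3.0 lanes on one Epyc socket (per AMD's spec)

accelerators = 6           # compute cards, as in AMD's example
lanes_per_accel = 16       # assume a full x16 link per card
nvme_drives = 4            # illustrative storage count
lanes_per_nvme = 4         # typical x4 link per NVMe drive

used = accelerators * lanes_per_accel + nvme_drives * lanes_per_nvme
print(f"Lanes used: {used} of {EPYC_LANES}, leaving {EPYC_LANES - used} for NICs and other I/O")
# -> Lanes used: 112 of 128, leaving 16 for NICs and other I/O
```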

Only time will tell whether AMD's Epyc moves in the server space will pan out, but we're at least cautiously optimistic. After all, I can barely even remember the days when I saw "Opteron" in a text console.
