Why did Bulldozer disappoint? Some possible answers


— 11:12 AM on May 31, 2012

AMD's "Bulldozer" microarchitecture has been something of a disappointment, particularly in the FX desktop processors, where it doesn't consistently outperform AMD's prior-generation Phenom II chips. Since Bulldozer is the first full refresh of AMD's primary x86 architecture in many years, we've been left with lots of questions about why, exactly, the new microarchitecture hasn't performed up to expectations.

There are some obvious contributors, including lower-than-expected clock speeds and thread scheduling problems. Then again, using the Microsoft patches for Bulldozer scheduling didn't seem to help much during the testing for our Ivy Bridge review.

Some folks have speculated that one or two very specific problems with Bulldozer chips, such as relatively high cache latencies, were the culprit, which offered hope for a quick fix. However, the host of improvements AMD made to the "Piledriver" cores in its Trinity APU each offered per-clock instruction throughput gains of 1% or less, adding up to relatively modest progress overall. There was no single, big change that fixed everything.

Now, Johan De Gelas has shed a little more light on the Bulldozer mystery with a careful analysis of Opteron performance in various server-oriented workloads, and his take is very much worth reading. He offers some intriguing possible reasons for Bulldozer's weak performance in certain scenarios, and those reasons aren't limited to cache latencies. Instead, he pinpoints this architecture's inability to hide its branch misprediction penalty, the low associativity of its L1 instruction cache, and, yes, a focus on server workloads as the most likely problem areas. There is hope for future IPC improvements, but some of those will have to wait for the generation beyond Piledriver, whose outlines we already know.
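For readers curious what a branch misprediction penalty looks like in practice, here is a minimal, hypothetical C sketch; it is not taken from De Gelas's analysis and doesn't measure anything Bulldozer-specific. It times the same conditional sum over random data and then over sorted data, where the branch becomes easy to predict. On a chip with a deep pipeline and a long misprediction penalty, the gap between the two runs tends to be larger. (Note that some compilers convert the branch into a conditional move at high optimization levels, which hides the effect.)

```c
/* Hypothetical sketch: observe the cost of unpredictable branches.
 * Sums elements above a threshold over unsorted (random) data, then
 * over the same data sorted so the branch becomes predictable.
 * Compile with: cc -O1 branch_demo.c -o branch_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static int cmp_int(const void *a, const void *b)
{
    const int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static long long sum_above(const int *data, int n, int threshold)
{
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] > threshold)   /* the branch being timed */
            sum += data[i];
    return sum;
}

static double seconds(clock_t start, clock_t end)
{
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    if (!data)
        return 1;

    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    /* Unsorted: branch outcomes are effectively random, so the
     * predictor misses often and each miss flushes the pipeline. */
    clock_t t0 = clock();
    long long s1 = sum_above(data, N, 128);
    clock_t t1 = clock();

    /* Sorted: outcomes form one long run of "not taken" followed by
     * "taken", which nearly any branch predictor handles well. */
    qsort(data, N, sizeof *data, cmp_int);
    clock_t t2 = clock();
    long long s2 = sum_above(data, N, 128);
    clock_t t3 = clock();

    printf("unsorted: %.3f s (sum %lld)\n", seconds(t0, t1), s1);
    printf("sorted:   %.3f s (sum %lld)\n", seconds(t2, t3), s2);

    free(data);
    return 0;
}
```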
