With a little help from AMD, researchers at North Carolina State University have been able to increase the performance of general-purpose GPU computing by an average of over 20%. Their method requires a "fused architecture" in which the CPU and GPU reside on the same die and share both last-level cache and system memory. AMD's Llano-based A-series APUs qualify, as do Intel's Sandy Bridge processors.
As this press release explains, the approach involves using the CPU to prefetch data for the GPU. Dr. Huiyang Zhou, an associate professor of electrical and computer engineering who co-authored the team's paper on the subject, contends that the approach works because it lets CPUs and GPUs focus on their respective strengths. Each is capable of fetching data from memory at "approximately the same speed," but GPUs are much quicker to crunch the data once they have it. With the CPU fetching data for the GPU, which can then pull that data directly from the shared cache, the researchers observed performance increases as high as 113%.
Obviously, some workloads will benefit more than others. The fact that this approach requires a processor with integrated graphics is also rather limiting, at least in practical terms. Even the fastest integrated GPUs are painfully slow compared to the performance offered by discrete graphics hardware. Thanks to CPU World for the tip.