With a little help from AMD, researchers at North Carolina State University have been able to increase the performance of general-purpose GPU computing by an average of over 20%. Their method requires a "fused architecture" in which the CPU and GPU reside on the same die and share both last-level cache and system memory. AMD's Llano-based A-series APUs qualify, as do Intel's Sandy Bridge processors.
As this press release explains, the approach uses the CPU to prefetch data for the GPU. Dr. Huiyang Zhou, the associate professor of electrical and computer engineering who co-authored the team's paper on the subject, contends that the technique works because it lets CPUs and GPUs focus on their respective strengths. Each can fetch data from memory at "approximately the same speed," but GPUs are much quicker to crunch the data once they have it. With the CPU fetching data that the GPU can then pull directly from the shared cache, the researchers observed performance increases as high as 113%.
Obviously, some workloads will benefit more than others. The fact that this approach requires a processor with integrated graphics is also rather limiting, at least in practical terms. Even the fastest integrated GPUs are painfully slow compared to the performance offered by discrete graphics hardware. Thanks to CPU World for the tip.