ATI's stream computing kickoff
Last Friday, ATI invited a number of journalists and analysts to a short but information-packed event devoted to its new stream computing initiative. ATI is using the phrase "stream computing" to refer to the class of applications more commonly grouped under the GPGPU label, an acronym for general-purpose computing on graphics processing units. CEO Dave Orton explained that ATI chose the term stream computing because the class of computing problems the GPU handles well is primarily about data flow, a characteristic that separates these problems from the types of computation at which CPUs have traditionally excelled.
Orton identified a number of specific areas where ATI sees opportunities for GPUs to accelerate computation, including medical research, analysis of video and audio data for security applications (such as facial recognition), financial analysis, seismic modeling for oil and gas exploration, media search applications, physics simulations in video games, and media encoding.
In these areas, he said, the GPU has the potential to be "orders of magnitude" faster than CPUs due to its nature as a highly parallel floating-point processor. Orton pegged the floating-point power of today's top Radeon GPUs with 48 pixel shader processors at about 375 gigaflops, with 64 GB/s of memory bandwidth. The next generation, he said, could potentially have 96 shader processors and will exceed half a teraflop of computing power.
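As a quick sanity check on those figures, the arithmetic below divides the claimed peak by the shader count and scales it to the rumored next-generation part. Note that ATI gave no per-shader breakdown, and the assumption that a 96-shader chip would keep the same per-shader rate (implying identical clocks) is purely illustrative:

```python
# Back-of-the-envelope arithmetic on the figures Orton quoted (illustrative only).
shaders = 48              # pixel shader processors in today's top Radeon
peak_gflops = 375         # claimed peak floating-point rate, in gigaflops

per_shader = peak_gflops / shaders
print(f"~{per_shader:.1f} GFLOPS per shader")  # ~7.8

# Scaling the same per-shader rate to a hypothetical 96-shader part:
next_gen = per_shader * 96
print(f"projected next-gen peak: ~{next_gen:.0f} GFLOPS")  # ~750
```

On these assumptions, a 96-shader chip lands around 750 gigaflops, comfortably past the "half a teraflop" Orton promised even if clocks don't improve at all.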
Orton was quick to emphasize that ATI is not looking to compete directly with CPUs, just to find and address a set of problems that map especially well to the GPU. He described the CPU-GPU relationship as complementary and symbiotic. He also made it clear that the day's events were not part of a new product launch. ATI is simply inaugurating a new direction in seeking out this business, he said, and showcasing some actual applications where the GPU has been fruitfully applied.
Much of the rest of the event was devoted to speakers who had actual stream computing applications to discuss or demo.
First among them was Vijay Pande of Stanford University, Professor of Chemistry and Director of the Folding@Home project. TR readers should already be familiar with Folding, since we field one of the top ten Folding teams in the world. Pande was there to talk about the new beta Folding client that uses the GPU. Currently, it only runs on newer Radeons, where it shows big performance increases of 20 to 40 times the speed of a CPU. Pande said the client is presently achieving around 100 gigaflops per GPU. To give some perspective, he then demonstrated the graphical versions of the CPU and GPU clients side by side: the GPU version showed constant motion, while the CPU one chunked along at a few frames per second.
This particular implementation of stream computing has now gone live. The FAH project released the first beta of the client to the public earlier this week.
I talked with Pande about the possibility of a Folding client for Nvidia GPUs, and he had some interesting things to say. The Folding team has obviously been working with Nvidia as well as ATI. In fact, Pande said Nvidia has their code and is running it internally. At present, though, ATI's GPUs are about eight times as fast as Nvidia's. He was hopeful Nvidia could close that gap, but noted that even a 4X gap is pretty large, and ATI is getting faster all the time.
The bottom line for Pande and his colleagues, of course, is how Folding on a GPU can further research about diseases like Parkinson's and Alzheimer's. Pande characterized the move to GPU Folding as one that opens new possibilities.