Programmers say using CUDA is tough
The folks at Ars Technica have spotted a couple of interesting blog posts about CUDA, Nvidia's new general-purpose application programming interface for GeForce 8-series graphics cards. The first post is by Bryan O'Sullivan, a programmer who worked on a compiler upon which the CUDA compiler is based. According to O'Sullivan, CUDA is incredibly complex: it forces programmers to juggle three different global memory types, a complex thread hierarchy, and a compiler that apparently fails to automate many tasks. "It's worth reading the programmer's guide in its entirety," he writes, "to get a sense of just how complex CUDA is, and how many different constraints the determined application programmer will have to keep in mind at a time."
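For readers who haven't seen CUDA code, a minimal sketch (our own, not from O'Sullivan's post) shows the kinds of constructs he's describing: distinct memory spaces with different scopes and lifetimes, plus the block/thread indexing scheme the programmer must manage by hand.

```cuda
#include <cuda_runtime.h>

__constant__ float scale;          // constant memory: read-only, device-wide

__global__ void scale_copy(const float *in, float *out, int n)
{
    __shared__ float tile[256];    // shared memory: per-block scratchpad

    // Thread hierarchy: a grid of blocks, each block a group of threads.
    // The programmer computes a flat index from the two levels manually.
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n)
        tile[threadIdx.x] = in[i] * scale;  // in/out reside in global memory

    __syncthreads();               // barrier: all threads in this block only

    if (i < n)
        out[i] = tile[threadIdx.x];
}
```

Even in a toy kernel like this, the programmer is tracking three memory spaces, two levels of thread indexing, and an explicit synchronization barrier, which gives some sense of where the complexity complaints come from.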
The second blog post is by Michael Suess, a German programmer doing a PhD on parallel programming. Suess references O'Sullivan's post and quotes a student who looked into CUDA after working with the Cell processor during an internship at IBM. The student's verdict was blunt: "Working on the Cell was soo much easier!" Both Suess and O'Sullivan agree that organizations like government agencies and Wall Street firms will undoubtedly get "phenomenal speedups" by using CUDA, but that the technology is far from accessible just yet.