With the OpenCL specification now complete, the spotlight is turning toward AMD and Nvidia and their plans for supporting the new general-purpose GPU programming interface. AMD announced earlier this week that it will release a "developer version" of its ATI Stream SDK with OpenCL support in the first half of next year, but Nvidia was somewhat more vague—even though one of its VPs actually chairs the OpenCL working group.
This morning, we spoke to Nvidia CUDA General Manager (and former Ageia CEO) Manju Hegde to learn more about Nvidia's OpenCL plans and how those plans relate to CUDA. Apparently, Nvidia has updated its terminology somewhat: CUDA now refers solely to the architecture that lets Nvidia GPUs run general-purpose apps, and the programming language Nvidia has been pushing is now known as C for CUDA. Hegde made it clear that CUDA is meant to support many languages and APIs, from OpenCL and DirectX 11 Compute Shaders to Fortran.
Hegde also gave the impression that Nvidia doesn't see OpenCL as a competitor to C for CUDA, since the new API should pave the way for a greater number of GPGPU applications. The more GPGPU apps come out, the more chips Nvidia will sell—and that's the whole point, as far as the company is concerned. (Nvidia makes no secret of the link between Apple's spearheading of OpenCL and its decision to put GeForce GPUs in all of its new MacBooks, either.)
So, with that out of the way, when can we expect to run OpenCL software on our GeForces? Nvidia plans to introduce beta OpenCL support in the first quarter of next year, with a "full implementation" to follow in the second quarter. The company can't move any faster, Hegde explained, because the OpenCL working group "has not completed its conformance sets, which are essential to release an implementation, and they expect it will take a couple of months."
We went on to ask about some of the differences between C for CUDA and OpenCL. According to Hegde, OpenCL is designed to be "OpenGL-like" in that it gives developers complete hardware access and expects them to handle "all the tedious hardware housekeeping" like initializing devices, allocating buffers, and managing memory. By contrast, C for CUDA offers two styles of programming: a high-level style where "the abstraction level is at the same level as C," and a driver-level API that's on "the same level as OpenCL."
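As a rough illustration (not from the interview), here is what the high-level C for CUDA style looks like next to the housekeeping steps OpenCL expects the developer to handle; the kernel, function names, and sizes below are hypothetical:

```cuda
// Hypothetical sketch of the high-level C for CUDA style.
// The runtime initializes the device implicitly, and the kernel
// launch reads like a C function call with an extra <<<...>>> clause.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

void scale_on_gpu(float *host, int n)
{
    float *dev;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&dev, bytes);                       // allocate device memory
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // copy input up
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);          // launch the kernel
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // copy results back
    cudaFree(dev);
}

// The "OpenGL-like" OpenCL path makes every step explicit instead:
//   clGetPlatformIDs / clGetDeviceIDs      -> enumerate and pick a device
//   clCreateContext / clCreateCommandQueue -> initialize the device
//   clCreateBuffer / clEnqueueWriteBuffer  -> allocate buffers, move data
//   clCreateProgramWithSource, clBuildProgram, clCreateKernel
//                                          -> compile the kernel at run time
//   clSetKernelArg / clEnqueueNDRangeKernel-> bind arguments and launch
```

The comment block at the end names the real OpenCL 1.0 entry points for each housekeeping step Hegde described; CUDA's driver-level API involves a comparable sequence, which is why the two sit at "the same level."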
Hegde told us the vast majority of developers using C for CUDA favor the higher-level style. That applies particularly to developers writing scientific applications: those folks may be experts in their fields with a good grasp of C, but they don't necessarily care to learn the intricacies of the computing hardware.
We were also curious about the potential performance differences between OpenCL and C for CUDA apps. Hegde noted that performance largely depends on how programmers break up their algorithms into multiple threads and match them to the host hardware's architecture. The high-level flavor of C for CUDA might induce "some performance loss" compared to a lower-level approach, but Hegde said that's a small consideration. Because Nvidia also offers a driver-level API, OpenCL and C for CUDA should be in almost a "dead heat" in terms of compute performance.