With the OpenCL specification now complete, the spotlight is turning toward AMD and Nvidia and their plans for supporting the new general-purpose GPU programming interface. AMD announced earlier this week that it will release a "developer version" of its ATI Stream SDK with OpenCL support in the first half of next year, but Nvidia was somewhat more vague—even though one of its VPs actually chairs the OpenCL working group.
This morning, we spoke to Nvidia CUDA General Manager (and former Ageia CEO) Manju Hegde to learn more about Nvidia's OpenCL plans and how those plans relate to CUDA. Apparently, Nvidia has updated its terminology somewhat: CUDA now refers solely to the architecture that lets Nvidia GPUs run general-purpose apps, and the programming language Nvidia has been pushing is now known as C for CUDA. Hegde made it clear that CUDA is meant to support many languages and APIs, from OpenCL and DirectX 11 Compute Shaders to Fortran.
Hegde also gave the impression that Nvidia doesn't see OpenCL as a competitor to C for CUDA, since the new API should pave the way for a greater number of GPGPU applications. The more GPGPU apps come out, the more chips Nvidia will sell—and that's the whole point, as far as the company is concerned. (Nvidia makes no secret of the link between Apple's spearheading of OpenCL and Apple's decision to put GeForce GPUs in all of its new MacBooks, either.)
So, with that out of the way, when can we expect to run OpenCL software on our GeForces? Nvidia plans to introduce beta OpenCL support in the first quarter of next year, with a "full implementation" to follow in the second quarter. The company can't move any faster, Hegde explained, because the OpenCL working group "has not completed its conformance sets, which are essential to release an implementation, and they expect it will take a couple of months."
We went on to ask about some of the differences between C for CUDA and OpenCL. According to Hegde, OpenCL is designed to be "OpenGL-like" in that it gives developers complete hardware access and expects them to handle "all the tedious hardware housekeeping" like initializing devices, allocating buffers, and managing memory. By contrast, C for CUDA offers two styles of programming: a high-level style where "the abstraction level is at the same level as C," and a driver-level API that's on "the same level as OpenCL."
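To make the contrast concrete, here is a minimal sketch of the high-level style Hegde describes, assuming a simple vector-add kernel (the kernel and variable names are our own illustration, not from the interview). Note how the CUDA runtime initializes the device implicitly and the `<<<...>>>` syntax launches the kernel; an equivalent OpenCL host program would first have to query platforms and devices, create a context and command queue, compile the kernel source at runtime, and bind each argument by hand.

```cuda
// Illustrative sketch of the high-level C for CUDA style.
// Device initialization is implicit in the runtime API; contrast with
// OpenCL's explicit platform/context/queue/program setup.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float hostA[1024], hostB[1024], hostC[1024];
    float *devA, *devB, *devC;

    for (int i = 0; i < n; i++) { hostA[i] = (float)i; hostB[i] = 2.0f * i; }

    // Buffer allocation and transfers: a few runtime calls, no
    // context or command-queue objects to manage.
    cudaMalloc(&devA, bytes);
    cudaMalloc(&devB, bytes);
    cudaMalloc(&devC, bytes);
    cudaMemcpy(devA, hostA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(devB, hostB, bytes, cudaMemcpyHostToDevice);

    // Kernel launch: grid and block dimensions in the <<<...>>> syntax.
    vecAdd<<<(n + 255) / 256, 256>>>(devA, devB, devC, n);

    cudaMemcpy(hostC, devC, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", hostC[10]);  // 10 + 20 = 30.0

    cudaFree(devA);
    cudaFree(devB);
    cudaFree(devC);
    return 0;
}
```

In the driver-level API that Hegde places on "the same level as OpenCL," the programmer would instead load a compiled module and configure the launch explicitly, which is roughly the housekeeping OpenCL asks of every developer.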
Hegde told us the vast majority of developers using C for CUDA favor the higher-level style. That applies particularly to developers writing scientific applications, since those folks may be experts in their fields and have a good grasp of C, but they might not necessarily care to learn the intricacies of the computing hardware.
We were also curious about the potential performance differences between OpenCL and C for CUDA apps. On that front, Hegde noted that performance largely depends on how programmers partition their algorithms into threads and map them onto the host hardware's architecture. The high-level flavor of C for CUDA might incur "some performance loss" compared to a lower-level approach, but Hegde called that a small consideration. Because Nvidia also offers a driver-level API, OpenCL and C for CUDA should be nearly in a "dead heat" in terms of compute performance.