OpenCL 1.0 spec released, AMD and Nvidia are on board

Right on schedule, the Khronos Group has released the first version of the OpenCL specification. The 1.0 spec has been ratified by a large number of companies, including AMD, Apple, Broadcom, Intel, Motorola, Nokia, Nvidia, and many others. Since this is a public release, developers can grab the spec itself and documentation right now through this page.

If you’ve been tuning out the constant flow of stream computing news lately, OpenCL stands for Open Computing Language, and Khronos defines it as “the first open, royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices.” Here’s a more detailed rundown:

OpenCL enables software developers to take full advantage of a diverse mix of multi-core CPUs, Graphics Processing Units (GPUs), Cell-type architectures and other parallel processors such as Digital Signal Processors (DSPs). OpenCL consists of an API for coordinating parallel computation and a programming language for specifying those computations. Specifically, the OpenCL standard defines:

  • a subset of the C99 programming language with extensions for parallelism;
  • an API for coordinating data and task-based parallel computation across a wide range of heterogeneous processors;
  • numerical requirements based on the IEEE 754 floating-point standard;
  • efficient interoperability with OpenGL, OpenGL ES and other graphics APIs.
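
To give a rough flavor of that C99 subset, here is a trivial, purely illustrative kernel (the name vec_add and the buffers are our own invention, not part of the spec) that adds two vectors; every work-item in the parallel launch computes one element:

    // Illustrative OpenCL C kernel: element-wise vector addition.
    // __kernel marks a device entry point; __global marks buffers in device memory.
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *c)
    {
        // Each work-item handles one element, identified by its global ID.
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];
    }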

OpenCL will likely spur growth in the number of consumer apps that use GPUs for general-purpose computing, since developers should be able to avoid proprietary APIs and just use OpenCL to write software that runs on both AMD and Nvidia GPUs (and presumably Intel discrete GPUs when those come out, too).
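
To illustrate the point, the host-side setup looks the same no matter whose silicon is underneath. Here's a minimal sketch using the standard OpenCL C API, with error handling omitted and the header path assumed to be the common CL/cl.h (on Mac OS X it's OpenCL/opencl.h); the same code should pick up whichever vendor's OpenCL implementation happens to be installed:

    #include <CL/cl.h>   // OpenCL/opencl.h on Mac OS X
    #include <stdio.h>

    int main(void)
    {
        // Enumerate whatever OpenCL platform is installed (AMD, Nvidia, ...).
        cl_platform_id platform;
        cl_uint num_platforms = 0;
        clGetPlatformIDs(1, &platform, &num_platforms);
        if (num_platforms == 0) {
            printf("No OpenCL platform found\n");
            return 1;
        }

        // Grab the first GPU device the platform exposes.
        cl_device_id device;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        char name[256];
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("Running OpenCL on: %s\n", name);

        // A context and command queue are all that's needed before compiling
        // kernels with clCreateProgramWithSource / clBuildProgram.
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        return 0;
    }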

As a matter of fact, both AMD and Nvidia have announced imminent support for the new spec. AMD says its engineers have already gotten OpenCL code up and running in-house, and it plans to offer a “developer version” of the ATI Stream development toolkit with OpenCL 1.0 support in the first half of next year. The company also intends to provide a “transition path” for developers who currently use its Brook+ language.

Nvidia boasts that its CUDA architecture “was designed to natively support all parallel computing interfaces and will seamlessly run OpenCL.” That means GeForce 8-series and newer GPUs should support the new language, although Nvidia doesn’t say when developers will be able to harness that capability. Considering Nvidia got one of its VPs to be chair of the OpenCL working group, though, that might not take very long.

Finally, the OpenCL 1.0 release announcement wouldn’t be complete without a shout-out to Apple. The iPod maker’s Senior VP of Software Engineering, Bertrand Serlet, points out that Apple developed OpenCL and will use it in Mac OS X 10.6 (a.k.a. Snow Leopard). OpenCL will allow the upcoming operating system to “harness an amazing amount of computing power previously available only to graphics applications,” Serlet says.

Comments closed
    • Kamisaki
    • 11 years ago

    Yeah, just because your graphics card will be useful for other things besides gaming doesn’t mean it will have to run all the time. Just when you’re using the apps that you need it for. Obviously, the more professional stuff will appreciate it even more, but there’s good stuff for everybody.

    • herothezero
    • 11 years ago

    q[

      • adisor19
      • 11 years ago

      l[

        • tfp
        • 11 years ago

        macs *sigh*

      • ltcommander.data
      • 11 years ago

      You know the whole point of OpenCL, which is why nVidia and AMD support it, is to break the GPU away from games and allow it to be useful in other things.

      And in terms of the Mac Pro, it’s quite true that the GPU is terrible. But at the same time, I don’t think Dell, HP, or anyone else offers a workstation with dual quad core Xeons at the Mac Pro’s price that uses a 1600MHz FSB with DDR2-800 FB-DIMMs instead of the cheaper 1333MHz FSB and DDR2-667 FB-DIMMs. The Mac Pro is of course a workstation rather than a gaming machine.

    • Fighterpilot
    • 11 years ago

    If Apple is so tech savvy with this game API and they have so much money in the bank, how come they don’t go to a major game developer and have them make a killer 3D title that works on Apple computers?
    Seems like it’s one of the major dings we always hear about whenever Apple is mentioned…

    • ltcommander.data
    • 11 years ago

    Unified shaders aren’t really a prerequisite for GPGPU operation. After all, the first really popular consumer use of GPGPU was on the DX9.0c X1600, X1800, and X1900, which could do video transcoding using the original AVIVO Video Converter and were also the original GPU Folding@home clients.

    Similarly, ATI’s Close to Metal platform uses Brook+, but the original BrookGPU framework can actually generate output for most OpenGL and DX9 GPUs, including the Radeon 9700 and even the GeForce FX 5200. It all depends on how the standard is defined. It’s probably in the Khronos Group’s best interest to include as many GPUs as possible for a larger installed base. Certainly it’d be nice if ATI supported the X1xxx series, seeing as those parts have been shown to be capable of GPGPU operation.

    • Voldenuit
    • 11 years ago

    DirectX 11 (with DX11 Compute) is due out Feb 2009*. It’s a race!

    * Won’t affect me, since I’m still using xp x64, and DX11 will be Vista/Win7 only.

    • blastdoor
    • 11 years ago

    Snow Leopard may end up being a more important OS upgrade than Leopard, despite the lack of “new features”.

    So what is Microsoft’s answer to OpenCL? I can’t imagine that they would actually support this — surely they will try to roll their own thing… DirectMP or something?

      • UberGerbil
      • 11 years ago

      Compute shaders have been in the DX11 spec for over a year now (and in working CTPs/betas/whatever for a couple of quarters).

    • thinkman
    • 11 years ago

    What about writing a nice solid DX5/6/7/8 emulator for DX10(+) hardware? My HD4850 is useless for a lot of olden goldies. All sorts of compatibility problems could/should be addressed?

      • kilkennycat
      • 11 years ago

      Golden Oldies? Maybe swap over to nVidia?? I have a large selection of full 3D-action Dx “Golden Oldies” that seem to run flawlessly on even the latest nV drivers, assuming that you can encourage them to run under XP or Vista, using suitable emulators if necessary. For me, that seems to be the biggest problem, not the graphics compatibility. There are the usual few exceptions for both nVidia and ATi, like Thief1 and Thief2. For these and for my Glide games, I have an old machine with a Voodoo5 installed.

      Whether by accident or design, nVidia has consistently been pretty good about supporting Dx backward compatibility in their graphics drivers. Since I play legacy PC games as well as many of the latest titles, I have avoided the ATi route for the past several years. This is not the first report of very spotty backward-compatibility support from the ATi graphics-driver team, even to the extent of having to load one or another old driver to run specific legacy games without graphics problems. The monthly official driver updates from ATi do not leave much time for legacy QC; it seems as if ATi concentrate most of their QC on a moving group of “latest titles”.

        • thinkman
        • 11 years ago

        Thanks for the heads up. I wasn’t aware the situation was much better in the green camp. I just find it unacceptable that such a large portion of older games won’t work properly with new ATI cards & drivers. If it works in the green camp then why not in the red, right? Sloppy or lazy engineering?

          • moriz
          • 11 years ago

          it depends on the game i suppose. i’ve successfully run diablo I/II, starcraft, heroes of might and magic III, and nox on my HD4850. my only other “old” game would be guild wars, and it has run very well regardless of driver versions.

          what kind of games are you trying to run?

    • Usacomp2k3
    • 11 years ago

    So on an nVidia card you use OpenCL within CUDA? That seems like too many steps.

      • Silus
      • 11 years ago

      Just like AMD will need to use Brook for it.

        • Usacomp2k3
        • 11 years ago

        Is CUDA used for the compiling or the run-time? (or both)

          • Silus
          • 11 years ago

          CUDA will be used to translate OpenCL into what NVIDIA’s GPUs “understand”. Think of it as a high-level programming language compiler that translates the code down to bits. Same thing for Brook.

            • Usacomp2k3
            • 11 years ago

            Okie dokie. Thanks.

    • StashTheVampede
    • 11 years ago

    AMD and Nvidia had their own specs, and it took Apple to come up with one that brought them together! This is a good thing, imho; cannot wait to see the fruits of its labors.

    Bring on the Xbox70000 with a Larrabee!

      • MadManOriginal
      • 11 years ago

      Please reread the second sentence of the news article.

        • StashTheVampede
        • 11 years ago

        Why don’t YOU go read it:

        “Proposed six months ago as a draft specification by Apple, OpenCL has been developed and ratified by industry-leading companies including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel Corporation, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, TAKUMI, Texas Instruments and Umeå University. The OpenCL 1.0 specification and more details are available at http://www.khronos.org/opencl/.”

        Proposed by Apple, ratified by a standards body. Again, WHO submitted it?

          • Scrotos
          • 11 years ago

          Point for the round goes to Stash.

          • MadManOriginal
          • 11 years ago

          First off, chill.

          Second, you made it sound as if Apple is fully and entirely responsible for the standard all on their own, which is not the case. Post #19 does not say ‘proposed (by Apple then developed by a consortium)’; it says ‘it took Apple to come up with one’, as if it popped out of Apple’s labs fully formed, Athena-like. Perhaps you didn’t mean it that way, but that’s how it reads. Your quote clearly supports the former, in fact.

            • StashTheVampede
            • 11 years ago

            Wait, and how isn’t Apple a huge part of getting the OpenCL spec out?

            Microsoft didn’t come up with some similar API for multi-core CPU/GPU and submit it to a standards body. AMD and Nvidia both had their own APIs for talking to their general-purpose GPUs and had little interest in supporting someone else’s work, right? Did Sony, Sun or IBM come up with another spec and submit it for ratification?

            No.

            Apple did. A royalty-free framework that anyone can use. It’s been approved (read: those additional companies that pay to be on the board also agreed that the spec should be considered a standard, as submitted) by companies that compete against each other, showing that they think it’s a good initiative to support.

            Again, who submitted it?

            • MadManOriginal
            • 11 years ago

            I never questioned who i[

            • derFunkenstein
            • 11 years ago

            Stop treating this like Mac Report! It’s a PC website, dammit! 😉

            • StashTheVampede
            • 11 years ago

            Who am I, adi?

            All I’m saying is: when you’re enjoying your next generation game that uses all the possible “processors” you are throwing at it and the required OpenCL drivers are needed to run the game, just remember who the fun started with.

            • MadManOriginal
            • 11 years ago

            AdiTheVampede 😀

            Fair enough, saying who started the initiative is both accurate and fair. Tbh some standard would have come along eventually but the sooner the better, and I do applaud many companies working together on an i[

            • adisor19
            • 11 years ago

            Apple invented Firewire.

            That is all.

            Adi

            😉

            • MadManOriginal
            • 11 years ago

            Steve Jobs invented God.

            That is all.

            Adi

            😉

            • StashTheVampede
            • 11 years ago

            Adi is the constant, staunch Apple supporter. I’m a supporter when I really believe they deserve a good word. OpenCL is pretty cool. I’ll work to make a better WinAMP.

            • adisor19
            • 11 years ago

            I resent that. I’ve called foul on Apple in the past when such foul was deserved and not based on anti mac fanboi bashing.

            Adi

            • MadManOriginal
            • 11 years ago

            heh. It was a reference to a recent post by ssidbroadcast I think, or maybe indeego, that imitated you and was quite humorous. I don’t think you ever replied to it though. I can’t find it because there are too many news posts to sift through 🙁

            • eitje
            • 11 years ago
            • adisor19
            • 11 years ago

            Thanks, i missed that one indeed.

            Adi

            • eitje
            • 11 years ago

            i’d like to see a link to that.

            • adisor19
            • 11 years ago

            Sure thing: https://techreport.com/discussions.x/14409

            Post no. 33 on that page.

            Adi

            • eitje
            • 11 years ago

            holy crap!

            • ltcommander.data
            • 11 years ago

            I think Apple does deserve quite a bit of credit. If I’m not mistaken, Apple presented a fairly fully formed draft of OpenCL rather than having the working group start from scratch, which would have taken forever. The working group then mainly had to adapt it so that it could be implemented by all parties.

            Apple’s Aaftab Munshi seems to have headed the OpenCL working group; he previously worked on the OpenGL ES 1.1 and 2.0 specs, and Apple hired him away from ATI. It seems Apple really put some thought into putting everything in place so OpenCL could be agreed upon as quickly as possible, and no doubt held the Snow Leopard release deadline over everyone’s head. From a PR standpoint, it’s probably in everyone’s best interest to have OpenCL ready for Snow Leopard anyway, to promote it. At least it didn’t take as long to ratify as OpenGL 3.0, which I believe is still controversial.

            • End User
            • 11 years ago

            This is a tech site.

    • Shinare
    • 11 years ago

    So does this mean that I don’t have to choose nVidia to get PhysX acceleration in my games soon? My next graphics upgrade was going to the green team for that even tho AMD seems to have the better product right now.

      • stmok
      • 11 years ago

      No, it does not. PhysX is an Nvidia-only solution. For ATI to officially support PhysX, they would need to pay a royalty to Nvidia for every GPU sold. (You can see why they aren’t supporting it.)

      PhysX has nothing to do with OpenCL. It’s a completely different API.

        • Silus
        • 11 years ago

        And they have to pay a royalty to Intel to use Havok as well. And Havok is CPU-only, which makes AMD’s decision to partner with Intel even more ridiculous.

          • MadManOriginal
          • 11 years ago

          Intel would rather team with AMD, with whom they already have cross-licensing agreements going both ways, than with NV. NV is a threat to both AMD and Intel, so the two are willing to gang up against it.

    • R@jj@h
    • 11 years ago

    Does this mean that Geforce 7 or Geforce 6 will be supported aside from Geforce 8 and above?

      • Meadows
      • 11 years ago

      No.
      You mean you haven’t upgraded?

      • pogsnet
      • 11 years ago
      • Kurotetsu
      • 11 years ago

      I believe only video cards that use a unified shader architecture will work, which the GeForce 6 and 7 series don’t have.

        • R@jj@h
        • 11 years ago

        I guess Nvidia graphics cards lower than the GeForce 8 series would only support 2D acceleration, like what Adobe Reader (utilizing the graphics processor for 2D acceleration) was doing… right?

          • Forge
          • 11 years ago

          At best, yes. Most likely, nothing at all. Pre-GF8, there’s not a lot of general purpose programmable functionality to use.

          That’s why unified programmable shaders were so exciting. Now that most everyone has them, they’re going to start getting used more and more.

    • bittermann
    • 11 years ago

    I think this is great, but for the average home PC, not so much. Your video card uses enough electricity at idle, but with your graphics card chugging all day long at 30%-100% load, your electricity bill will bump up quite nicely! That’s primarily why I quit doing Folding@home: the cost. Figure an average CPU uses 65 watts max, and you can certainly multiply that by 3 or 4 running your GPU all day long. I think it will be great for scientific work, developers, graphics work, etc. Disclaimer: I’m no expert, so I could be totally wrong. 🙂

      • sdack
      • 11 years ago

      It is great for everyone. Applications like Photoshop will be able to make use of effects that previously would have been too intense to compute. Video editing will profit, too. We will even see advancements in audio compression with this extra amount of computing power. And there are the computer games of course.

    • MadManOriginal
    • 11 years ago

    q[

      • Scrotos
      • 11 years ago

      I think GLIDE is poised to make a comeback!

        • KeillRandor
        • 11 years ago

        I’ve still got a voodoo2 lying around here somewhere, so I’ll be ok 🙂

          • BiffStroganoffsky
          • 11 years ago

          Just one? You’re not ok until you have SLI!

        • deruberhanyok
        • 11 years ago

        GLIDE, psh. I think S3 MeTaL is going to be supported. 🙂

    • Prodeous
    • 11 years ago

    This is (I hope) a step in the right direction: a standard platform, no CUDA or Stream. With one standard, this might give application developers the freedom not to choose a specific platform.

    Now if only the animation packages (Cinema 4D, Max/Maya/Softimage, Blender, and others) would adopt it, this could bring profound improvements to the movie industry.

    Keeping my fingers crossed.

      • Meadows
      • 11 years ago

      That won’t happen; it would put a dent in Intel’s finances. Note that movie studios (especially fully computer-animated ones like Pixar, and not only due to their history) use Intel-based (presumably Xeon) server farms for producing films, and if they switched over to a few handfuls of nVidia or ATI dual-GPU cards to do the same, that would mean less money for the blue monster.

        • MadManOriginal
        • 11 years ago

        I don’t see the connection in your reasoning since there’s nothing to stop them (movie studios or other rendering application users) from using GPUs. You say it would cut down on Intel profits which is true assuming those places use Intel CPUs, but what’s to stop them from using graphics cards instead? The Intel mafia?

          • Silus
          • 11 years ago

          Stop what ? AMD ?
          AMD is bound by their licensing agreement with Intel in regards to Havok, much like they are with the x86 license. Since Intel owns Havok and they will eventually enter the discrete graphics card market, they gain absolutely nothing in letting AMD run Havok on their GPUs, especially since Larrabee will just be a set of x86 cores working in parallel. So, Intel will want to keep Havok as it is i.e. run on x86 cores and nothing more, unless some drastic change occurs.

          This is why I see no sense in AMD partnering with Intel, instead of NVIDIA, in terms of a Physics API. AMD makes CPUs and GPUs. PhysX runs on both GPUs and CPUs, while Havok only supports CPUs at this point…It makes no sense…

            • MadManOriginal
            • 11 years ago

            I edited my comment to make it clear that I was asking what’s stopping rendering application users from switching to GPUs. ‘Intel profits’ is a silly reason unless Intel sends out gangsters to beat up non-Intel users.

            • Silus
            • 11 years ago

            Well, I specifically mentioned Havok and PhysX for a reason. There is no physics API based on OpenCL, but there is one that runs on both CUDA and x86 cores (PhysX) and one limited to x86 thus far (Havok). Allowing AMD to run Havok on their GPUs doesn’t seem like something Intel would benefit from, especially when Larrabee is a set of x86 cores running in parallel.

            Obviously, in other types of applications, end users will choose what they want, and Intel can’t control that. But in terms of harnessing the power of their GPUs for physics calculations, AMD is limited by the “Intel mafia” (i.e., the license agreement) all right, so OpenCL is useless in that regard for them and changes nothing in this situation, which is directly related to games and the graphics card market.

            • Meadows
            • 11 years ago

            g{

            • Silus
            • 11 years ago

            Yeah, I just re-read my post @ 15 and realized something was missing that linked my point in replying to 14 back to the topic. But I think I covered that in 20 🙂

            • MadManOriginal
            • 11 years ago

            Try to answer questions about your own comments. It only makes your comments look meaningless and more troll-tastic than normal when you entirely avoid answering questions.

            • sdack
            • 11 years ago

            Only on rare occasions have these companies been seen to do something out of interest for their customers. It may have happened again.

            Am I being sarcastic?

            • Silus
            • 11 years ago

            If you can point me to one, I’ll be happy to read about it, but I’m not aware of anything like that, so I would bet on sarcasm on your part 🙂

        • Prodeous
        • 11 years ago

        First of all, some studios do use AMD CPUs. Not everyone uses Intel.

        Second, what matters most to the studios is render speed. If a GPU can do a frame in, let’s say, 1 hour versus Intel’s 5 hours, the math is simple: better movies at a fraction of the cost. And no company is that stupid.

        Now, we all might have ideas and comments on the subject, but I’ve been following the industry quite closely, since I’m trying to get into it myself.

      • kravo
      • 11 years ago

      It is already happening, I guess, since Photoshop CS4 does benefit from GPGPU. Autodesk seems to be a slower monster; they are more worried about buying up the competition…

        • Prodeous
        • 11 years ago

        From what I remember, Adobe uses OpenGL/DirectX to perform the manipulations, and it’s limited to only a few filters.

        However, it is very nice to see a big company show the power of GPUs.

          • ltcommander.data
          • 11 years ago

          I’m pretty sure Adobe’s GPU acceleration is not really true GPGPU work in the sense that you aren’t feeding in pure code and numbers to be processed on the GPU. Adobe’s implementation seems to be doing what Apple’s Core Image framework has done for OS X and applications like Aperture and Motion since 10.4 Tiger was released in 2005.

          What they are doing seems to be as you said, rendering an image onto an OpenGL surface and then manipulating it like a game would. This would explain why Adobe’s acceleration isn’t dependent on CTM or CUDA support and can work on any DX9 GPU including Intel ones. CUDA programmability is available through third-party plug-ins rather than the core Adobe apps themselves.

        • flip-mode
        • 11 years ago

        Yeah, Autodesk sucks. I hate that company. Now they’re putting the stupid Ribbon in all their products too.

        AutoCAD 2009 sucks: a slow, clunky, resource-hogging, bloated POS. Thank gord BIM is the future. Just too bad Revit is getting the Ribbon too.
