AMD counters CUDA with renewed ATI Stream initiative

Expect consumer software to start using Radeon GPUs for general-purpose computing in the very near future. AMD has announced a new round of stream processing initiatives aimed at bringing stream computing "from the desktop to the datacenter."

To start off, AMD has taken a page out of Nvidia’s marketing playbook and tied a catchy name to its general-purpose GPU computing efforts: ATI Stream. Unlike Nvidia’s CUDA, which is really a set of development tools, the ATI Stream label covers everything from GPU-based parallel computing to apps that use the graphics rendering pipeline in non-traditional ways. That means that GPU-accelerated PDF rendering and GPGPU scientific computations both fall under the ATI Stream umbrella.

On the more concrete side of the endeavor, AMD plans to release a new Catalyst graphics driver on December 10 with the Compute Abstraction Layer (CAL) runtime built in. If you read our interview with AMD’s Patti Harrell in June, you’ll know CAL essentially bridges the gap between high-level programming interfaces (like Brook+) and Radeon GPUs. Right now, AMD supplies the CAL runtime DLL to developers, who have to distribute it with their software, an iffy arrangement since new driver releases could break compatibility with old CAL DLLs. From the Catalyst 8.12 release onward, AMD will ship the CAL library with the graphics drivers themselves.
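
For the curious, a Brook+ kernel is essentially a C function that operates on streams of data; the Brook+ compiler turns it into AMD's intermediate language, and the CAL runtime compiles and dispatches it to the GPU. A minimal, illustrative sketch (the kernel itself is ours, and the host-side code that sets up the streams is omitted) looks something like this:

    // Brook+ kernel sketch: the body runs once per output stream element
    // on the GPU's stream processors. The scalar 'a' is a constant argument;
    // x, y, and result are streams.
    kernel void saxpy(float a, float x<>, float y<>, out float result<>)
    {
        result = a * x + y;
    }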

Instead of waiting for third parties to take advantage of these updates and create consumer GPGPU apps for Radeon users, AMD will offer one together with the new Catalyst release. As far as we can tell, the new Avivo Video Converter will more or less mirror the functionality of the Elemental Badaboom video transcoder we wrote about recently, but with two key differences: it’ll run on AMD GPUs, and it won’t cost $30. While AMD hasn’t given us a full list of supported input and output formats yet, the firm’s presentation mentioned GPGPU-accelerated transcoding of MPEG-2 and 1080p H.264 video for use on handheld players, DVDs, and other devices. In the company’s own benchmarks, the encoder cut HD video transcoding times from three hours to just 12 minutes on a system with a 2.6GHz Phenom X4 and a Radeon HD 4850. There’s a catch, however—you’ll need a Radeon HD 4600- or 4800-series GPU to use the tool.

The upcoming Avivo Video Converter. Source: AMD.

A third-party alternative will follow in the first quarter of next year. CyberLink plans to release an update for PowerDirector 7 that will go "beyond ATI Avivo Converter capabilities with multiple HD videos transcoding" and offer "enhanced video editing, effects creation, playback and transfer," in AMD’s words.

Simultaneously with the Catalyst 8.12 driver release, AMD expects to introduce version 1.3 of its Stream SDK to developers. The new-and-improved development toolkit will bring "significant performance enhancements" to the Brook+ high-level programming language, and it will support Radeon HD 4350, 4550, and 4600 graphics cards plus a new high-performance computing card—the AMD FireStream 9270.

Due out later this quarter at $1,499, the FireStream 9270 will have 800 SPs, a 750MHz core speed, 2GB of 850MHz GDDR5 RAM, and typical power consumption of 160W. AMD says this card will be able to hit 1.2 teraFLOPS of single-precision computing power and 240 gigaFLOPS of double-precision power. Judging by the GPU specs, the FireStream 9270 may well be a souped-up flavor of the Radeon HD 4870.
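
As a sanity check, those figures line up with the listed specs if each stream processor issues one single-precision multiply-add (two FLOPs) per clock, as on the RV770, with double precision running at one fifth that rate:

    800 SPs × 2 FLOPs/clock × 0.75GHz = 1,200 GFLOPS single precision
    1,200 GFLOPS ÷ 5 = 240 GFLOPS double precision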

Comments closed
    • darryl
    • 11 years ago

    can someone interpret this article for me? In the near future I hope to purchase a new video card, as part of a new system build. It’s purely for gaming. How would Stream technology benefit sims that are generally considered CPU-intensive? I’m sure I’m not understanding, but does it mean that buying a 4xxx gpu and updating to the new driver will ease the cpu burden by offloading physics calculations to the GPU on a home-built PC?
    tia for helping an ignoramus understand

    • SpotTheCat
    • 11 years ago

    there are artifacts on the jpg!

      • indeego
      • 11 years ago

      pretty much true for every jpg, amirite

    • MadManOriginal
    • 11 years ago

    Scott, you need to bug AMD for a preview of the AVIVO app. The quality of the app and of its output will be key in my upcoming dual- versus quad-core CPU decision.

      • sigher
      • 11 years ago

      Here’s hoping for an ‘advanced’ button with some decent settings options eh.

    • MadManOriginal
    • 11 years ago

    Frickin’ sweet. I was recently looking at video cards and was torn between a 9800GT and a 4850. I’d had a 4850 before but returned it, so I knew it was good, and the price/performance of the two scaled almost exactly with their prices. I knew the 4850 had better performance, but CUDA apps, though limited, were an advantage for NV. I ended up getting a good deal on a 4850, so this news is great 🙂

    Related to that is CPU choice, where I’ve been going back and forth between a q9550 and an e8400. The price isn’t an issue since I’d keep it for a while, and quad core would have its uses, but the decision was tough because of the small enclosure I want to use and the limited cooling options. I was leaning toward the q9550, which I would just oc when needed, but if this is good at release (better than Badaboom) the advantage will be back to the e8400.

    Now the million dollar question is how good the conversion and compression quality is. If it’s Badaboom-like, I’ll be disappointed. If it’s somewhat more open, maybe with plug-in options for various command-line or open-source encoders, or if the built-in encoder is good, I will be very pleased. In fact, if it has plug-ins of some sort and support for lots of audio AND video formats, I would say it will quickly surpass NV’s current options.

    • thermistor
    • 11 years ago

    OK, OK, here’s a scenario question…if we are offloading compute-intensive tasks like video rendering, photo filters, etc. to GPGPU (CUDA and/or Stream) in the future to do heavy lifting….and GPGPU processing is so much faster than using an x86 processor to do the same thing, is the following possible:

    General-purpose add-in graphics (general processing cards) will become as important, if not more so, than having a fast x86 CPU? In the future will everyone buy a Celeron just to boot Windows and then let the GPU do all the compute-intensive stuff? I don’t mean enthusiasts…just the general way HP/Dell/etc. will configure off-the-shelf boxes?

      • jon_lui
      • 11 years ago

      I don’t think so, at least not in the near future. I’m not an expert at this, but from what I know, x86 is an industry standard and there have been millions of programs already written for it, with more to come. On the other hand, Nvidia and AMD graphics cards each require their own unique programming methods. Therefore, each application will have to be written twice if both companies continue to exist, or people who have Radeons can’t use any software that is written for GeForces. Furthermore, GPUs are only really good at parallel processing, while CPUs are good at many things but don’t excel at any one of them.

      • holophrastic
      • 11 years ago

      No. It’s the same stop-gap experimental measure that every technology goes through during every growth cycle.

      GPUs differ from CPUs in really only two ways: they play with fewer data types — i.e. floating points — and they do hardware-based manipulations — e.g. lighting calculations. GPUs are specialized hardware, just like math co-processors were in the 486 era, physics cards are now, and camera-based input will be in five years. Think of every add-in card that ever became a chip.

      It’s simply that until a technology is mature enough to become a part of the CPU, no one would dare risk ruining the CPU with it. The CPU is far too general-purpose to be used for specialized ideas, at least until those ideas become popular enough that they are, in effect, general.

      The whole GPGPU world is this era’s math co-processor. The day that it becomes popular, standardized, and generally mature, it’ll become MMX — on the CPU.

      Right now, the technology is still up-in-the-air, and no one knows if it’ll stick around long enough to matter.

        • Scrotos
        • 11 years ago

        I agree. I believe AMD’s Fusion or Torrenza or whatever is an attempt to get a 4-core CPU and make one of the cores a GPU, so you’d have a 3-core CPU with built-in GPU. Like the analogy given, it’d be like this generation’s math coprocessor. In that case, you wouldn’t bother with a celeron and GPU, it’d all be built into one die.

        • FubbHead
        • 11 years ago

        Well, as far as I can tell, the Cell is already there.

    • Forge
    • 11 years ago

    Avivo Video Converter is NOT NEW. I’ve seen it out and running since roughly the 4850 launch, and I saw betas of something similar back around the 38X0 card launches. This may be the first time AMD is packaging it up and pushing it out the door with fanfare, but it’s far from a new product.

    Also, much like Badaboom, the range of supported input/output formats is depressingly low, at least on the previous versions. Basically if you wanted to take a DVD and make an iPod compatible MP4 file, this could do it, and do it really fast, but the encode quality and file size were far inferior to software encoders.

      • Scrotos
      • 11 years ago

      Perhaps he means it’s a new version of the video converter, not that the application itself is a brand new application. And the GPU support is probably new.

      • MadManOriginal
      • 11 years ago

      I was excited when I first read about Badaboom, but then Anandtech’s in-depth review of the release version turned me off, mainly for the quality reasons you cite. I understand lots of people convert video for mobile devices, but that doesn’t interest me. I want the highest quality first; if I want to make a mediocre version quickly, that’s always an option later. Why Badaboom is poor when there are good open-source encoders like x264 is hard to understand.

    • StashTheVampede
    • 11 years ago

    The “write to the general-purpose GPU” idea is picking up some steam! Unfortunately, you’re still picking between red and green for which one you support, and soon you’ll be picking from blue as well.

    I’m hoping OpenCL will be adopted, soon.

      • UberGerbil
      • 11 years ago

      OpenCL, Compute Shaders in DX11, and several 3rd party efforts (Brook, Shallows, etc) — this is like the early days of 3D, with GLIDE and other vendor-specific libraries out to an early lead, to be eventually supplanted by more general libraries from independent (ie non-GPU) vendors. Intel might even generalize Ct to work with GPUs from nVidia and AMD, just to get people to write to it (with the idea that it’ll also get a boost from their massively multicore procs in the future).

      Having a single standard isn’t as important as having something that works with any hardware out there. Of course, because the hardware vendors have to write drivers of one sort or another, and they want to do that as little as possible, we’ll probably end up with just a couple of standards, much as we have today with DX and OpenGL.

        • Sargent Duck
        • 11 years ago

        Enter Microsoft?

        • sigher
        • 11 years ago

        “more general libraries from independent (ie non-GPU) vendors”
        Good luck with that since they won’t release enough detail about their hardware to make it possible.

      • MadManOriginal
      • 11 years ago

      Considering how much these offload from the CPU, a program that could translate between the standards using the CPU could do well, regardless of OpenCL. Or is that what OpenCL does, in effect? I’ll have to look it up…

    • designerfx
    • 11 years ago

    not just that, but Nvidia is losing market share little by little. This should do enough to bring AMD to the 50% mark and start pushing for proper support and stuff, instead of all this “designed to play on Nvidia” trash.

      • Meadows
      • 11 years ago

      Valve and Blizzard already struck deals with AMD (ATI is marketed with WoW and vice versa, and Half-Life 2 episodes show a “plays best on ATI” logo in the advanced settings menu), which is not much, but they’re very big “players” in the field with a lot of gamers behind them.

        • TheEmrys
        • 11 years ago

        I’m suddenly realizing how long HL2 has been around. It was a coupon with the 9600 (pro?) and 9800XT. I’m glad they updated the engine with TF2.

          • arsenhazzard
          • 11 years ago

          Those coupons were around about a year before HL2 was released. It got delayed and was finally released around the time of the x800 series. They’ve also been doing incremental updates to the engine since the release, with the larger updates coinciding with the release of Episodes 1 and 2.

    • Zymergy
    • 11 years ago

    So….. How long until a new “ATI Stream” F@H client is out??
    (And will the ATI 4000 series now compete with the Nvidia GeForce 200 series CUDA F@H client?)

      • Forge
      • 11 years ago

      No, and this is F@H’s ‘fault’, not ATI’s. Nvidia has a small number of complex and very highly clocked SPs. ATI has a large number of lower clocked and less complex SPs. With all the current data, the number of atoms being simulated is less than the number of Nvidia SPs. F@H would like to do more and more atoms as time goes by, but hasn’t done much work in that direction so far. As the number of atoms simulated rises, first Nvidia cards will have to loop back, cutting their speed in half, and then they may have to go to a third or fourth pass per frame, cutting speed even more.

      ATI can simulate lots more atoms in one pass than NV can. NV can loop back to complete a pass, but that hurts speed. ATI won’t need to.

      F@H wants to increase atom count moving forward. Things will change then, but not until.

      There’s nothing that AMD needs to be doing that they haven’t. There’s mild grumbling that Brook+ isn’t as nice to develop to as CUDA, but it’s mild grumbling, and NO ONE is pinning the F@H performance disparity to it.

    • flip-mode
    • 11 years ago

    OMG thank you thank you AMD for the transcoding software.

    Now me’s 4850 has use beyond gaming, and there is even less incentive to upgrade my CPU.

      • shank15217
      • 11 years ago

      Lol u and your old ass cpu! Come on man get a slow quad core or something!

      • l33t-g4m3r
      • 11 years ago

      yup. have a 4870 with my ol opteron 175 @ 2.5, and 4 GB ddr500.
      I don’t see upgrading my system to be a necessity for another year or so.
      Fast quad cores should be cheaper by then, too.

        • moose17145
        • 11 years ago

        Ha, wish I had your CPU. I’m still on a P4 paired with 3GB DDR400 and a 512MB AGP 3850! I’m hoping AMD releases a Hotfix of these drivers for us AGPers still out there. And no, I am not gonna be upgrading / doing a new build anytime soon. Between school, student loan payments, and my girlfriend, I simply don’t have the money.

        Personally, what shocks me is how well an older CPU can really perform when it’s paired with up-to-date hardware.

          • MadManOriginal
          • 11 years ago

          Overclocked Northwoods didn’t get the respect they deserved once A64s came out. Is yours a NW?

            • moose17145
            • 11 years ago

            That it is. 3.2GHz Northwood on the good ol’ socket 478 with an 875P Northbridge on an Abit IC7-G mobo. I have to admit the Northwood P4s really have been about the best workhorses I have ever seen.

            • ludi
            • 11 years ago

            Ayep, still got a system that started as a 1.8A/400 and later upgraded to a 2.66/533. 2GB of RAM and I finally put an X800 Pro in it not too long ago. Presently running in my hobby workshop, and not as nice as my main Opteron180/8800GT desktop, but I could live peaceably with it if something happened to my main system.

    • sweatshopking
    • 11 years ago

    why, why why did i buy an 8800gt?

      • Jigar
      • 11 years ago

      Don’t feel bad, you are not the only one. 😉

      • Creamsteak
      • 11 years ago

      Depending on when you bought it, it might have been the right choice at the right price at the right time.

      • ludi
      • 11 years ago

      Yeah, I bought one too, and then the 4850 and 4870 were launched just two months later.

      That’s okay, it was the right card at the time.

      • ssidbroadcast
      • 11 years ago

      Because of its great performance:price ratio?

    • wingless
    • 11 years ago

    This 4870 of mine has really turned out to be a great purchase.

      • Faiakes
      • 11 years ago

      This is good news, especially if the codec support is wide.

      And as Meadows says…it won’t cost anything.

      This may well turn out to be a major selling point for ATI cards.

    • Meadows
    • 11 years ago

      • Jigar
      • 11 years ago

      Mirror my thoughts… This should give some boost to AMD card sales.

      • sdack
      • 11 years ago

      Are you saying people who make a living off software have no brain?

        • Sargent Duck
        • 11 years ago

        Just Nvidia…

        • Kurotetsu
        • 11 years ago

        When you’re trying to push a new standard for software development (GPGPU), but then go ahead and charge everyone a fee for applications using the new standard, you don’t have a brain.

        “Here’s an awesome, brand spanking new thing that all of you should be using! Oh, by the way, we’ll be charging you a fee before we actually let you see this brand spanking new thing in action.”

        When you’re trying to push a new standard for software development and give out applications for free so everyone can see the enormous potential this standard provides, you have a brain.

        “Here’s an awesome, brand spanking new thing that all of you should be using! Here’s a bunch of free samples to show just HOW awesome this new thing is!”

        Which approach do you think will actually work?

          • Silus
          • 11 years ago

          I hope you understand that those you say “have a brain” are doing it because they are wayyy behind on the GPGPU bandwagon and need something to level the playing field. Much like they needed low prices with their new graphics cards, since they had been losing market share badly in that sector for two generations.

          It’s all about market share and making one product more appealing than the other. Since their timing is very poor, they need to appeal to users with something, and this is it.

            • Kurotetsu
            • 11 years ago

            In other words, they have a brain and are using it well. Thanks for confirming.

            • Silus
            • 11 years ago

            Not a very good way to miss the point, especially when we are talking about “using the brain” and whatnot…but suit yourself.

            • moriz
            • 11 years ago

            pay $30 to use something… or wait a bit and use something similar for free…

            i don’t know about you, but i’ll go with option 2. especially since i happen to own one of those cards.

            • Silus
            • 11 years ago

            Read my post in #11 and you’ll understand my point.

            The choice between “pay for a product to do something” and “not pay to do the same thing” is not in question here. Obviously everyone prefers to get stuff for free.
            The point is why they need to do it that way, and it’s not out of the goodness of their hearts, as the fanboys think it is. I should know: as a software developer, I don’t want to give my software away for free unless it gets me the attention I need to shift interest away from a feature one of my competitors has had for a long time now.

            • Anomymous Gerbil
            • 11 years ago

            Hmm, now I think *you’re* missing the point. Software developers sell software, and need to charge for it. Nvidia and ATI sell hardware, and offering the associated software for free is a valid business model. Trying to compare the two models isn’t really valid.

            • SPOOFE
            • 11 years ago

            Charging for the associated software is also a valid business model. You seem completely hung up on this “free” versus “not free” thing, and he’s trying to tell you that it’s just one factor out of many. You can not ignore the fact that nVidia’s been pushing their CUDA tools for a lot longer and harder than ATI’s been using stream processing (Folding notwithstanding).

            In essence, ATI’s offering is free because nVidia’s isn’t.

            • Sahrin
            • 11 years ago

            Your post doesn’t make any sense because you’re mixing economic reasons with emotional ones. You’re saying “ATI is duplicating nVidia’s efforts, and because they are late the price is lower.”

            That’s not the case; ATI had to invest the same amount in the initiative that nVidia did, so they didn’t ‘save’ anything. They simply chose to ‘roll out’ their product in a different manner.

            ‘Who got there first’ would only matter if nVidia had saturated and controlled the market, something that was economically impossible as a result of their decision to charge $30. So not only did nVidia fail to gain any significant advantage by releasing early, they are also now competing with a price-superior product. nVidia has lost this round.

            I usually don’t put ‘losing’ and ‘brainy’ in the same category; but to each his own.

            • sigher
            • 11 years ago

            Isn’t the application that does the transcoding on nvidia hardware from a company completely separate from nvidia? I certainly agree nvidia should release some applications, but they are nvidia and not other software companies, unless they buy the company that made that application. I wouldn’t, though, because it looks awful and has very limited capabilities.

            • Kurotetsu
            • 11 years ago

            Yes, ATI might only be doing this because they are behind the curve (it’s possible they always meant to do this). BUT, it’s a very good way to play catch-up considering what both Nvidia and ATI are trying to do with GPGPU development. Charging people a fee to demonstrate a new technology is never a good way to convince them to use it.

            • Silus
            • 11 years ago

            Exactly. Now you got my point 🙂

            • Silus
            • 11 years ago

            Replying to your edit, because you seem to have missed how almost all of NVIDIA’s offerings in the same performance sector cost LESS than or the same as ATI’s cards. So I would say NVIDIA learned their lesson alright.

            • Kurotetsu
            • 11 years ago

            Whatever they learned apparently hasn’t trickled down to GPGPU applications.

            • Silus
            • 11 years ago

            Badaboom is not made by NVIDIA…so I’m missing your point…

            • Kurotetsu
            • 11 years ago

            Oops, good point.

            • Sargent Duck
            • 11 years ago

            I’m reminded of the browser wars here. IE was very much behind Netscape, but it caught up and eventually overtook it because it was free.

            Now granted, Badaboom is a third-party app that has every right to charge for its product. If Nvidia was smart (ie, had a brain), they would license the software and release Badaboom for free in their next driver release.

            As it is, this just made ATI a whole lot more attractive to many.

            • Sahrin
            • 11 years ago

            What? ATI had the first major GPGPU software package in wide release – Folding@Home. ATI didn’t ‘get to the party late’; they ‘partied’ with CTM, a machine-level interface, unlike CUDA, which is a development kit. nVidia doesn’t expose machine-level tools to developers; they control the API, which gives developers less control over the final product. ATI is now duplicating nVidia’s CUDA efforts by releasing their own high-level development kit, but this doesn’t mean ATI is ‘behind the wagon’ in any regard.

            • SPOOFE
            • 11 years ago

            Yes, ATI’s offering started and ended with Folding. They haven’t done anything with it since, even though they began talking about it first.

            • Sahrin
            • 11 years ago

            That’s because ATI doesn’t do the dev work; the customer does. nVidia is paying a lot of money to write applications, and because no one has any use for them, they are either a) charging consumers or b) trying to create good pub with them.

            ATI’s solutions are better because the end user (usually researchers) gets to custom-design their solution to the problem they are trying to solve; nVidia is shoving an SDK down their throats that doesn’t necessarily address their needs. Now in the consumer space, the focus is different, so you do need an SDK; but ATI now offers the same functionality as nVidia for a better price.

            • SPOOFE
            • 11 years ago

            If it’s not ATI’s work, then why are you crediting ATI for the work? I don’t think you’ve devoted an awful lot of thought to this.

          • sdack
          • 11 years ago

          That is one way to see it, but the way I see it, it means the development costs have to be paid by every single customer whether they need the software or not. Do you like paying for stuff you do not need?

          Microsoft got sued over the integration of their web browser into the OS because it made competition impossible. Is ATI then making it possible for software companies to compete when they give away free tools?

          Let the libraries and toolkits be free, but do not do the jobs that others want to do to make a living. It will scare off third party companies instead of supporting them.

      • FubbHead
      • 11 years ago

      ATI has a good history of doing that with new features. Although I guess third-party developers (or at least whoever gets the chance to take advantage of the opportunity) like NV’s approach better.
