AMD reveals Radeon R9 290X with TrueAudio, Battlefield 4 bundle

After sorting out some difficulties with its live stream earlier today, AMD released a torrent of new information on its upcoming graphics products. Scott live blogged the event, and he’s learning even more about AMD’s latest GPU technology as I write this. I’m sure he’ll have more to share with us soon. In the meantime, here’s a quick recap of the graphics cards announced at the event.

The big daddy is the Radeon R9 290X, whose Hawaii GPU boasts a whopping six billion transistors. Here’s another big number: five teraflops, which is what AMD quoted for the 290X’s compute capability. The chip can push four billion triangles per second, and its memory interface has over 300GB/s of bandwidth. The card has 4GB of RAM, suggesting a 512-bit path to memory.

The R9’s graphics architecture is based on an enhanced version of what will ship inside the Xbone and PlayStation 4. DirectX 11.2 support is on the menu, but we don’t have many other details right now. AMD didn’t say when the Radeon R9 290X will ship or how much it will cost. However, you’ll be able to pre-order a special Battlefield 4 edition of the card starting October 3. The BF4 bundle will be available in limited quantities and only from select partners.

Hawaii is limited to the R9 290X, but there are other members of the R9 series that appear to be based on older silicon. The Radeon R9 280X will be priced at $299 and will come with 3GB of RAM. It looks like it might use the Tahiti silicon familiar from the Radeon HD 7900 series. Then there’s the Radeon R9 270X, which will be a 2GB card priced at $199.

If you have a stricter budget, there are a couple of cards in the Radeon R7 series. The Radeon R7 260X appears to be based on the Bonaire chip that currently powers the Radeon HD 7790. It’s slated to come with 2GB of GDDR5 memory and to ring in at $139. For under $89, you can look forward to the R7 250, which probably isn’t much to look forward to at all. At least it comes with a gig of GDDR5 RAM.

Interestingly, the R9 cards and the R7 260X will include TrueAudio, a DSP-powered audio solution that promises better surround sound for PC games. TrueAudio is supposed to allow developers to easily integrate rich positional audio effects in games. It will also provide surround virtualization for stereo output. There’s a lot to the audio tech that I won’t get into here, but it looks like an interesting addition, especially if AMD has deployed similar tech in next-gen consoles.

The other item of note is Mantle, an API that provides lower-level access to the GPU. AMD has been working on Mantle with the Battlefield 4 developers, and it claims the API improves graphics performance and better exploits multi-core CPUs. AMD will share more about Mantle soon, and you’ll be able to download a version of Battlefield 4 that takes advantage of the API in December.
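
Mantle's actual programming interface hasn't been published, but the multi-core pitch maps onto a familiar idea. Here's a rough C++ sketch of the general shape of thin, command-list-style APIs: worker threads record their own lists of draw commands, and one submission step hands the whole frame to the driver, rather than funneling everything through a single immediate context. The types and the commented-out submit() below are made-up stand-ins for illustration, not anything AMD has shown.

// Conceptual sketch only: DrawCommand, CommandList, and submit() are made-up
// stand-ins, not Mantle's real API (which AMD has not documented publicly yet).
#include <cstdint>
#include <thread>
#include <vector>

struct DrawCommand {            // one self-contained piece of GPU work
    uint32_t meshId;
    uint32_t materialId;
};

using CommandList = std::vector<DrawCommand>;

// Each worker records its own command list with no shared lock; that's the part
// a single-threaded "immediate context" model has trouble spreading across cores.
CommandList recordChunk(uint32_t firstMesh, uint32_t count) {
    CommandList list;
    list.reserve(count);
    for (uint32_t i = 0; i < count; ++i)
        list.push_back({firstMesh + i, 0u});
    return list;
}

int main() {
    const uint32_t kMeshes  = 8000;
    const uint32_t kThreads = 4;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    for (uint32_t t = 0; t < kThreads; ++t)         // record in parallel...
        workers.emplace_back([&lists, t, kMeshes, kThreads] {
            lists[t] = recordChunk(t * (kMeshes / kThreads), kMeshes / kThreads);
        });
    for (auto& w : workers) w.join();

    CommandList frame;                              // ...then submit in one ordered pass
    for (const auto& l : lists)
        frame.insert(frame.end(), l.begin(), l.end());
    // submit(frame);  // stand-in for whatever the real API's submission call is
}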

Comments closed
    • Fighterpilot
    • 6 years ago

    R9 290X looks to be an exciting new card; looking forward to TR benchmarking it.
    The Mantle programming API for GCN based cards has made a big splash around the tech world.
    Let’s hope the low level access it gives brings some fat performance advantages over just running DirectX 11.
    The reaction from nvidia fans has been amusing so far 🙂

    • Leeb2013
    • 6 years ago

    and just in case you were wondering what a $400 pair of HD 7950 cards would perform like:

    [url<]http://www.3dmark.com/3dm/1299210[/url<]

    Graphics score: 14554. (A single 7950 scored 7805.)

    I wonder if the $600-700 they'll be asking for the R9 290X will be worth it for 7950 performance.

      • anotherengineer
      • 6 years ago

      There are many other features besides gaming & 3DMark to base card-purchasing criteria on.

      Such as deinterlacing performance, AF performance (filter test), power-saving features, quality of cooler (noise), hardware decoder, version of HDMI/DisplayPort, multi-monitor support, connectivity, compute & tessellation performance, etc. etc.

      [url<]http://www.rage3d.com/reviews/video/amd_hd6970_hd6950_launch_review/index.php?p=4[/url<]

      • Johnny Rotten
      • 6 years ago

      Yes, Leeb, because clearly AMD’s next-gen flagship card/chip is only as fast as a 7950. That obviously makes the most sense.

    • Leeb2013
    • 6 years ago

    I’ve just run Firestrike performance settings (as the AMD chart above states – see the little * in the bottom left corner).

    My stock HD7950 ($202) scores a 7494 graphics score.
    [url<]http://www.3dmark.com/3dm/1299091[/url<]

    Overclocked to 1000MHz (it'll easily reach 1200MHz), it scores 7805. These scores make it nearly the same as the R9 290X. All I can think is that the chart above includes the CPU score and they've used an Intel Extreme CPU or AMD FX-9590. My combined score with Physics included (purely CPU) using my i5-3570K @ 4.4GHz is 6803.

    Either way, these charts aren't entirely convincing, to say the least. Last time I looked, the Titan, GTX780, and HD7970 were noticeably faster than my 7950. How can they say the R9 290X will beat the Titan, when it barely beats the current 7950?!!!!

      • chuckula
      • 6 years ago

      I’ll wait for the TR benchmarks before I make a final judgment on performance, but it is curious that the only benchmark that AMD cites doesn’t appear to be all that favorable to their new card.

      If this card is really that amazing, and if it is actually launching next month, then there’s nothing that Nvidia could do in a practical manner to one-up it. That means the complete lack of even a few cherry-picked benchmarks isn’t overly encouraging. I still stand by my earlier prediction that the R9-290x high-end card will end up between the GTX-780 and Titan (likely closer to Titan), but the TR review and info from other unbiased sites are the final arbiters.

    • BIF
    • 6 years ago

    So how does the 290 compare to the 7970 (for folding)?

    And will there be a dual GPU version of this (for folding)?

    • kamikaziechameleon
    • 6 years ago

    I hate these kinds of announcements. GUESS WHAT!? WE ARE MAKING THINGS THAT WILL COST YOU MONEY AND THEY ARE COMING EVENTUALLY! <looks for nod of approval from the board> THANK YOU FOR COMING AND HAVE A GOOD NIGHT!

    I’ve seen leaks 12 months out that had more relevant info in them.

    • torquer
    • 6 years ago

    Is anyone else as amused as I am that after all of these years, fanboyism is still so alive and well on both sides? Almost 12 years since the release of the 9700 Pro, the first ATI card to really challenge Nvidia, and nothing has really changed as far as the enthusiast mindset goes.

    It’s sometimes amusing, often annoying, but still strangely comforting that some things never change.

    • kamikaziechameleon
    • 6 years ago

    There was a whole thing about this coming out but realistically very little information is flowing about these things.

    • Chrispy_
    • 6 years ago

    I wonder if the stock cooler is any good.

    Whilst the aftermarket solutions are generally fine and plentiful, there is a shortage on the AMD side of fully-exhausting blowers worth getting for those using them in mITX builds (or just people who don’t want their case heated up).

      • internetsandman
      • 6 years ago

      If you want a high-powered ITX rig, an Nvidia card with a Titan-style cooler seems the best way to go. Yay for competition

    • Wirko
    • 6 years ago

    A new volcanic island pops up from nowhere in Pakistan just a day before AMD’s launch, and nobody takes note?

    Unfortunately (and accompanied by other very sad news from Pakistan), it’s been classified as a “mud volcano”.

    • WarnerYoung
    • 6 years ago

    The first thing I thought of was the old MeTaL API from S3. Raja Koduri was one of the lead people behind that, too.

    • TrptJim
    • 6 years ago

    Will these new cards incorporate hardware-based frame pacing, like what Nvidia has?

    • albundy
    • 6 years ago

    i cant wait til i start playing with my Xbone!

    • DPete27
    • 6 years ago

    So…we’ve got Hawaii GPU for high end, Tahiti for mid-high, Bonaire for mid-low end…..that’s not confusing at all.

      • cynan
      • 6 years ago

      What about Pitcairn? Wouldn’t that be the R270? I suppose it could be a cut-down version of Tahiti, as in the Tahiti LE products we’ve seen.

        • DPete27
        • 6 years ago

        My biggest gripe is that this lineup of AMD GPUs will use three generations (if you consider Bonaire a generation) of silicon.

    • OneArmedScissor
    • 6 years ago

    Further evidence that 8 core consoles ≠ 8 core PC games.

    There goes audio! Mantle may take even more away from the CPU.

    What’s next? GPUs integrate a specialized core to avoid hopping between the primary CPU, PCIe bus, and system RAM?

    Monstrous console chips are effectively GPUs with x86 bolted on. Everything from ARM to “big” core chips is being swallowed by integrated GPUs. Hybrid “GPUs” like Xeon Phi merge everything. Nvidia’s Volta will have stacked RAM.

    The lines are blurring and even GPUs are becoming SoCs.

      • dragontamer5788
      • 6 years ago

      Becoming SoCs? Or becoming part of SoCs?

      • Krogoth
      • 6 years ago

      It’s not surprising at all, since the Xbox One’s CPU is based on the successor to the Bobcat family, which was built around power efficiency and ultra-low-voltage setups. It is kinda unrealistic to expect it to perform as well as the Bloomfields (first-generation i7) of yesteryear.

      Microsoft chose it because it did not want to repeat the thermal issues and RROD nightmare that plagued the earlier generations of the 360.

    • deruberhanyok
    • 6 years ago

    Mantle sounds interesting. Anand mentioned in his liveblog that it made him think of Glide. For those who aren’t old enough to remember the pre-DirectX gaming heyday, Glide was an API created by 3dfx to take advantage of their Voodoo graphics cards. Typically when you played a game you would select whether you wanted to use software rendering or Glide. Glide was obviously 3d accelerated, worked with higher resolutions, etc, etc. It was a night and day difference for us.

    Eventually other companies followed suit with their own APIs. S3 had “Metal”, some of the features of which eventually made it into Direct3D (texture compression comes to mind), and I want to say there was at least one other but I’m old and my back hurts, so I can’t remember the 90s as clearly as I’d like. 🙂

    A lot of that changed when the first GeForce came out, anyways, but for a time, it wasn’t odd to be able to pick between Direct3D, OpenGL, Glide, Metal, or whatever other renderers while configuring a game. I expect Mantle will be the same, and I’m especially interested to see how the “cross-platform” aspect of it works.

    At any rate, this will probably be handled much like Glide – there will be comparisons between Direct3D and Mantle performance, and if Mantle really does open up that much more performance on a system, well, then everyone with recent AMD video cards will be really happy to be able to use it. And maybe NVIDIA will whip up a CUDA rendering pipeline to compete with it.

    Direct3D and OpenGL still have to exist for general purpose compatibility, but this is AMD basically saying “if you have supported hardware, we’re giving you an option to get better performance out of it by bypassing those standards.”

      • Goty
      • 6 years ago

      I miss my Voodoo3 2000 and Tribes running in Glide…

      • Lans
      • 6 years ago

      I don’t think comparing Glide to Direct3D/OpenGL is like comparing Mantle to Direct3D/OpenGL/DirectCompute/OpenCL/etc., because AMD has something else going for them. And that is consoles… If AMD has Mantle for consoles and PC, then all of a sudden porting games to the PC becomes that much easier. At least that is what I think is going to happen.

      • slaimus
      • 6 years ago

      Nvidia already has this:

      [url<]https://developer.nvidia.com/nvapi[/url<]

      • Kaleid
      • 6 years ago

      I remember glide being much faster than d3d, which was incredibly sluggish, with a very inconsistent framerate.

    • tbone8ty
    • 6 years ago

    Why didn’t they reveal specs and price?

      • derFunkenstein
      • 6 years ago

      There were NDA sessions later in the day.

    • slaimus
    • 6 years ago

    I was hoping for H.265, or at least 10-bit H.264, hardware decode support. Looks like it will take another generation of chips for that to happen.

    • jessterman21
    • 6 years ago

    Streamlined naming scheme my arse – still two unnecessary characters in each name. What’s the 2 designate? Why slap an X on the end like it’s 1999?

      • jihadjoe
      • 6 years ago

      And still makes no sense.

    • jdaven
    • 6 years ago

    I’d bet the main purpose of the R7 250 is to provide a crossfire pairing with the IGP in the upcoming Kaveri APU.

    • steelcity_ballin
    • 6 years ago

    My 560TI is getting long in the tooth. It’s still a capable performer, but for $200 I can justify a major upgrade. I like seeing this type of hype from the red camp; it’s been a while. Can’t wait for benchmarks that aren’t 3DMark.

    • chuckula
    • 6 years ago

    Here’s a thought: If it is actually true and not marketing hyperbole that Mantle allows for 9x the GPU calls vs. regular APIs then at least one of the following absolutely has to be true because the laws of physics aren’t dictated by fanboys:

    1. Issuing 9x the GPU calls causes no or minimal increase in power consumption for the GPU. This means that AMD’s current GPUs are being utilized at only about 11% capacity, and AMD’s idle GPU power consumption is absolutely atrocious because an 11% GPU load has about the same power draw as a 100% GPU load.

    2. Issuing 9x the GPU calls is going to cause a *massive* uptick in power consumption because AMD’s current GPUs are only being utilized at 11% capacity, and when the rest of the silicon is actually turned on, you’ll see some big jumps in power consumption (not necessarily 9x BTW, but even a 50% increase would be noticeable).

    Of course there is a third option… that the whole “9x” thing is a marketing buzzword likely derived from a meaningless microbenchmark that has little to do with real gaming performance… but we’d never *ever* see marketers make stuff up for the fanboys… would we?

      • HisDivineOrder
      • 6 years ago

      Nope, never. Marketing and AMD would never make something up to get people worked up, then show up with far less performance than promised and shrug post-release after the sales were made. That’d have all the subtlety of a bulldozer and I’m sure they’d never do that.

      Like ever.

      • Mr. Eco
      • 6 years ago

      It probably is marketing hyperbole, just like with WiFi speeds. That does not mean there is no truth in it.
      Even only 3x the performance would be great; imagine if Mantle provides only 2x the performance – would that not be enough?

      Let’s see in a month; it will show in the benchmarks. Until then, no need to act like a fanboy and deny everything.

      Edit: Oops, to make myself clear – I care about performance, not about the number of API calls. Hopefully “9x” API calls will provide at least “2x” performance for Mantle-optimized games.

      • cynan
      • 6 years ago

      The marketing blurb says that it can achieve this 9x increase specifically in CPU-limited environments. So yeah, maybe if you’re chugging along with a first-gen Core processor and a GCN GPU, you’d see 9x the benefit. With an i5 Sandy/Ivy/Haswell, it’s very likely much less.

        • beck2448
        • 6 years ago

        True Audio is marketing wank.

      • Theolendras
      • 6 years ago

      9x draw calls does not mean 9x more performance. The calls themselves can have varying workloads. But a heavy volume of calls is tough and can probably have an adverse effect on engine input latency. Don’t take worse shortcuts than a marketing team.

        • chuckula
        • 6 years ago

        [quote<]9x draw calls does not mean 9x more performance.[/quote<]

        I never said it would lead to 9x the performance. I did say that increasing the calls by "9x" would lead to increased [i<]utilization[/i<] of the components in the GPU, and if you are able to increase the calls by "9x" then by definition the pre-9x utilization has to be substantially lower, or else there wouldn't be operating headroom to actually increase the call rate in the first place!

        P.S. --> AMD's slides never said that they were changing the calls that were made to the GPU as you just said they did. They just said that current software can't make the calls fast enough.

          • Theolendras
          • 6 years ago

          Well, you pretty much did say it might mean we’re actually at 11% utilization, which translates pretty well into a 9x performance-increase potential. Maybe that’s not what you meant.

          It could lead to more GPU utilization in some cases, but I don’t think that’s the target, as GPU utilization does not seem to be a problem for many modern engines; they are already using it pretty much fully.

          In DirectX right now, developers are trying to keep CPU draw calls to a minimum so as not to hurt CPU utilization. Mantle has the potential to make those CPU calls more of a multi-core thing than DirectX and also to lower the CPU intervention in the pipeline for each of these calls. It might be more beneficial to CPU utilization than GPU utilization in the end.

          • bcronce
          • 6 years ago

          The two biggest limitations for drawing were the user-to-kernel-mode transition of making system calls and the lack of threading.

          My assumption is that it’s the user-to-kernel-mode transition.

      • 0g1
      • 6 years ago

      Just because you can send calls across PCI-Express 9x faster doesn’t mean the graphics are going to go any faster. It’s basically like a 9x increase in PCI-Express speed. What is that going to do for you? About 1% more fps.

        • chuckula
        • 6 years ago

        [quote<]Just because you can send calls across PCI-Express 9x faster[/quote<]

        I never said that. Where did you come up with this technology being the same thing as a new CPU-to-GPU pipe that increases throughput by 9x?

          • 0g1
          • 6 years ago

          Because that’s how the GPU receives calls — from PCI-Express. How else do you think it happens? By telekinesis?

            • chuckula
            • 6 years ago

            GPUS GET DATA OVER PCIE THEREFORE AMD IS A MIRACLE… Uh… have you ever heard of a logical non-sequitur?

            Have you ever heard of this thing called a “bottleneck” in your life? It isn’t PCI Express, or else there would have been major GPU improvements when the jump from PCIe 2.0 to 3.0 occurred… and the speedups were only minor. So no, AMD (which won’t even support PCIe 3.0 until next year and only on lower-end Kaveri parts at that) didn’t magic up some out-of-band communication scheme to improve throughput, and the +1s for these inane comments are getting pretty silly.

      • flip-mode
      • 6 years ago

      Have you done any research on what a GPU call is? Maybe that’s a good place to start.

      I spent two minutes googling and found this:
      [quote<]First of all, I'm assuming that with "draw calls", you mean the command that tells the GPU to render a certain set of vertices as triangles with a certain state (shaders, blend state and so on). Draw calls aren't necessarily expensive. In older versions of Direct3D, many calls required a context switch, which was expensive, but this isn't true in newer versions. The main reason to make fewer draw calls is that graphics hardware can transform and render triangles much faster than you can submit them. If you submit few triangles with each call, you will be completely bound by the CPU and the GPU will be mostly idle. The CPU won't be able to feed the GPU fast enough. Making a single draw call with two triangles is cheap, but if you submit too little data with each call, you won't have enough CPU time to submit as much geometry to the GPU as you could have. There are some real costs with making draw calls, it requires setting up a bunch of state (which set of vertices to use, what shader to use and so on), and state changes have a cost both on the hardware side (updating a bunch of registers) and on the driver side (validating and translating your calls that set state). But the main cost of draw calls only apply if each call submits too little data, since this will cause you to be CPU-bound, and stop you from utilizing the hardware fully. Just like Josh said, draw calls can also cause the command buffer to be flushed, but in my experience that usually happens when you call SwapBuffers, not when submitting geometry. Video drivers generally try to buffer as much as they can get away with (several frames sometimes!) to squeeze out as much parallelism from the GPU as possible. You should read the nVidia presentation Batch Batch Batch!, it's fairly old but covers exactly this topic.[/quote<]

      It seems like a complex issue that affects more than just the GPU. Maybe it's not the GPU that is the hangup; maybe 9x the drawcalls are possible because they have figured out a way for drawcalls to have a diminished effect on other parts of the system. Maybe the programmer's work is simplified because the programmer doesn't have to figure out how to load a single draw call with more data. I don't know, those are just thoughts off the top of my head. I'm not a programmer at all, other than knowing a bit of simple HTML.

      But you just seem really interested in getting in an internet argument with a fanboy rather than actually learning something or trying to understand anything about the information that was provided. I think you should stop worrying about "the fanboys". Stop correcting them, stop trolling them, stop challenging them.

        • chuckula
        • 6 years ago

        That’s great & all but you, along with some of the other responses, are making an assumption that AMD stated that they are dynamically changing the [b<]types of drawing calls[/b<] that are made to the GPU when they never made any statement of that kind.

        1. I never stated that all draw calls are equal or that every operation that a GPU performs uses exactly the same resource and has exactly the same memory/computational burden. Other people made up that assumption, and then attacked it. Stop trying to do AMD's homework for them.

        2. AMD very simply said that the current systems like Direct 3D can only make "X" drawing calls per unit of time and that their new wonderful mantle can do "9X". AMD is the entity that expressly claimed a factor of 9 improvement.

        Now, once again, if you are claiming that you can issue all these calls (the *same* call because AMD never said differently, don't put words in their mouth) to the GPU then:

        1. The GPU at only "X" is massively underutilized. If so, then my power consumption ideas are going to be proven.

        2. The GPU at only "X" isn't massively underutilized and the other "8X" calls just end up sitting in a queue somewhere until the GPU executes them... which is to say that this "9X!!" claim is AMD's way of saying you won't see any real performance improvements.

        Neither one is all that amazing.

          • flip-mode
          • 6 years ago

          Research it. That’s what I’m telling you. You are on the one hand tacitly admitting you know nothing about how it works but on the other hand demanding that it must have one or the other ramifications. If you know nothing of how it works then you can’t lecture “the fanboys” on the possible ramifications.

          In my two minutes of googling, it seems to me that drawcalls are more taxing on the CPU than the GPU. So, if you’re going to make claims regarding the ramifications of 9x drawcalls, get some data on what AMD has done; otherwise, you’re talking out your ass.

          Interestingly, cynan’s post above also mentions that AMD referred to “cpu limited scenarios”. Gee, that’s interesting.

          Interestingly #2, Theolendras’s post above mentions that 9x more drawcalls does not mean that the GPU is doing 9x more work (he used the term “performance”, I’m using the term “work”), so that means that nothing is necessarily going to have 9x the power consumption if it’s not doing that much more work.

          So, since you’re educating “the fanboys”, go do your homework first and figure out what all this means. What’s the benefit of 9x more drawcalls to the programmer? What’s the benefit of 9x more drawcalls to the machine – does it mean more performance, for the CPU or for the GPU; does it affect power consumption – of the CPU or of the GPU.

          All you’ve illustrated to me is that you’re eager to make a bunch of uneducated claims in order to call out “the fanboys”. It’s weird, man.

            • Waco
            • 6 years ago

            My only real take on this is that AMD had to do this to be able to do the same number of draw calls as a “fast” desktop CPU on the slower Jaguar cores in the XBOne/PS4.

            Fast desktop CPUs are already twiddling their thumbs waiting for the GPU to complete draw calls of significant complexity.

            Replace that fast CPU with an Atom…and the picture changes.

            • Theolendras
            • 6 years ago

            Yep, but it can also be a boon for AMD CPUs, which do not look quite so good in gaming performance. I saw in some post that Mantle will be more of a multicore-optimized thing than DirectX. Since most AMD CPUs at a given price point tend to have more threads than Intel's, it might help them more than the competition. And that's not even talking about reducing the CPU tax itself. If well executed and well received by the development community, it could help them compete a little better than in recent years.

            • chuckula
            • 6 years ago

            1. AMD says that GPUs are underutilized or else it wouldn’t be possible for them to handle “9x” the drawing calls. The fact that the bottleneck for *why* the drawing calls aren’t making it to the GPU is in the CPU is a nice piece of background information that has exactly zero relevance to the result that the GPU is being underutilized.

            2. I take AMD’s own point at face value and then point out the logical consequences of how massively increasing the utilization of a heavily underutilized piece of silicon will affect power.. it’s called physics you know.

            3. You go off on some rant about how drawing calls are set up in the CPU before being sent to the GPU and call me an idiot for… I’m not exactly sure what… the end result of your tirade is that… wait for it… the GPU is underutilized. You just call me an idiot for not going all wikipedia on the precise reasons why another part of the computer could cause a GPU to be underutilized, even though I never said that there was no such thing as bottlenecks. Wonderful.

            4. I then point out that if AMD’s own GPUs are as actually massively underutilized as this “9x” claim implies, there would be some major real-world consequences to bringing the utilization up to something near 100%. I then note that this is probably a bunch of marketing hyperbole and that in the real-world the GPU is likely *not* underutilized at such a disproportionate rate as is stated in AMD’s own slides.

            • Waco
            • 6 years ago

            I think you’re reading too much into this.

            Right now, DirectX calls are “heavyweight”. To avoid that, more is crammed into fewer calls.

            Having the ability to run more calls with less overhead can make some things easier to program. Just because the card can handle more calls (with fewer instructions each) does not make the GPU itself underutilized when you run fewer larger calls (with the exception of the actual block that does the call decoding).

            • Theolendras
            • 6 years ago

            You seem to imply that all DirectX calls are created equal. Think of a DirectX call as a delivery, for example. Developers try to send 45-foot trucks to minimize the number of deliveries. But developers find it hard to manage the sheer complexity and logistics; for some deliveries in their business model it would sometimes be preferable to increase the number of deliveries to meet customer expectations, as they want to be in the pizza business, but sending it by truck currently costs too much and is too bulky. Mantle would hopefully introduce the delivery car to the industry. Either way the customer is not necessarily starving, but it's not practical.

            PCI-Express would then be the lanes in that analogy, just as they are in reality.

            The CPU would be the shipping guy that does the heavy lifting, and the customer would be the GPU.

            • chuckula
            • 6 years ago

            [quote<]You seem to imply that all DirectX calls are created equal.[/quote<]

            No, AMD did (and they said it expressly, not implicitly). They didn't make a slide that says: one Direct 3D call is the moral equivalent of 9 smaller Mantle calls. They did say: 9x the call rate of Direct 3D.

            If AMD posted something that is incorrect... then that's AMD's issue.

            • flip-mode
            • 6 years ago

            In order for your first post to be less sucky, you just needed fourth and fifth options:

            4) Issuing 9x the GPU calls is merely a convenience for developers so that coding is easier.

            5) Issuing 9x the GPU calls is only useful in CPU-limited situations.

            Instead, you posted some trollspeak, and now you're either blaming other people for misunderstanding you or you're blaming AMD for poor communication. All the game developers in the room probably understood AMD's statement just fine, and probably not a single one of them had the reaction that you are having. It is not likely that AMD is going to get up in front of a bunch of people and game coders and make a developer-centric claim that some non-developer, average-guy-on-TechReport is going to expose as utterly silly marketing speak.

            You, though not a game developer, think you have some kind of special insight into this situation. You don't know what a GPU call is or how it works or what it means in respect to other computer subsystems, yet you think you know that 9x the-thing-that-you-don't-have-any-understanding-of is [i<]obviously[/i<] going to result in one of only two possible outcomes that you do happen to be able to relate to, or else it is just silly marketing speak. That's some hubris.

            Anyway, I don't care. I just thought that your initial post was silly and seemed like you were out trolling for fanboys because you have nothing better to do, and I honestly think you can find better things to do with your time.

            • Pwnstar
            • 6 years ago

            It’s not AMD’s fault that Microsoft can’t code DirectX properly.

            [quote<] AMD says that GPUs are underutilized or else it wouldn't be possible for them to handle "9x" the drawing calls[/quote<]

      • WaltC
      • 6 years ago

      Well, one thing your post proves is that this announcement has sure brought out the fanboys of another persuasion to try and pooh-pooh the whole thing without knowing anything concrete about Mantle themselves.

        • Theolendras
        • 6 years ago

        I’m not so sure yet. It depends a lot on their DirectX translation tool, I’d say. There are real concerns to be had about dividing the market into architecture-specific APIs.

      • wof
      • 6 years ago

      Errm, no 🙂

      Read the things under "Minimize the Number of Draw Calls" in this nVidia guide: [url<]http://http.developer.nvidia.com/ParallelNsight/1.51/UserGuide/HTML/Profiling_A_Graphics_Frame.html[/url<]

      So the problem isn't invented by ATI...

      Short attempt at an explanation for non-programmers: current APIs allow for very few calls per second. In fact, so few that if you have a lot of different materials or lots of small individual pieces of geometry, it will be faster rendering the thing in software than using the calls to get it done by the card. So unless you go to great lengths to try to make your content as dumb as possible, with few (but large) textures and very large pieces of geometry, you will have crappy performance. Every graphics programmer knows this.

      So if AMD has found a way to bunch up these calls and send a bigger batch of calls to the GPU at once, they should easily improve on this. Does it matter? Yes, but not to a current super-streamlined game using already dumbed-down geometry. Poorly optimized games would be faster, and future games could have more cool stuff in them.

      • sschaem
      • 6 years ago

      It's been documented by EVERYONE, Nvidia included.

      [url<]http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/2[/url<]

      "On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call."

      The way people get around that is by spending a lot of effort on reducing draw calls, or simply relying on the massive CPU power of PCs to pick up the slack of the Direct3D API. But when you have a mobile APU, it makes sense to remove that artificial limitation.

      This is NOT claiming a 9x speedup in frames rendered; it simply means you will need less CPU power to achieve the same number of draw calls. But that's not all Mantle is about, especially for multi-core.
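
      One typical CPU-side way engines spend that effort today, roughly sketched in C++ (the Mesh/Vertex types are made up for the illustration): merge small meshes that share a material into one vertex/index buffer so the whole group goes out in a single draw call.

      // Illustration only: merge meshes sharing a material so one call draws them all.
      #include <cstdint>
      #include <vector>

      struct Vertex { float x, y, z; };

      struct Mesh {
          std::vector<Vertex>   vertices;
          std::vector<uint32_t> indices;   // indices into this mesh's own vertex list
      };

      // Concatenate the meshes, rebasing each mesh's indices onto the combined buffer.
      Mesh mergeMeshes(const std::vector<Mesh>& meshes) {
          Mesh merged;
          for (const Mesh& m : meshes) {
              const uint32_t base = static_cast<uint32_t>(merged.vertices.size());
              merged.vertices.insert(merged.vertices.end(),
                                     m.vertices.begin(), m.vertices.end());
              for (uint32_t idx : m.indices)
                  merged.indices.push_back(base + idx);
          }
          return merged;   // upload once, draw once
      }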

      • tipoo
      • 6 years ago

      This equivalency doesn’t make any sense. 9x more draw calls doesn’t mean 9x more total work. It just means draw calls. On PC, developers usually do things to utilize the GPU well with less draw calls.

    • chuckula
    • 6 years ago

    OK AMD fanboys repeat after me:

    Mantle is AMAZING… because.. uh… AMD!
    Physx is EVIL… because.. uh.. Nvidia!

    (repeat over & over until you convince yourselves that contradiction is good)

    The deeper issue with Mantle is that people have no sense of history. Back before there were those nasty APIs like OpenGL and Direct 3D, [b<]everything[/b<] was written using the "Mantle" model; they just didn't dress it up with marketing buzzwords back then. It meant that your software was either locked into a single graphics processor from a single company, or you had to re-write the same piece of software over & over again for each piece of hardware that you wanted to support. Then, as time went on, people started to think about how they could abstract away the variations between hardware and... boom, OpenGL/Direct 3D/etc. were born.

    There's one cloud to the supposed silver lining of Mantle: Guess what this does for AMD's ability to introduce a brand new GPU architecture that moves away from GCN in the future? Nothing good, that's the answer. When you look at the standard units in GPUs today compared to just 5 years ago, they are radically incompatible at the binary level. APIs like D3D and OpenGL hide these massive incompatibilities so software can still run. "Bare-metal" programming means you are going to be stuck supporting the same architecture for a LONG time unless you want to re-write the same software over & over again.

      • Zizy
      • 6 years ago

      NV cannot run Mantle because of hardware differences; it isn't like PhysX, CUDA, TressFX or whatever else.

      As for the rest – I mostly agree with AnandTech here.
      No sane dev is going to cut out such a vast market of NV and Intel hardware. Therefore, games will continue to offer a DX11 path. It will just make porting from X1/PS4 easier. There's no need to ditch low-level optimizations that were already made.
      So, the situation isn't like the past (dark) ages. Devs can target just DX11, or possibly boost one (or more) architectures. AMD will probably be the first to optimize for, even if Intel and NV make their own Mantles – because of consoles.

        • HisDivineOrder
        • 6 years ago

        You must not remember those “past (dark) ages.”

        Glide mostly preceded, then eventually co-existed with both OpenGL and DirectX for a time. Glide was just more efficient than OpenGL and DirectX back in the early days. It died because it required developers to support and flesh out yet another API for a marginal part of the community already served by the two more open standards (which also continue to exist). It was cheaper to just produce a DX version or an OpenGL version and, hey, it may perform a little less optimally, but at least everybody's GPUs could run the game.

        Yeah, they could produce a game in both DX11 and Mantle/Glide, but why should they do it if they aren’t being paid by AMD? Double the support costs? Double the patching for engine problems?

        In every way, it is Glide but for AMD. Fortunately, few developers are going to take the bait, mostly because they won't want to have to support another API just for the hell of it and because the majority of publishers are going to balk at spending any man-hours coding for an engine that only 40-ish% of the already small PC gaming market has the GPUs for… and of those, how many would even have PCs capable of really seeing the framerate/frame-latency advantage?

        It’s not a trivial cost or investment of time/manpower involved even if it is minimal at the outset.

        However, you will see the usual suspects support it. I expect EA to support it since they’re making a lot of noise with it right now and I expect Square-Enix to get behind it because, hey, they even did TressFX. Perhaps an indie or two.

        Mostly, though, Mantle will be seen like Glide by companies trying to minimize their investment to get into the PC gaming market. In a time when companies are trying to reduce the cost of games, this has the potential to increase the cost. If you want to code to Linux, you code to OpenGL, a spec any publisher or developer is already going to know pretty well given they’re doing it for PS4 and Android.

          • Zizy
          • 6 years ago

          I do remember those times. My point wasn't that Mantle is different from Glide in any way, because they are the same shit. My point is that the EFFECTS will be different. Back in those dark ages, developers did target Glide alongside the not-yet-established OGL and DX. Result = crappier DX code and a worse situation for consumers. Nowadays, they will not code specifically for Mantle with DX as an afterthought. It will get used only because developers can reuse existing console-optimized code.
          Same as why PhysX isn't a problematic physics library – nobody is going to use its GPU part seriously when it only cuts out a vast market of potential customers.

          • Mr. Eco
          • 6 years ago

          I would disagree. Those 40-ish% of the game market will see a tremendous boost in graphics power, and thus would be able to show many more effects – be it cloth, women's hair, or some really useful and nice physics effects.
          That is quite a lot of advantage over games without such effects.

            • Klimax
            • 6 years ago

            They won’t see any of it. Development of graphics is already hard; this is even worse. It’s proprietary crap, usable by a few percent of devs, for little in return and at high dev cost.

            And knowing drivers, I wouldn’t be surprised if there will be significant glitches and a subpar API. (Creating a sane and stable API is hard. Ask the people who created DirectX.)

            • travbrad
            • 6 years ago

            [quote<]Those 40-ish% of the game market will see a tremendous boost in graphics power[/quote<]

            As far as I can tell Mantle will only work on GCN architectures (7000 series and later), and it'll be a long time before 40% of the market has a 7000 series or later AMD card. If the Steam hardware survey is accurate, only about 5% of PC gamers have a 7000 series card. That's a very small group of gamers to dedicate separate code and development resources to, unless it's almost trivial to do.

          • Concupiscence
          • 6 years ago

          Direct3D and OpenGL both tried to support a full initial feature set on hardware that wasn’t necessarily ready for it. Some chips took a big performance hit from z-buffering (like the Rendition V1000); some had issues with incomplete feature support (the Matrox Mystique didn’t support bilinear filtering, alpha transparency, or antialiasing, and the Rage Pro didn’t perform bilinear filtering on alpha textures); and some were just achingly slow if you asked them to do anything but display output on par with software rendering at 16-bit color (hello, S3 ViRGE). One of the big reasons Glide succeeded was because it was basically a low-level rasterization library that allowed for easy porting from software rendering with immediately perceptible results. Other APIs weren’t necessarily bad, but the limitations of the hardware needed more TLC on behalf of developers, and when time costs money the solution that’s easiest and most gratifying is an obvious choice. Glide eventually fell by the wayside due to 3dfx’s declining market share, the increasing ability of other new cards to handle the abstraction overhead of Direct3D/OpenGL, and its own creeping obsolescence due to feature lag.

          Mantle makes sense in the context of the console market. Reducing CPU draw calls matters when you’re running at 1.6 GHz, even with eight cores available. Porting it to the PC afterward is a little… odd, but if AMD manages even a handful of high-profile titles with Mantle support AND substantial performance benefits to go along with it, that could be a real coup. I’ll reserve judgment until it’s been out in the wild for a while.

          • mesyn191
          • 6 years ago

          The difference is today most game developers don’t write their games from the ground up.

          They use middleware. For physics. For audio. For the game engine itself.

          Many if not most or even all of these middleware makers are going to be supporting Mantle by default for the consoles.

          Therefore game developers will essentially have Mantle support by default much if not most of the time. They’ll have to essentially port their games to a DX11.whatever path to run on windows for Intel/nV hardware.

          • Theolendras
          • 6 years ago

          If you read it right, Mantle should have a Linux implementation as well. I would find it somewhat stupid if not for the fact that much console development will probably already use it, so the Mantle code path should not cost that much for developers to support on the PC. Why waste those optimizations?

          Done right it might even enable my Phenom II PC to play BF4 reasonably well in 1080p, something I was kind of doubting before that announcement.

        • Theolendras
        • 6 years ago

        Bingo! Mantle is more akin to CUDA than to PhysX; in fact, a CUDA more focused on rendering than computing. I personally like the concept. A DirectX version is almost a given anyway, and this is the benefit of sweeping all the console architectures at once. PhysX is also a great concept; I would also like it if they provided a DirectCompute or OpenCL code path, even though it could still be closer to the bare metal on Nvidia's own hardware with a CUDA one, thereby favoring its own hardware. There is also the fact that a multiplatform effort should be much easier than with a DirectX code path, and that a DirectX path is supposedly not difficult since it is HLSL-compatible.

        I guess this is something Gabe Newell intended to exploit with SteamOS from the start. If DirectX can get some competition, that's fine; they are getting lazy because of their dominant position. I'm not much into 3D rendering, but I'm told that feature-wise it's still quite good, though the overhead is a handicap.

      • HisDivineOrder
      • 6 years ago

      I wouldn’t put it past AMD to drop support for a product that quickly, though. For example, they drop current driver support for their GPUs at a much faster clip than nVidia does.

        • Klimax
        • 6 years ago

        Tessellation via N-patches. AMD dropped it from cards and drivers much faster than Nvidia did…

      • Mr. Eco
      • 6 years ago

      Your last paragraph is a bit messed up, or unclear.
      [quote<]...Guess what this does for AMD's ability to introduce a brand new GPU architecture that moves away from GCN...[/quote<]

      Direct3D and OpenGL do not magically support a newer architecture. Should AMD or NV introduce a newer architecture, they still have to write the drivers to implement the API for Direct3D, OpenGL, Mantle or whatever. And it would work in the same way as it does now - any software that uses Mantle will continue to work, because AMD will provide Mantle drivers with the same (and more) API methods as the previous architecture - just like it is for D3D and OpenGL.

        • chuckula
        • 6 years ago

        Uh.. yeah… I never once claimed that an API automatically supports a new GPU architecture. That’s up to the *driver* to implement… at least it is right now anyway. Given AMD’s driver problems, this is a great way for them to pass the buck on to game developers so AMD doesn’t have to expend the effort to improve their drivers.

        Once again: Game developer writes code for Mantle: YAY!
        AMD introduces new GPU architecture: Bye Bye Mantle code!
        AMD doesn’t introduce new GPU architecture: Mantle code still works, but is AMD going to be better off in the long run?

          • Concupiscence
          • 6 years ago

          Well, not necessarily. Apps written for OpenGL 1.1 still work with a 4.4 driver, so long as the driver implements a workaround for the undersized extension storage buffer present in older applications. Mantle could have a similar fallback mechanism. At this point it’s simply too early to wonder about how gracefully hardware deprecation will be handled. Let’s hope they’ll learn from the two dominant APIs and create a sane support solution – Direct3D tends to reinvent itself dramatically with each major version change, and OpenGL tends to cling to its past very tightly. A middle road would probably be ideal…

          • sschaem
          • 6 years ago

          You got it all wrong.

          Mantle code doesn’t need to be rewritten. It’s no different than DirectX.

          We’ve had over 30.. yes ***30*** versions of Direct3D so far.. some required massive code rewrites.
          (ex: Ever ported dx9 code to dx10?)

          The DirectX design and architecture is inefficient. The way draw calls have been fixed in Mantle is not specific to GCN. Case in point: the Xbox 360. The 360 is NOT GCN-based, yet it supports 10x better API efficiency. Same goes for the PS3.

          What Mantle does is bring a modern, high-performance, high-efficiency design for a multitasking graphics API.
          Like Direct3D, it sits on the same low-level driver layer. Like DirectX and OpenGL, it uses a high-level language (HLSL/GLSL), etc…

          And I wouldn’t be surprised if AMD provides a version of the Mantle API that leverages OpenGL.
          You would get a 10x drop in efficiency in some cases, and would have to code for feature differences, just like with DirectX…

          If your fear is that when we move to rays vs. triangles, Mantle code won’t take advantage of the new HW architecture… well, guess what: it will be the same if you coded your engine using OpenGL or Direct3D.

          This is actually one of the best things that could have happened to this industry!

          Mantle is a huge benefit to gamers & developers; there's no reason to poop all over it…
          People that want it to go away? Nvidia/Intel and their loyalists.

            • bjm
            • 6 years ago

            Exactly. And further, if the developers ever needed to, they can now go down to the assembly level for further optimization and still have it applicable to all platforms since it’s the same hardware. That level of optimization has always been done on the PC side, but it took a significant nose-dive in practice as console development took focus away from the PC. Now, with the consoles having the same hardware, it’s not too hard to foresee such bare-metal optimizations coming back and going further than what even Mantle provides.

            Mantle, DirectX and OpenGL can always be there as a fall-back for other architectures, but there is far more reason now to optimize for the AMD platform than there ever was for any platform before. Mantle is just there to make this easier.

      • Mr. Eco
      • 6 years ago

      As for “PhysX is evil” – actually all gamers wanted it to be successful. It was NV decision to lock it to work only on NV hardware. I definitely wanted the physics to take off, after the initial attempts in HL2. Instead 10 years later I see only demos of cloth pieces from NV, and some wet hairs from AMD.

        • Voldenuit
        • 6 years ago

        [quote<]As for "PhysX is evil" - actually all gamers wanted it to be successful. It was NV decision to lock it to work only on NV hardware.[/quote<]

        Some blame also has to go to Ageia for failing to optimize the x86 codepath (some say deliberately sabotaging it), in a bid to make their PhysX PPU add-in card more desirable. Ironically, the PPU is no longer supported today.

        Too many companies are trying to push proprietary standards, and even when they succeed in the market (Microsoft, I'm looking at you and XInput), it's not always the best thing for the industry. I've been impressed by the way AMD has been handling things like TressFX, letting it run (albeit poorly) on competitors' (is there a need for a plural when there's only one?) cards. [s<]Hopefully Mantle will follow suit.[/s<]

        EDIT: After some reading about how Mantle is a low-level API targeted at GCN, I doubt it will be transferrable (or worth transferring) to nvidia GPUs. That's what I get for posting before I read up on something. 😛

      • bjm
      • 6 years ago

      The Mantle API is lower-level than DirectX, but it’s not assembly. AMD’s diagram has a Mantle driver between the API and the hardware instructions. Should that diagram prove accurate, then clearly there isn’t a 1:1 relationship between the API and the hardware (as opposed to true bare metal programming). Mantle has more in common with DirectX than the assembly-like programming methods you are comparing it to.

      So, given the information presently available, your assertion that Mantle is so low-level that developers will have to re-write the same software again and again is idiotic. And despite the console comparisons going on here, they are coding against the API, not the hardware. This also makes your assertion that AMD locked themselves into the GCN architecture equally as idiotic since they can just develop a new Mantle driver to interface the architecture to the API. As such, it’s possible that Mantle can be opened up to other architectures and become a standardized API. Will it? We’ll hopefully find out in November.

      Quite frankly, chuckula, I expected a more technically sound troll post from you. You disappoint me. </shakeshead>

        • chuckula
        • 6 years ago

        [quote<]So, given the information presently available, your assertion that Mantle is so low-level that developers will have to re-write the same software again and again is idiotic.[/quote<]

        No, it's dead-on right. Or are you saying that Mantle works with all those Nvidia cards out there.. oh wait, no it doesn't, so yeah, I'm right and the rest of your post is just wishful thinking.

          • bjm
          • 6 years ago

          Sure, AMD opening up Mantle to other third party architectures is wishful thinking, but you are still wrong. Mantle doesn’t have to work on nVidia cards for your entire point to be blown out of the water. Here, I’ll remind you what you said when you tried to clarify your point:

          [quote<]I never once claimed that an API automatically supports a new GPU architecture. That's up to the *driver* to implement... at least it is right now anyway.[/quote<]

          To make it clear: [b<]There [u<][url=http://www.anandtech.com/show/7371/understanding-amds-mantle-a-lowlevel-graphics-api-for-gcn<]still is[/url<][/u<] a driver responsible for providing Mantle API support to the GPU architecture.[/b<] This isn't 1:1 assembly code like you are claiming. Using the Mantle API does not mean that developers have to re-write software for every generation of AMD GPU, nor does it lock AMD into the Mantle API.

          I'm not sure if you're just trying to troll or simply too inept to understand the technical details, but whatever it is, it's flat out wrong.

      • cynan
      • 6 years ago

      Liking Mantle, but not PhysX is not a contradiction as both are possible. A contradiction has to be more concrete. For example, I love the HD 7950 because I love all graphics cards based on Tahiti GPUs, but I hate the HD 7970.

      However, if you truly believe that Mantle to AMD = PhysX to Nvidia, you could possibly defend the position that liking one but not the other is logically inconsistent or incoherent.

      And people don’t convince themselves that inconsistent or incoherent beliefs are good. Rather, they convince themselves that the beliefs aren’t inconsistent or incoherent in the first place, because of the cognitive dissonance they perceive when this inconsistency is recognized on some level.

      • dragontamer5788
      • 6 years ago

      You've got a point, but I want to temper it a bit.

      With all of the OpenGL extensions (ex: AMD_pinned_memory, or even “standardized” extensions like GL_ARB_copy_image have been NVidia-only till recently), there is a lot of GPU-specific code that has to go on even in modern engines.

      Life isn’t perfectly compatible between AMD / NVidia GPUs, and it never has been perfect. As long as different GPUs are optimized for different tasks, you can bet that these companies will put out proprietary extensions.

      Furthermore, you look at OpenGL 4.x and compare it to OpenGL 1.x, and notice that it is _completely_ different. Ditto with DirectX6 vs DirectX11. Graphics APIs change over time, and even the standards (OGL / DX) change as GPUs change. The famous DX9 vs DX10 split resonates across games… with many games having entirely different implementations of their engine.

      These APIs are not designed for backwards or forwards compatibility. They are all based on hardware at the time, and what they can do.

    • odizzido
    • 6 years ago

    I hope the 290X kicks ass. I am running a downclocked 5850 ATM.

    • Tristan
    • 6 years ago

    AMD Mantle = NV CUDA. So they like proprietary standards now?
    Let them force M$ to use Mantle from C++ AMP 🙂

      • Klimax
      • 6 years ago

      Microsoft has its own API for that: DirectCompute (also compute shaders [url<]http://msdn.microsoft.com/en-us/library/windows/desktop/ff476331(v=vs.85).aspx)[/url<]

      • maxxcool
      • 6 years ago

      Mantle IS Microsoft's. The API they reference is a direct port of the HAL/GPU API from the Xbone. For AMD this is awesome.. for us, not so much.. but it WILL mean AMD will have a better D3D/OGL implementation on ports from the Xbone and for games AMD pays to use their code.

    • yogibbear
    • 6 years ago

    All this Xbone use makes me think they should’ve made the case look like a bundle of bones.

      • Arclight
      • 6 years ago

      More like a bundle of sticks.

        • steelcity_ballin
        • 6 years ago

        Subtle…

        • willmore
        • 6 years ago

        I’ll huff and I’ll puff and blow your marketing plan down.

    • Klimax
    • 6 years ago

    TrueAudio… as if we didn't have EAX already. (Yet Another Piece of Proprietary Stuff)

    Frankly, DirectSound and XAudio at the system level, or EAX if you want something that already exists. (Or OpenAL if you want cross-platform.)

    Re: the Mantle API
    And if you want lower-level access, then why not go with DirectCompute (or equivalent) outright? (Vendor-neutral.)

    ETA: Apparently I forgot that EAX is no more for new titles. No need for a new attempt at lock-in…

      • HunterZ
      • 6 years ago

      Sounds like it’s just another Xonar-like feature set, which can do things like using HRTF modeling to downmix 5.1 to stereo headphones, but probably doesn’t do anything to provide new features to game developers via proprietary APIs or anything like that.

      Basically they’re just making it viable to connect to a sound system via HDMI without sacrificing goodies that you’d get from an addon sound card.

      I wonder if it supports uncompressed 5.1 LPCM encoding?
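      For anyone curious what "HRTF modeling to downmix to stereo" boils down to, here is a rough sketch of the generic approach (my assumption of how such a mixer works, not anything AMD has documented): each source is convolved with a left and right head-related impulse response (HRIR) for its direction and summed into a stereo bus. The function and buffer names are made up for illustration.

          #include <cstddef>
          #include <vector>

          // Naive time-domain HRIR convolution; real mixers use FFT/partitioned
          // convolution. Assumes outL/outR are zero-filled and sized like mono,
          // and that hrirL and hrirR have the same length.
          void mixSourceToStereo(const std::vector<float>& mono,
                                 const std::vector<float>& hrirL,
                                 const std::vector<float>& hrirR,
                                 std::vector<float>& outL,
                                 std::vector<float>& outR) {
              for (std::size_t n = 0; n < mono.size(); ++n) {
                  for (std::size_t k = 0; k < hrirL.size() && k <= n; ++k) {
                      outL[n] += mono[n - k] * hrirL[k];
                      outR[n] += mono[n - k] * hrirR[k];
                  }
              }
          }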

    • ronch
    • 6 years ago

    [quote<]Hawaii is limited to the R9 290X, but there are other members of the R9 series that appear to be based on older silicon. The Radeon R9 280X will be priced at $299 and will come with 3GB of RAM. It looks like it might use the Tahiti silicon familiar from the Radeon HD 7900 series. Then there's the Radeon R9 270X, which will be a 2GB card priced at $199. If you have a stricter budget, there are a couple of cards in the Radeon R7 series. The Radeon R7 260X appears to be based on the Bonaire chip that currently powers the Radeon HD 7790. It's slated to come with 2GB of GDDR5 memory and to ring in at $139. For under $89, you can look forward to the R7 250, which probably isn't much to look forward to at all. At least it comes with a gig of GDDR5 RAM.[/quote<] I wish AMD and Nvidia stop rebranding old silicon as parts of a new generation. Still, that audio built inside these things (integrated?) looks interesting. Perhaps TR should see how well they sound compared to some discrete audio cards out there. Was it designed by AMD entirely in-house?

      • HisDivineOrder
      • 6 years ago

      I swear it seems like a lot of people are forgetting that modern discrete cards on the high end and mid-high range include HDMI audio out to a receiver already. This has been true for a long time. Audio out from your GPU isn’t even extraordinary.

      The only thing amazing about this is their claims of 24.1 virtualized down to 7.1 with spatial ranging of up and down (verticality) with use of some reverb that hasn’t been …heard since the old glory days of 2.1-only Aureal3D.

      It’s a great idea. If it were an open standard and it were being done by MS with DirectX or even Valve with SteamOS or some form of OpenAL or anything at all a standard, I’d be on the rooftops saying, “Yay, this is great.”

      But it’s AMD trying to pull a PhysX with audio. It amazes me to watch all the people so furious about PhysX praise AMD for the new audio non-standard they’re building into a small selection of their GPU’s. I’ve said from Day 1 I thought PhysX was a bad idea even though I have a Geforce card. I’d say it if I owned a Radeon. It really doesn’t matter. Proprietary nonsense is nonsense. Sure, they’re within their rights to do it, but it doesn’t help the market.

      AMD doing it with audio is just as nonsensical and just as useless.

      And Mantle is worse than that…

        • Mr. Eco
        • 6 years ago

        I do not get your points – to my understanding Mantle is an open API, read so on the other TR article. I hope the audio part will be open too.

          • gosh
          • 6 years ago

          And Mantle works on the Xbox One and PlayStation 4

            • Arclight
            • 6 years ago

            “The Mantle belongs to us” (Intel forerunners)

        • abiprithtr
        • 6 years ago

        If I remember correctly, wasn't the PhysX option deliberately (that is, actively) disabled when it detected AMD hardware?

        As long as AMD does not do such a thing, shouldn't a 'closed' rather than open system (TrueAudio) be OK?
        Of course, all this is based on the presumption that nVidia was doing what I said they were doing.

        (I am ready to defend nVidia even if nVidia were doing that. Say… nVidia's deliberate detection of AMD GPUs and prevention of PhysX running on them could simply have been to protect against malfunctions and a bad PhysX experience.)

          • destroy.all.monsters
          • 6 years ago

          Yes, nvidia deliberately wrote it that way in order to stop people from using mixed-card systems. Yes, they publicly lied and used the same sorts of excuses you mentioned. That's not defensible. Similarly, they patched out support for the PPU in mixed systems. Both were hacked back in by enthusiasts some time ago.

        • mesyn191
        • 6 years ago

        TA is nothing like Physx.

        PhysX is a software GPU physics package made proprietary via licensing restrictions that limit it to nV GPUs.

        TA is a hardware feature that only AMD GPUs have on the PC (and which appears, in some form, to also be in the Xbone). It is not a standard or even an attempt at one.

          • HisDivineOrder
          • 6 years ago

          [quote<]The challenge for AMD is that they’re going to need to get developers on board to utilize the technology, something that was a continual problem for Aureal and Creative. We don’t know just how the consoles will compare – we know the XB1 has its own audio DSPs, we know less about the PS4 – but in the PC space this would be an AMD-exclusive feature, which means the majority of enthusiast games (who historically have been NVIDIA equipped) will not be able to access this technology.[/quote<] Anandtech seems to suggest Xbone (and probably PS4) have different DSP's.

            • mesyn191
            • 6 years ago

            TA is supposedly a more powerful version of the Xb1 audio block. It can do stuff like HRTF, which the Xb1's audio subsystem can't.

            The PS4 doesn't have anything like TA or the Xb1's audio subsystem. It has something closer to what was in the X360, which isn't bad, but it is totally outclassed in comparison.

    • End User
    • 6 years ago

    – The 290X should be an R10, as it is the only Hawaii-based product.
    – Is Mantle just for the 290X?
    – They price everything but the 290X. :(

      • Alexko
      • 6 years ago

      Mantle will work on every GCN chip, HD 7000s included (except for the VLIW4/5 rebrands, that is).

    • Unknown-Error
    • 6 years ago

    WOW! The amount of hype for this non-event has been painful. Lots of blah, blah but very scarce on details. And what happened to that 9,000+ Firestrike score? A $600 290X smashing the $1,000 Titan?

    • bjm
    • 6 years ago

    The Mantle API is definitely the most interesting tidbit of news today. I’ve been waiting to hear more from AMD about this API since they first mentioned HSA. Once they have a defined architecture, the next logical step is to have an API.

    It definitely looks like Mantle is the culmination of all the work put into HSA, or at least a very significant component of it. Remember the executive from AMD who shouted from the rooftops that there will be no DirectX 12? Well, now we know what they decided to do instead. With AMD now having an audio stack, they can provide a full API that covers audio, graphics, and general processing. Combine this with AMD’s CPU and GPU being inside every next-gen console, and you have yourself a company that is in a very powerful position.

    This hasn't translated into any hard-earned cash just yet, but man... who would've thought their outlook would be so bright 18 months ago? I'm surprised to be saying this, but I can't wait for AMD's developer conference in November, especially to find out where Linux fits into their plans.

      • HisDivineOrder
      • 6 years ago

      Don’t forget they can bundle in hardware-assisted physics and do an end-run around PhysX. If the API were an open standard, I’d say this was great news. Anything to break the stranglehold MS has on PC gaming.

      But Glide 2.0 is not what we need. We need it to be a neutral third party or at least a cooperation between the three major GPU companies (AMD, nVidia, and yes even Intel) to ensure everything is kosher.

      Instead… we have the past repeating itself. The next step is for nVidia to announce a CUDA-based variant of their own.

      Cue Yoda.

      “Begun, the API Wars have.”

      Behold the future of gaming. Instead of you not getting cloth that moves realistically or extra stompy effects on the ground, you’ll have entire games that don’t work on large swaths of PC gamers’ rigs on both sides. And anyone hoping integrated GPU’s one day get good enough for some light/midrange gaming, forgetaboutit. Suddenly, you’re going to watch PC gaming fragment into a bajillion pieces. Oh, it seems great now, right? BF4 for AMD, yay!

      Imagine Intel pays for the next GRID to be exclusive to their integrated GPU’s.

      So next year, let's imagine AMD is still around and pays to get exclusives outright. Suddenly, there is no DX version. And even if AMD didn't do that, opening Pandora's box means nVidia's already grinning to themselves at the thought of Call of Duty, exclusive and green.

      The PC gaming market is nascent and growing. Why would you dredge up the very worst of the past, AMD? Why can’t AMD and nVidia and Intel work together just enough to fix this problem without resorting to proprietary nonsense?

      Used to be, one could count on at least MS to give a damn and make them all get along. Now they profit from that not happening, so… unless Valve and SteamOS can prove to be a stabilizing force, let's hope nVidia resists their baser instincts…

        • Klimax
        • 6 years ago

        Glide 2.0 vs. OpenGL 1.0, and EAX 666.0 vs. Aureal 999.0 vs. DirectX/XAudio (a better case, or how about SoundBlaster vs. Roland vs. Gravis Ultrasound vs. AdLib?)

        Frankly, DirectX is good. (Open APIs are generally killed by vendors pushing proprietary things for lock-in/advantage; see OpenGL and its extension hell.)

        (Aka mostly agree with you)

          • bcronce
          • 6 years ago

          Aureal won, but then promptly lost.

            • Klimax
            • 6 years ago

            Didn't matter. There were two proprietary things competing in the marketplace, and they got replaced by a software-driven, vendor-neutral stack. (Microsoft: Core Audio + XAudio; OpenAL as the open option.)

            • Theolendras
            • 6 years ago

            The only reason AMD has any reasonable chance of getting this going is cross-development between console and PC, a phenomenon never before seen to this extent. Other initiatives besides OpenGL and DirectX will have a hard time getting attention from developers.

            • Klimax
            • 6 years ago

            I don't want any initiative like this. They took the worst ideas from the 90s and reintroduced all the idiocies Microsoft wanted to eliminate. (OpenGL was too problematic for games for a long time.)

            Frankly, AMD is extremely shortsighted here: they see their own lock-in API as a means to best NVidia, but fail to see that it restricts their ability to change architectures.

            And lastly, their drivers don't inspire much confidence that this will work well.

            • Theolendras
            • 6 years ago

            I see this as well; the pitfall is enormous, and it worries me too. But I'm willing to give them the benefit of the doubt until developers comment on their DirectX translation/porting tool. Best case, we get some more performance for little effort; worst case, DirectX or OpenGL ports suffer because developers lose focus across the added diversity.

            • mesyn191
            • 6 years ago

            For single GPU configurations AMD drivers are fine. CF needs lots of work still though.

            I wouldn't doubt AMD would be happy to achieve vendor lock-in, but I don't think they expect that to happen. AMD saw they had an advantage they could leverage (i.e. universal console support plus MS letting DirectX stagnate) and took it. If we can get some extra performance for free, so much the better.

            • Klimax
            • 6 years ago

            Actually, the biggest problem is that such low-level access is inherently tied to the hardware implementation. Effectively, AMD won't be able to change architectures, because their drivers won't be able to deal with games effectively, and in turn they won't be able to react to NVidia. (They don't have the resources to keep both paths in the driver.)

            A similar thing already happened: the Pentium 4. (A radical change caused massive performance regressions in older code; new code like A/V processing was able to benefit.) And there were no drivers there to make things even more complex…

            Also, I suspect that it will significantly tax middleware providers, most likely increasing costs for users.

            (Drivers have been a problem performance-wise even on single-card solutions.)

            • mesyn191
            • 6 years ago

            The API is not tied into the hardware like some sort of ISA and the situation is nothing like the P4 at all on either the software or hardware side of things.

            Apparently you can use DirectX HLSL of all things and compile as needed for Mantle.

            The x86 market is pretty much built around backwards compatibility. That has never applied to GPUs. The GPU IHVs have been introducing new hardware and software features, including custom instructions exposed through other APIs, and abandoning them constantly over the years.

            • Klimax
            • 6 years ago

            The API may not be tied to hardware per se, but in order to even beat DirectX/OpenGL, you'll have to code against the hardware implementation. (That's where my analogy lies: the P3 and P4 were different implementations, but compatible from the programs' point of view.) Don't forget, many optimizations are done in drivers, but here they won't be possible. (Aka another attempt to shift the burden onto somebody else.)
            And if they just cross-compile, there won't be much change, and it still requires a brand new code path for development, testing, and optimization, which is a much bigger and more error-prone job.

            Just note: what you described later was the situation with DirectX 9 and lower (capability bits, which were demanded by the GPU makers themselves), but it doesn't apply anymore to 10 and higher.
            BTW: tessellation by way of N-Patches… (a perfect example)
            Note: OpenGL was and still is there. Extensions…

            • mesyn191
            • 6 years ago

            Not necessarily. Neither DX nor OpenGL is immune to the problems associated with design by committee.

            As someone else noted, getting 9x the calls is probably more of a bug fix than some sort of special hardware optimization.

        • Sagia
        • 6 years ago

        That's why we still need Microsoft to beat all of this proprietary stuff.

          • HisDivineOrder
          • 6 years ago

          We need a neutral third party. It doesn't have to be MS, but it has been in the past. I don't think it will be now unless MS takes the threat of Steam as a fire lit under their butts and moves to reshape Windows into the gamer's OS it should have been. The secret of Windows has always been that gaming sold faster and faster PCs, yet MS ignored the gamers on whose backs the OS was built.

          This is much the same as the way Steve Jobs treated the earlier versions of iOS. Eventually, though, they figured out that gaming mattered and suddenly you have Apple (post-Jobs admittedly) paying EA in ways that are not money to make PvZ2 exclusive in a very console-like way.

          One hopes that perhaps the threat of SteamOS, Steam Machines (I prefer the name Steam Engine or even Steam Box to this really) and a repeat of the Android Effect will put MS into full-on panic mode that will lead to them taking Windows as a gaming platform seriously again.

            • Klimax
            • 6 years ago

            DirectCompute? (One of the original authors of DirectX argues for this approach.)

            • Sagia
            • 6 years ago

            One of the reasons I said Microsoft should beat the other APIs out there is that I think Microsoft is the only company that can control, enforce, regulate, and standardize an API so any software can use it. Yes, I know it's closed source and can't be used on any operating system besides MS Windows, but OpenGL as the standard API? Some companies have modified their OpenGL implementations (different extensions, different features) and might even make their own forks (Glide, for example) that are exclusive to them. The same thing applies to AMD's Mantle, which is exclusive to the GCN architecture. If all next-gen games decide to use Mantle as their main API, what will happen to the PC gaming industry? At least with PhysX, AMD owners can still play the games (without all the eye candy, of course), but what if future games only use Mantle? Where is the commitment to open standards?

        • bjm
        • 6 years ago

        While I agree that it'd be a good thing for a neutral party to step in and offer a more hardware-agnostic API, now isn't the time for one. Standards bodies are always slow to react to market changes and never make the first move; it would just hold things back. The Glide comparisons that everyone is making to Mantle make sense. Just like Glide, the market needed an API to push things forward until the forward direction stabilized. With hardware for PCs and consoles coming closer together, and with the PC market fumbling through tablet/smartphone convergence, a standards body would just be a waste of time.

        That being said, we still don’t know enough about the Mantle API to count it out in terms of it being that open standard. The Mantle diagram showed GCN still requiring a driver between the API and the hardware, indicating that there isn’t exactly a 1:1 relationship between the hardware instructions and the APIs. As such, it is possible for this API to be opened up to other architectures, such as those from nVidia or Qualcomm. And remember, AMD has already made some very open efforts for HSA. It’s not entirely out of the question that they would do the same for Mantle:

        [url=https://techreport.com/news/23102/amd-arm-team-up-to-push-heterogeneous-systems-architecture<]AMD, ARM team up to push Heterogeneous Systems Architecture[/url<]
        [url=https://techreport.com/review/22452/a-closer-look-at-the-new-amd/4<]AMD hopes to turn HSA into an open, industry-wide standard.[/url<]
        [url=https://techreport.com/news/23673/amd-teams-with-oracle-on-java-acceleration-qualcomm-on-hsa<]AMD teams with Oracle on Java acceleration, Qualcomm on HSA[/url<]

        And on the topic of Microsoft: at one point I would've put eggs in the Microsoft basket to do this, but that isn't the case anymore. With Microsoft's newfound focus on this "Devices & Services" strategy BS, they've shown that they are willing to alienate customers of their best on-premise software (Exchange, Windows Server, and SharePoint, plus killing TechNet and the Masters certs) for the sake of Services, and to alienate their hardware partners for the sake of Devices. If they are willing to stab the legs out from under their multi-billion dollar on-premise enterprise market for cloud services, then certainly they'll have no problem killing DirectX for the sake of the Xbox and the Microsoft Store. If Microsoft allows Xbox games to run on Windows 8, then I'd be wrong (and boy, would I love to be wrong on that).

        Anyway, getting back to AMD: with their hardware being in both next-gen consoles, they are as close to a standards body as we'll get today. If Mantle successfully takes off, AMD can do two things: 1) keep it to themselves and use it as technical leverage against their competitors, or 2) open up the API and use it as leverage from a marketing standpoint, thus emerging as a leader. I'm looking forward to their developer conference; I'm sure we'll find out more about where Mantle is going then.

          • Theolendras
          • 6 years ago

          There is a lot of wisdom in your comment. I agree, but I think AMD's financials might tempt them to go for a vendor lock-in scheme, even if they have traditionally been relatively open.

        • Theolendras
        • 6 years ago

        Can you say OpenGL…

        • Anonymous Coward
        • 6 years ago

        [quote<]Instead... we have the past repeating itself. The next step is for nVidia to announce a CUDA-based variant of their own.[/quote<] The past repeats itself in many ways. For example neutral 3rd party standards being irrelevant in the face of corporations going after their own self-interest. Anyway nVidia can announce APIs all they like, AMD has the consoles and a nice chunk of the Windows market.

      • cynan
      • 6 years ago

      This may be a very noob question, but do graphics APIs have to be all or nothing? Everyone is talking like a game will either use Mantle OR DirectX. Can't a graphics application be mostly DirectX 11, with particularly poorly optimized draw calls interfaced through Mantle?

      If this makes sense, then it seems like the best-case usage scenario for Mantle: code the game for DirectX by default, then go back and tweak some particularly inefficient effects (or add new ones) using Mantle. This all but guarantees that games will be made for all hardware (thanks to the DirectX path), but offers potentially increased efficiency for GCN hardware should the developer think it worth the effort (or be convinced by AMD that it is).

      Edit: In other words much more like PhysX than Glide. With any luck it will be more useful than PhysX has panned out to be thus far.

      • Anonymous Coward
      • 6 years ago

      [quote<]With AMD now having an audio stack, they can provide a full API that covers audio, graphics, and general processing. Combine this with AMD's CPU and GPU being inside every next-gen console, and you have yourself a company that is in a very powerful position.[/quote<] If PS4 and XB1 both use the same API as AMD can use on Windows, or hell OSX and Linux... that would be slick. They could maybe turn this into a situation where games are ported to AMD APIs and the OS is less of a concern.

    • Meadows
    • 6 years ago

    Is it professional to write “Xbone” in an official news post?

      • Sahrin
      • 6 years ago

      Yes.

        • Meadows
        • 6 years ago

        No.

          • lilbuddhaman
          • 6 years ago

          it's about as professional as idiotic words like "blog" and "tweet", so yes.

        • Wirko
        • 6 years ago

        ever(max((upvotes-downvotes)/length(post)))

      • sschaem
      • 6 years ago

      Should be capitalized ? XBOne

        • l33t-g4m3r
        • 6 years ago

        Should be XbOne to help pronunciation. EX BEE One.

      • chuckula
      • 6 years ago

      Of course not. TR should never have misspelled “Xboned” like that.

      • HisDivineOrder
      • 6 years ago

      Xbone is the name of the console. Anyone who tells you differently is an employee of MS or someone with a $500 preorder. The rest of the world knows…

      It’s Xbone.

        • Srsly_Bro
        • 6 years ago

        And after you buy it, MS just Xboned you.

        • willyolio
        • 6 years ago

        actually, i have a friend who IS an employee at microsoft gaming… she calls it x-bone, too.

          • echo_seven
          • 6 years ago

          Pronouncing it “cross-bone” actually wouldn’t be all that bad either. It would be strangely ironic.

      • Generic
      • 6 years ago

      That was my question too.

      PS4 & XB1: Great
      Playstation 4 & Xbone: Not so much

      I guess MS should stop trying to be creative with their console iterations. :)

        • Chrispy_
        • 6 years ago

        Nah, it’s a “pee-ess-four” and an “ex-bone”.
        I don’t think I’ve heard the word “playstation” uttered in full since the PS1…..

          • Anonymous Coward
          • 6 years ago

          No way! I always wondered if the “PS” stood for something, now I know.

          • jessterman21
          • 6 years ago

          Haha, and I always think it in the robot-voice from the commercials.

        • jihadjoe
        • 6 years ago

        It’s Xbone to me.
        The original Xbox (which was awesome, btw) is the only thing I’m gonna consider as the Xbox 1.

    • Sahrin
    • 6 years ago

    “R9 290X.” Who the hell came up with that name?

      • Meadows
      • 6 years ago

      They couldn’t have an “8000 series”, because [i<]someone else[/i<] already did.

        • internetsandman
        • 6 years ago

        ATI Radeon 8000 series, anyone?

        I can’t imagine that would create much confusion, I think the bigger reason is that they wanted to get out of that numbering scheme before they hit the 10,000 series

          • moose17145
          • 6 years ago

          They already had that issue after the 9000 series (9800 pro was still an epic videocard btw…)

          It was what led to the X800 series of cards, and then the X1800, and X1900, and then the X1950XTX… Thankfully XFX was only making GeForce cards at the time… otherwise we could have wound up with the XFX X1950XTX XXX Edition card…

            • derFunkenstein
            • 6 years ago

            Now THAT’S an Xbox!

            • Rakhmaninov3
            • 6 years ago

            Man I remember getting a 9700 Pro. That thing was ridic. Doom 3 was LIKE REALITY, MAN!

        • Sahrin
        • 6 years ago

        I know all the BS rationalizations; but I thought we were mostly past the stupidly complicated names.

        I think nVidia is on the right track with the Titan/x80/x70/x60 naming scheme.

        My thinking is: come up with a name for each generation (like literally call the RV1070 generation “Megamouth” or something) and then add a three digit number. “Radeon Megamouth 900.”

        (Obviously don’t really use Megamouth as that’s stupid).

          • demani
          • 6 years ago

          It’s not worse than half the other names that come out of these companies. At least it says “big and loud” which supports the fast chips and audio components- you just might be a marketing [i<]genius[/i<].

          • clone
          • 6 years ago

          it really isn’t a complicated name compared to the old AMD X800XT platinum edition or the Nvidia Geforce 6800 Ultra Extreme edition.

          • Liron
          • 6 years ago

          How would a consumer know if Megamouth 900 is better/newer or older/worse than Hyperbelly 900?

            • Sahrin
            • 6 years ago

            Megamouth is a kind of shark.

      • madmilk
      • 6 years ago

      It’s sort of similar to Intel’s i3/i5/i7 naming scheme, which isn’t too bad. At least with GPUs there’s much less feature segmentation going on.

      • FuturePastNow
      • 6 years ago

      Are you kidding? That just rolls off the tongue. /s

      • clone
      • 6 years ago

      they couldn’t call it a 290, Nvidia already took it.

      tbh I’m good with the name. R9 silicon, they were already up to 9590 with their cpu’s so rolling it over just made sense…..and then it’s “extreme”… cos’ they put an X on the end and it’s gotta be extreme.

      • Wirko
      • 6 years ago

      There are at least two theories, the nVidia theory, and the AMD theory.

      When nVidia ran out of 4-digit numbers, they invented the GTX 280. AMD copied that, but made a typo. Of course, nVidia’s number makes sense if X is read as 10, so GT 10280 > 9800 GT. But nVidia copied AMD’s earlier trick (the X300), too.

      On the other hand, 290 rhymes with Am29000, AMD’s once-successful processor. The number is AMD’s way of telling us, see, some of the engineers that have been with us since 1988 are still here, not all have been fired.

      • JustAnEngineer
      • 6 years ago

      ED-209?

      • jihadjoe
      • 6 years ago

      Throwback to the 2900XT? 512-bit and all…

    • Bensam123
    • 6 years ago

    Hopefully this will start a resurrection of hardware-accelerated audio in PC gaming. It's not just about the hardware, but also about premade effects that are easy for developers to use and implement, much like EAX was. Surround sound, and sound in general, was at its best in the early 00s before Windows killed DirectSound. I definitely look forward to where this is going.

    And yes, game developers can make the most amazing sound ever in games if they want to. But they don't spend the budget or the time on it, so we end up with the ridiculously shoddy sound we've had for the last decade.

    As I mentioned in the other thread, AMD should totally buy up Creative, suck up all the talent, tech, and patents, and make an audio division. There really isn't any competition in the audio market, nor has there been for a long time, and Creative has to be worth next to nothing now. It just needs someone to fund it and drive it in the right direction again.

    Sort of interesting that there isn't an equivalent of the 7950 here. Going by the numbering and how they scale, the 290X would be a 7970 and the 280X a 7870…

      • internetsandman
      • 6 years ago

      There's no competition in this area of the audio market*. Asus has the sound card market pretty much locked down with its Xonar devices and the audio solutions on its ROG boards (the audio on the Impact is of particular note here). Competition in the audio market, strangely enough, seems reserved for mainstream headphones (Skullcandy vs. Beats by Dre *shudders*) or for the niche high-end markets, the people dropping upwards of $2000 on home audio or headphone setups.

      I do agree though that games need better audio. I own a Sennheiser HD 650 and I’m pretty sure that none of the games I play even come close to exposing the kind of detail, power and positioning these are capable of. I would absolutely love in an FPS game to actually be able to pinpoint someone’s footsteps just based on sound alone, to train my ear to recognize the echo of someone walking behind a wall or around a corner behind me or beside me, but sadly, this audio tech just isn’t being worked on in games. That’s the kind of detail that hardware acceleration could benefit from, being able to accurately encode those differences in a digital stream and having competent enough hardware to convert that into an accurate analog waveform.

        • Bensam123
        • 6 years ago

        Yeah, for whatever form of 'competition' a Xonar card or Creative's solutions amount to nowadays. No one is actually trying to improve the audio; they just slap better and better DACs on it, try to soup up the SNR as much as possible, and maybe toss in an amplifier. That doesn't improve anything more than the quality of the sound, not what the sound itself is.

        I'm all for sound quality, but when there isn't anything else to it… it's really lackluster. Whereas good sound adds an entirely different experience to a game. While it still comes down to the developer doing the work, there are definitely bits that can be premade which they can simply implement (like EAX).

        • DarkMikaru
        • 6 years ago

        Sandman, I am right with you on that one! As an Xbox 360 gamer (occasionally PC), I've been playing games with my RCA 5.1 surround sound setup for years and was just recently reminded of how much I rely on audio positioning. Tired of the crappy stock Xbox 360 headset that breaks if you look at it wrong, I bought a pair of Turtle Beach X12s. And though the sound quality is great, along with the chat voice quality (I can't say enough good things about that), I was just completely spanked in Gears of War 3 Horde and King of the Hill modes because I couldn't pinpoint crap! Amazing how much we truly rely on our ears to hear enemies' and allies' positions in gaming. In Gears I can tell if someone is behind an opposing wall and plan accordingly. Are they running? Walking? With my 5.1 I can tell. I loved being able to finally hear my friends and have a headset that wasn't sliding off the side of my head every 5 seconds, but I just couldn't deal with having no positioning. It is just far too important to do without.

        For me, Gears was virtually unplayable with the X12s on. Sure, you could still hear footsteps, gunfire, war cries, grenades… but what good does it do you when you can't tell what direction they are coming from? Numerous times I'd walk right into a firefight and get mowed down because I just couldn't pinpoint where the heck anything was. lol Frustrating.

      • sschaem
      • 6 years ago

      No need to look too hard.
      3D audio is actually not that difficult. What is 'difficult' is making a universal API developers would use.
      That's more politics than R&D brainpower.

      But we know that Microsoft has no interest in making one for the Mac, iOS devices, Linux, the PS4, etc., so the API can't come from them. AMD is trying to step up but, from the data we have so far, is doing it all wrong.

      But it's late 2013, and how many games, for example, have true 3D positional sound with correct Doppler and environmental acoustics?

      Why can't someone make an API that uses DirectCompute or OpenCL or CUDA, etc., so I can buy a 7790 as a dedicated, stream-compute, HW-accelerated audio card? (2GB holds A LOT of audio; quick arithmetic below.)

      etc…

      Could it be that nobody cares? Because it would be so easy to leverage a secondary GPU for audio rendering. It's actually the perfect use for an old GPU card, and for that unused APU compute power.
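      As a quick sanity check on the "2GB holds A LOT of audio" point, here is the arithmetic for plain uncompressed PCM (my assumptions: 48 kHz, 16-bit, mono; nothing vendor-specific):

          #include <cstdio>

          int main() {
              const double bytes    = 2.0 * 1024 * 1024 * 1024; // 2 GiB of VRAM
              const double rate     = 48000.0;                  // samples per second
              const double sampleSz = 2.0;                      // bytes per 16-bit mono sample
              const double seconds  = bytes / (rate * sampleSz);
              std::printf("~%.1f hours of mono 48 kHz / 16-bit audio\n", seconds / 3600.0);
              return 0;
          }

      That works out to roughly six hours of mono source material before any compression, so sample storage really isn't the bottleneck.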

        • JohnC
        • 6 years ago

        You got it correct, Stephan. Not a lot of people care about positional sound in games, so game developers simply do not want to waste their time and money on it. Force-feeding proprietary, unnecessary hardware solutions (do we really NEED that with modern CPUs? I doubt it, regardless of AMD's propaganda) and proprietary APIs won't change anything. Creative learned their lesson; so will AMD.

          • internetsandman
          • 6 years ago

          I'm fairly certain not a lot of people care about in-game audio for the same reason they don't care about high-definition music or equipment: they simply haven't been exposed to it. Give a top-tier game like BF4 or the next COD (target them young) precise audio effects, to the extent that a ninja could play the game blindfolded and do well, and then market the hell out of it. Tell gamers that they can learn to tell precisely how close an enemy is, where he's coming from, and even which corner he might be around just based on in-game sound, and they just might pick up on it; they might start caring about that competitive edge, using audio to improve their ability and scores in game. Over time, this might get the ball rolling for other developers to pay attention, if the market starts demanding high-end, precision audio engineering in games.

            • JohnC
            • 6 years ago

            Perhaps, perhaps not. Having such precise audio positioning in multiplayer FPS games also brings many potential drawbacks – it would decrease the need for the mini-map and "spotting" mechanics and the various related "spotting" gadgets, which I personally find fun to use. It may also put people at an unfair disadvantage if they can't afford better-quality listening equipment, can't set the volume high enough to hear distant sounds, or have trouble perceiving certain frequencies. Writing cheating wallhacks/radars will also be much easier (or at least more diverse and harder to detect) – take a look at Asus' "Sonic Radar" program; it's not "precise" enough to be considered "cheating," but it gives you a good example of what's possible with more precise audio positioning.

            I already got enough accusations of using “hacks” (when I actually do play fair) as well as random admin kicks on BF3 servers, and I definitely do not want to increase their frequency.

            • Bensam123
            • 6 years ago

            Dude, you're trying to argue that increased perception and immersion are bad because other parts then get used less? That is just as backward as Infinity Ward stating destructible environments are bad because you'll just end up with a flat level, which is no fun…

            If I remember right you’ve admitted multiple times on here you’ve cheated in online games…

            If I had to read between the lines you could even say you don’t want better positional audio because it’ll make your wall hack less effective.

            • bcronce
            • 6 years ago

            Not sure if I should+1 troll or -1 not-troll

            • jihadjoe
            • 6 years ago

            When in doubt, I mod toward zero.

            • HisDivineOrder
            • 6 years ago

            You’re arguing to a crowd that goes as low resolution as possible to get the best framerate. You could try arguing that having finer detail can allow them to make out more detail and give them an edge in precise targeting or some nonsense, but framerate is king.

            And you’re telling me those kids running headphones that are either shoddy Razer 7.1 virtualized nonsense OR Skullcandy/Bose/Beats brand-obsessed headphones are going to say one day, “Oh, right. Quality of audio matters now. I’m going to go buy myself a great audio system to get something out of that superior audio that AMD alone is delivering to a tiny selection of titles that support their proprietary, cutting edge, not even available for anything other than the latest GCN 2.0 products (not even 79xx cards or their refresh in the R9 280 series)…”

            I just don’t think you’re going to convince that audience. I mean, if they really cared about audio or could be convinced to care, they’ve probably already invested in a receiver, right? Which means AMD or nVidia already owned them to begin with. As for the proprietary whatsits that AMD is pushing, it seems unlikely to be used by many games because it has such a tiny, tiny target audience (only AMD users with only the very latest in GCN 2.0 cards, which is already a tiny, tiny sliver of PC gamers, which is a tiny, tiny sliver of gamers in general) that it strikes me as really rather ridiculous to think that anyone who would use this… except those paid explicitly by AMD to do it.

            Which means Gaming Evolved titles. Among Gaming Evolved titles, how many of them used TressFX? That was one of the few times there was an AMD developed technical achievement and so far we’ve seen ONE game use it at the beginning of this year. That’s it.

            How much longer (and how often) do you think AMD can throw money at these developers to pay them to keep adding more and more tech to their games before their very limited reserves finally run out? Remember this is the company that can’t even get Crossfire for 4K or Crossfire for Eyefinity or even just Crossfire frame latency fully squared away because they just don’t have the resources to do it.

            So now you think they’re going to be able to pay a TON of developers to use this? Because if they don’t pay, it won’t get used. It’s a cost to add in and to benefit… such a small percentage of gamers? Hell, it’s not even a majority of AMD graphics users. That’s to say nothing of a majority of PC gamers not getting a benefit…

            • internetsandman
            • 6 years ago

            I’m gonna reply paragraph by paragraph here, and doing this on my phone will still take a while

            I highly doubt even the most complex in-game audio engineering system would have a noticeable impact on frame rate. Offload it to the CPU or keep it on the GPU; either way, if it hurts frame rates, it's not being implemented properly.

            I’m not advocating that AMD audio be the standard, same as Nvidia’s physx shouldn’t be considered a standard. I’m talking about an API that can be used across sound cards, graphics card co-processors, or the CPU itself. The whole point is opening up the concept to as many people as possible, exposing the market to better audio and seeing if the demand for it follows

            As I said above, I'm not in favor of anything proprietary or isolated, a special feature enabled only for certain people. I want as many people as possible to have access to it, and caring about audio quality doesn't mean they have to drop a thousand bucks or more on equipment. It could just mean they buy a somewhat decent sound card and perhaps a $200 headset; that's how I was exposed to better audio, and I took my own upgrade path from there as I could afford it. You don't have to spend a whole ton of money right away.

            As for AMD throwing money at developers: the developers already have massive teams working on eking out every last drop of performance and detail from graphics cards. They don't need more money; they can afford to have a few guys here or there join the audio engineering team and just re-allocate some of their funding. And in a top-tier game like BF4, I guarantee you that even if they spend more to get good audio, they'll make at least five times their return on investment from sales of the game and DLC. The whole point is to expose the gaming community to it and bring the demand to developers. These companies only respond to what they think the market wants; the market has been screaming for better visuals for years, so we get Crysis 3. But what would happen if the market decided that it wants better audio? You can bet developers would start to pay attention.

            • Bensam123
            • 6 years ago

            Yes.

            Those that have know the difference. As I gave an example of BF3 compared to CoD, you can do this test with your console friends.

            • moose17145
            • 6 years ago

            I remember when BF2 came out and fully supported the X-Fi and all the neat things it was capable of… OMG, that game sounded F-ing fantastic! Seriously, most games today still don't sound even a third as good as BF2 did way back in the day. Truly, sound quality in games has been going downhill more and more, and it has finally reached a point where it is SOO bad that people are FINALLY starting to complain about it. Personally, I have been complaining about it ever since Vista came out and MS dropped hardware-accelerated audio support.

            • cynan
            • 6 years ago

            Alternatively, there are some people that don’t care about/for positional audio because they [i<]have[/i<] been exposed to high definition audio. Many of the simulated 3D effects software that's been available on most PCs has been pretty lackluster. (eg, sound source is further away? just lower the volume and crank up the reverb!). I personally would much rather game with a decent stereo DAC and headphones than most of the 3D sound effects I've been exposed to. But no point in not staying optimistic!

            • internetsandman
            • 6 years ago

            I prefer gaming with headphones as well, but what’s annoying is when high definition audio (especially files recorded with binaural microphones) can recreate a three dimensional sound field so convincingly through headphones, where you can roughly determine distance and location, and hear the echo against walls or around corners in some demonstrations, but they can’t do this in videogames. This is what I mean when I talk about quality sound engineers making a great audio engine matched up with a quality DAC and even just a decent pair of headphones, it could provide seriously immersive sounds in a game if done correctly, and that would be the exposure the general market needs to high quality audio in a videogame to get the ball rolling for other developers.

            • WaltC
            • 6 years ago

            Take a listen to this…with your stereo headphones on…blew me away…

            [url<]http://www.youtube.com/watch?v=nKnhcsRTNME[/url<] I haven't been this impressed with audio since...ever, frankly.

          • Bensam123
          • 6 years ago

          I disagree completely. People can't want what they don't know exists or haven't experienced. The reactions you get from people who have only ever played CoD and then play Battlefield 3 are astounding. They definitely throw a compliment or two in there about the sound. Having played all manner of games, including during the heyday of audio, I'm impressed by BF3's implementation as well.

          If there is a choice between something and nothing, I will most definitely pick something. It's a start; it's a place to go. Maybe if other hardware or software manufacturers see that there is interest in such things, it will spark a renewed effort on the audio front.

          I really don’t care if it’s proprietary or hardware accelerated as long as games start finally doing good sound again. Right now almost all developers simply axe good audio in favor of faster release times or more eye candy. It’s not right.

          Creative learned their lesson when a multi-billion dollar company said 'FU, because you won't pay us fees' and axed DirectSound. The lesson learned there is that big companies are spiteful when pride is on the line, regardless of how much it hurts their consumers.

          • Theolendras
          • 6 years ago

          In time maybe, but they stand a chance for a whole generation of consoles.

        • mesyn191
        • 6 years ago

        How are they doing it wrong? FMOD and Wwise already support it, BTW.

        Pretty much everyone already uses those two audio middleware packages, so I'd be surprised if there weren't quite a few games that support TA.

        If a developer wants to get the most out of TA then they’ll have to mess with the TA API…which ATM we have no knowledge of at all and won’t until Nov. at the earliest AFAIK.

        • HunterZ
        • 6 years ago

        Every PC FPS I’ve played in the last 15 years has had 3D positional audio. I’ve been using the same Logitech Z-5500d speakers for almost a decade, experiencing 5.1 sound via Creative cards, onboard audio, and now a Xonar DX.

      • bcronce
      • 6 years ago

      My old $20 Aureal 3D soundcard was still the best sound experience that I ever had. Then Creative sued them in bad-faith. Creative never directly won, but they kept the lawyer fees running long enough that Aureal went out of business.

      After that, I vowed to never purchase Creative again.

      • bcronce
      • 6 years ago

      The entire point of “hardware accelerated” audio was to not consume 15% of your CPU. Good news, modern CPUs don’t have that issue.

      There is no benefit to hardware-accelerated sound. Anything you can do in hardware, you can do in software; it just comes at a CPU cost, which is really low nowadays.

      Need something easy for developers? Get an audio engine. If there is no great audio engine out there, then maybe someone should make one.

        • chuckula
        • 6 years ago

        Bronce… arguing with Bensam123 about his new-found religion of AMD-sound is like arguing with Adisor about his newfound religion of REGISTERS.

        Neither one of these fanboys ever made a peep about either issue until they had a vision from [s<]God[/s<] [u<]AMD/Apple marketing[/u<] that made these things the most important topics in human history. Oh, and don't even try to question Bensam about how... if GCN 2.0 is so miraculous... that AMD still had to come up with proprietary silicon to do the number crunching for sound processing that has been old-hat for CPUs and GPUs for over a decade. Bensam123 is also likely far too young to remember the 90's and how all this stuff that he claims AMD derived from thin air was around a *long* time before he started claiming that everyone else except for him is an idiot.

          • HisDivineOrder
          • 6 years ago

          It really does feel like the old guard who knew the horrid times of Glide and poor 3d audio support even during the glory days of A3D 2.1 sound are trying to tell people who weren’t around back in the day why the things AMD is arguing for (again?) were a bad idea way back and were rightly sent packing.

          Perhaps people are just so caught up in the fanboy wars they can’t see the horrible ways this is going to go wrong. No wonder they brought those hardware peeps out there on their dime to Hawaii. Do this not in person in Hawaii with people suffering from jetlag, loud music, and pina coladas… maybe you’d have a lot more doubt and questioning and real thought about all the ramifications of this.

          Hell, this is the softest of the soft launches and has Anand even brought up his previously-customary complaint of, “We hate soft launches around here” like he seemed to be doing a lot with AMD? Seems like the AMD Center and Hawaii have “softened” his take on “soft launches.”

            • Bensam123
            • 6 years ago

            I'm not sure what you remember, but game audio never really took off till EAX 4.0 and 5.0 hit. Those were the 'glory days'. There seems to be too much hate here from the 'old guard' who are still looking to crucify the withered husk of Creative. Now AMD is suggesting something remotely similar to EAX, and people are up in a tizzy and would rather pursue a policy of 'scorched earth' than see audio done right again (in any form) in games.

            Believe it or not, you aren’t the only one over 20 here.

            You can remember things your way, I'll remember them mine. I definitely know that the state audio is in, and has been in, in video games is pretty horrid. Anything is better than this.

        • Bensam123
        • 6 years ago

        If you read the sentence after it, you'd see I'm just as interested, if not more so, in the API and the premade effects that come alongside it that are easy for game developers to implement, just like EAX.

        It's kinda sad that people seem to think EAX was just hardware acceleration and not a complete audio SDK with premade effects for game development.

        “Need something easy for developers? Get an audio engine. If there is no great audio engine out there, then maybe someone should make one.”

        That's essentially what AMD is doing and what EAX was. It's not the engine, but it's definitely something developers can use with their in-house audio. EAX was pretty much the equivalent of DirectX (the graphical side). I hope that's what AMD is doing, and it sounds exactly like what you want; you're just stuck on the whole hardware-accelerated bit and don't see anything past it.

        • mesyn191
        • 6 years ago

        Modern CPUs don't have a performance issue with current in-game audio effects because the developers aren't doing anything complex.

        You start trying to do realistic reverb and accurate, high-quality sound positioning and you will chew up a large portion of even a modern CPU's processing time.
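        A back-of-the-envelope sketch of why that is (the sample rate, impulse-response length, and voice count below are assumed figures, not measurements): naive time-domain convolution reverb costs roughly sampleRate x irLength x voices multiply-accumulates per second.

            #include <cstdio>

            int main() {
                const double sampleRate = 48000.0; // Hz
                const double irSeconds  = 1.5;     // assumed reverb impulse-response length
                const int    voices     = 128;     // assumed number of active sound sources

                const double macsPerSec =
                    sampleRate * (sampleRate * irSeconds) * voices;
                std::printf("~%.0f GMAC/s for naive convolution\n", macsPerSec / 1e9);
                // FFT-based (partitioned) convolution cuts this dramatically, which is
                // why dedicated DSP hardware or smarter algorithms matter here.
                return 0;
            }

        Even granting a couple of orders of magnitude back to FFT-based convolution, that is not a freebie next to everything else a game is already doing on the CPU.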

          • Bensam123
          • 6 years ago

          This too. This is another point: how can you gauge how taxing audio is when game developers don't utilize it in the slightest? It's like comparing 2D graphics to 3D and saying you don't need a graphics card because 2D graphics can be done on your CPU.

          • Krogoth
          • 6 years ago

          Not really. Hardware-accelerated audio made sense back when CPUs were still single-core and operating around ~1GHz or less (the PIII and Athlon era). Not so much anymore, as modern chips from AMD/Intel can handle any fancy DSP effects without too much trouble and have plenty of cores to spare.

          The professional audio industry has moved to pure software solutions, which makes discrete sound cards nothing more than overglorified DACs.

          The problem with 3D sound is that most of the PC gaming demographic does not possess the hardware (quality headphones and multi-channel speaker systems) for it to be noticeable. The consequence is that game developers have little or no economic incentive to pursue 3D sound beyond the basics. It is kinda like where 3D Vision/Eyefinity are at the moment.

            • mesyn191
            • 6 years ago

            Well the audio block in the Xb1 isn’t nearly as powerful as what AMD has put into these new/refreshed GPU’s and it has been mentioned to be roughly equivalent to 100GFLOPs of performance.

            It is also known that the original Xb1 dev kits tried to emulate the audio block using an extra 8 Jaguar cores and that wasn’t enough performance to pull it off.

            Go search the B3D forums for posts by user bkilian if you want more info. He was one of the hardware developers who worked on the Xb1 audio block.

            And the pro audio guys have not moved to pure software for many things which is why stuff like this still sells for around $1k used: [url<]http://www.ebay.com/itm/Digigram-VX1222e-PCI-Express-sound-card-w-BOB12-/151116422496?pt=US_Computer_Recording_Interfaces&hash=item232f3d9d60[/url<]

            • Krogoth
            • 6 years ago

            That card is just a fancy DAC with an on-board MPEG-2 hardware decoder. The CPU still does all of the DSP work, and any modern desktop chip can handle its workload without too much fuss.

            The Xbox One's CPU is based on an AMD ULV design that is meant to compete against Intel's Atom (Cedarview/Centerton), which isn't nearly as powerful as a desktop CPU. It was chosen for ultra-low power consumption and efficiency, not raw performance (Microsoft doesn't want to repeat the thermal problems that were commonplace on the first generation of 360s).

            Give me an example that manages to tax a modern desktop CPU. I’ll be generous, let’s go with the lower-end Core i3 – A8/FX-6xxx/4xxx units.

            • mesyn191
            • 6 years ago

            I guess a fancy DAC is all that is needed to beat a modern CPU for certain workloads, then. Being in denial about it doesn't make it any less true. It certainly doesn't explain why such hardware not only exists but is still used frequently today if it's so useless and yet so expensive.

            Jaguar's single-threaded performance is mediocre, but 8 of them is no joke, and it's quite facetious of you to try and downplay them. Total performance would be on par with a mid- to high-end x86 CPU.

            In-game binaural positional audio with realistic, accurate convolution reverb and a few hundred channels of high-quality sound to mix would meet your requirements without too much trouble. If you allow the use of raycasting, then even the most powerful CPUs today can be brought to their knees.

            • Krogoth
            • 6 years ago

            DAC = digital-to-analog converter. The example card doesn't have any hardware (ASIC chips) handling the DSP functionality that was normally found on hardware-accelerated audio cards back in the day. That mattered when DSP processing took considerable CPU power (20-50% overhead with P-III and Athlon era stuff). This is no longer the case with modern CPUs.

            The reason for having a discrete audio card is that it is easier to reduce EMI noise and crosstalk by isolating the circuitry from the motherboard. That is the biggest reason why integrated audio gets a bad reputation on motherboards that use subpar components and shielding.

            Audio engineers and professionals prefer to use external DACs, because it is much easier to combat the problems of EMI and crosstalk when you isolate the circuitry in a separate chassis. The raw digital data is fed to the DACs via serial/USB, which isn't affected by EMI.

            Jaguar is only somewhat impressive on a power-efficiency and mass-production scale (the main reasons why Microsoft chose it). As for raw CPU performance, it is completely dwarfed by the aging Bloomfield and the somewhat newer Bulldozer. Newer silicon completely blows it out of the water. I'm not even going into the inherent problems with parallelism, which further put a damper on Jaguar's theoretical potential.

            The point is that DSP processing isn't that demanding anymore. This is what ultimately killed hardware-accelerated audio in the mainstream market. The industry moved with the times and opted for pure software DSP solutions, using hardware solutions only where they make sense (ultra-portable audio editing devices for laptops).

            • mesyn191
            • 6 years ago

            The linked card has a fairly powerful DSP too, along with the merely fancy DAC. They're still used today because, regardless of how fast modern CPUs are, you can make a DSP run certain tasks faster than a CPU/GPGPU could within the same power/cost budget.

            You'll never get rid of DSPs because of that. So it's really strange to hear you say "DSP processing isn't that demanding anymore." That is a general statement, and in reality it depends on what your workload is. Software solutions do exist now, but they're not the end-all be-all you make them out to be, nor are they necessarily practical for a console or PC cost/power budget.

            If you want to do something simple, sure, a DSP might not make sense on a modern console/PC. If you want to do all the stuff that TrueAudio and the Xb1 audio block are doing without blowing out your power/cost budget, you’re going to end up with some sort of DSP plus fixed-function hardware.

            Here are some further comments from bkilian on part of the Xb1’s audio block:

            “the chip can do 512 “high quality” polyphase SRC (resampling factor from 1/4 to 4 I think), 512 3 band EQ – comprised of 3 biquad filters, 512 Compressors with a hard knee response, 2500 state variable filters + volume changes, and 512 WMA like decodes per audio frame. Now you’ve already said just the state variable filters a would take 90% of one of your high end DSPs, which when included alone in a card costs about $250, you also estimated the EQ/CMP at 48% of a DSP, which seems a little low, but you’re the expert. We’re already at a minimum of a two DSP card (~$500) and we haven’t even touched the SRC. Unfortunately, I don’t know the number of taps on the FIR filter, but you could use the standard mid-quality 64 taps in your calculation if it makes you feel better.

            The mix buffers will mix >4000 channels per frame, at 28 bit. The rest of the pipeline is 24 bit.

            That’s just Shape. One block out of 5 on the sound chip (and sadly, as far as I know, the only one easily accessible to developers)

            For the non-audio folks here, mixing, filtering and SRC are a huge majority of the operations that an audio engine performs. Things that are missing here are Convolutions, FFTs and other custom DSP effects, but whether you use those or not, you will always be doing a lot of mixing, filtering and SRC.”

            We already know that TA is more advanced than the Xb1 audio block, which is some serious hardware. This isn’t something you can just handwave away and come off looking reasonable.
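
            To give a rough sense of scale for one of those numbers, here’s a minimal sketch (mine, not bkilian’s or AMD’s) of a single biquad filter section, the building block of the 3-band EQs he mentions. The per-sample cost of about five multiplies and four adds is the usual figure for the direct form II transposed structure, so 512 EQs of 3 biquads each at 48 kHz already work out to roughly 370 million multiplies per second before touching the SRC, compressors, state-variable filters, or the >4000-channel mix:

              /* One biquad section in direct form II transposed, with a0
               * normalized to 1. Roughly 5 multiplies + 4 adds per sample. */
              #include <stdio.h>

              typedef struct {
                  float b0, b1, b2, a1, a2;  /* filter coefficients */
                  float z1, z2;              /* delay-line state    */
              } biquad;

              static float biquad_process(biquad *f, float x)
              {
                  float y = f->b0 * x + f->z1;
                  f->z1 = f->b1 * x - f->a1 * y + f->z2;
                  f->z2 = f->b2 * x - f->a2 * y;
                  return y;
              }

              int main(void)
              {
                  biquad f = { 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f }; /* pass-through */
                  float out = 0.0f;

                  for (int i = 0; i < 48000; i++)   /* one second of one voice */
                      out = biquad_process(&f, 1.0f);
                  printf("last sample: %f\n", out);

                  double eq_biquads = 512.0 * 3.0;  /* 512 3-band EQs, 3 biquads each */
                  double muls_per_s = eq_biquads * 48000.0 * 5.0;
                  printf("EQ stage alone: ~%.0f million multiplies/s\n", muls_per_s / 1e6);
                  return 0;
              }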

          • Anonymous Coward
          • 6 years ago

          Maybe so, but let’s see just how powerful the dedicated hardware is before talking about anything tricky…

            • mesyn191
            • 6 years ago

            See my reply to Krogoth.

        • Theolendras
        • 6 years ago

        I’m not with you on this. True, the cost is relatively marginal for a good CPU, but they don’t do anything fancy with sound anymore either. Try making convincing directional audio in a 64-player match or a big MMO, with realistic positioning, Doppler effects on every moving object, muffled sound from a guy moving around in a closed room, objects that echo differently when they fall to the ground, a full room sounding less echoey than an empty one, and so on; the hit on the CPU might not be worth it. Having untapped dedicated sound hardware could get us closer to that. If I understood it right, they enable not just dedicated fixed-function blocks but an entirely programmable pipeline using the GPU as well.
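
        For a rough idea of what the per-source bookkeeping looks like, here’s a tiny sketch (mine, not from any shipping engine) of the Doppler and distance-attenuation math for one moving source; each calculation is cheap on its own, and the cost comes from repeating it, plus occlusion raycasts and per-room reverb estimates, for hundreds of sources every frame:

          /* Per-source Doppler pitch factor and simple distance attenuation.
           * Velocities are projected onto the listener->source axis
           * (positive = closing in on each other). */
          #include <stdio.h>

          #define SPEED_OF_SOUND 343.0f  /* m/s in air */

          static float doppler_factor(float v_listener, float v_source)
          {
              /* >1 raises pitch (approaching), <1 lowers it (receding). */
              return (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND - v_source);
          }

          static float distance_gain(float dist, float ref_dist)
          {
              /* Inverse-distance rolloff, clamped inside the reference distance. */
              return ref_dist / (dist > ref_dist ? dist : ref_dist);
          }

          int main(void)
          {
              float pitch = doppler_factor(0.0f, 20.0f); /* source approaching at 20 m/s */
              float gain  = distance_gain(35.0f, 1.0f);  /* source 35 m away             */
              printf("pitch x%.3f, gain %.3f\n", pitch, gain);
              return 0;
          }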

        Since AMD is providing Microsoft a top-to-bottom SoC solution, dedicated hardware there makes sense, as it is more efficient and its presence is guaranteed. For AMD, bringing it back to the PC costs very little, and for developers already working on the Xbone, adopting it is also quite simple.

        Another part of me thinks that, this time around, the low-power SoC might also be a think-ahead move to get into the mobile market later on. In a few years, a successor to the PS Vita could in fact simply be a heavily shrunk PS4. Then efficiency will be even more important.

      • Bensam123
      • 6 years ago

      Taking this a bit further, we may even see AMD-branded sound cards appearing on the market in the future, or AMD solutions through third-party manufacturers like Asus or Gigabyte (the usual candidates for graphics cards).

      • HunterZ
      • 6 years ago

      Hardware-accelerated audio is an obsolete concept. Modern CPUs are so much more powerful now than they were in the 1990s that it barely takes any CPU to mix dozens of sounds and apply all sorts of effects. This is why Microsoft was completely comfortable with smashing hardware-accelerated audio support by creating a new audio stack for Windows Vista and newer.

      In addition, modern game sound APIs (e.g., FMOD, OpenAL) offer way more effects capability in software than those old Creative cards could provide in hardware; it’s just up to the game developers to take advantage of them.

      As a result of the above, SNR (which is largely determined by DAC quality, circuit design, RF shielding, etc.) really is pretty much the last thing left to differentiate one piece of PC sound hardware from another.

      There really just isn’t anything else meaningful to compete over in the 21st century, unless you’re interested in niche things like pushing realtime-encoded AC3/DTS 5.1 over TOSLINK/coax/HDMI (which most audiophiles aren’t, as they would rather skip the data compression used by those technologies), or using HRTF modeling to downmix 5.1 to stereo headphones, or retaining positional audio (and/or EAX effects) in pre-Vista games, or turning your PC into a karaoke machine, etc.
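
      For what it’s worth, a plain software mix loop really is that cheap; here’s a minimal sketch (mine, not tied to FMOD or OpenAL) showing that 32 voices at 48 kHz cost only about 1.5 million multiply-adds per second per output channel, which is noise to a multi-GHz CPU as long as nothing heavier is layered on top:

        /* Naive software mixer: scale each voice by its gain and accumulate. */
        #include <stdio.h>
        #include <string.h>

        #define VOICES    32
        #define FRAME_LEN 480   /* 10 ms of mono audio at 48 kHz */

        int main(void)
        {
            static float voice[VOICES][FRAME_LEN]; /* decoded source audio */
            static float gain[VOICES];             /* per-voice volume     */
            static float mix[FRAME_LEN];           /* output buffer        */

            for (int v = 0; v < VOICES; v++) {     /* placeholder input    */
                gain[v] = 1.0f / VOICES;
                for (int i = 0; i < FRAME_LEN; i++)
                    voice[v][i] = 0.25f;
            }

            /* The actual mix: one multiply-add per voice per sample. */
            memset(mix, 0, sizeof(mix));
            for (int v = 0; v < VOICES; v++)
                for (int i = 0; i < FRAME_LEN; i++)
                    mix[i] += gain[v] * voice[v][i];

            printf("mixed sample 0 = %f\n", mix[0]); /* 32 * (0.25/32) = 0.25 */
            return 0;
        }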

        • Bensam123
        • 6 years ago

        Read more of the in-depth responses, such as the one comparing current in-game audio tech to 2D graphics before 3D graphics demanded a dedicated processor.

        When there are no real effects in games, there is nothing to baseline against. You can say “sound uses hardly any computational power,” but when no effects are being applied, that’s pretty obvious; HRTF is the most notable example of something that’s been absent from games since EAX went under.

    • brucethemoose
    • 6 years ago

    Today is a very interesting day indeed… the Mantle API comes out of nowhere with EA/BF4 leading the charge, Nvidia just announced support for SteamOS on its Facebook page, so you know how MS is feeling about all this… Carmack is even tweeting about it.

    I’m breaking out the popcorn.

      • JohnC
      • 6 years ago

      It didn’t “come out of nowhere” – AMD has wanted to do this for a long time and dropped the appropriate hints in the past:
      [url<]http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1[/url<]

      • HisDivineOrder
      • 6 years ago

      I’m curious to hear what Nvidia thinks of Mantle. Mostly, I’m curious to see if they slam it as hard, and in the same fashion, as AMD always slams PhysX.

      I suspect the commentary will be the same, which will amuse me greatly. In fact, everything AMD announced struck me as what I would have imagined nVidia to do.

      Glide 2.0
      A3D ReDubbed
      Review tricks (All our refreshed line is cheaper, but ah ha ha ha, you can preorder the new card in early October yet you get no info today on what that card can really do or what it really is or even how much it costs. You’ll just have to trust us that it’s amaaaazing. Oh, BF4WTF! Enjoy the free trip to Hawaii!)

      And it’s the very softest of soft launches, yet you don’t hear the customary complaining from Anand about “soft launches” being reprehensible, or from TR that requiring certain parts to be reviewed separately, or piecemealing info the way a company wants, is reprehensible. Why?

      Because they’re in frickin’ Hawaii and everything is fine-okie-day in Hawaii, right, boyos?

        • Klimax
        • 6 years ago

        I guess they are laughing hard. Either it fails and nothing changes, or it catches on and locks their competitor into a prison of its own GCN architecture, giving Nvidia a good window of opportunity to build a performance advantage without the threat of AMD catching up.

    • MadManOriginal
    • 6 years ago

    Xbone!

      • clone
      • 6 years ago

      Saw that too. I wonder if that’s some kind of inside joke, given the improbability of missing the “X” key and instead hitting the “N” and “E” keys, which aren’t even in close proximity to one another.

        • MadManOriginal
        • 6 years ago

        I’m pretty sure you totally miss the immature humor contained within the XBONE!

        • Meadows
        • 6 years ago

        It’s “XBox One” contracted.
