AMD’s Polaris-powered Radeon RX 480 will ring in at $199

This past weekend, we spent some time in Macau learning about AMD's next-generation Polaris architecture. The vast majority of what we learned there remains under wraps for now, but the company is making one big graphics reveal today at Computex. The Radeon RX 480, AMD's first "VR Premium Ready" Polaris graphics card, will have a suggested price of $199. Here are some of its basic specs, straight from AMD:

  Radeon RX 480
  GCN compute units: 36
  Stream processors: 2304
  Memory interface: 256-bit
  Memory type: GDDR5
  GDDR5 transfer rate: 8 GT/s
  Memory size: 4GB or 8GB
  Peak single-precision compute performance: over 5 TFLOPS
  Board power: 150W

No, there aren't clock speeds in the above chart—AMD isn't disclosing that information at this time. We'd love to be able to tell you more about this card today, but further details of the RX 480 and the rest of the Polaris lineup will need to wait until the card's projected June 29 launch date. We can say that the RX 480 looks like it could make quite the splash at a critical price point. AMD says its internal research shows that many people consider the $100-$300 range a sweet spot for graphics card upgrades, and it's been quite some time since we've seen major leaps in performance in that segment. For now, let the rampant speculation begin.

Comments closed
    • anotherengineer
    • 4 years ago

    I was thinking last night, it’s too bad the R7 370 (a rebranded 7850) didn’t get GCN upgrades and FreeSync, because once all the new 14nm cards are out they should be worth about $80-$100. Which is what they should be selling for right now.

    • ronch
    • 4 years ago

    Many people think AMD will make a clean sweep with the 480 at $200, but can we jump to that conclusion without seeing what Nvidia will offer at that price point later on?

    Also, it seems to me Polaris only has one die at this time with the 480, while Pascal already has GP100, GP102, and GP104 dies. Yes, Polaris 11 and Vega are also in the works, but Nvidia surely also has new mobile chips coming out. I hope AMD refreshes its entire lineup with Polaris, not just rolls out one or two SKUs based on it while the rest of the lineup goes through another rebrand.

    • TheMonkeyKing
    • 4 years ago

    Okay, so it hits the shelves and some [leaked] press shows it to be an adequate upgrade for those still sitting on the 7900 series or an Nvidia 980.

    Where are you going to find one in 3Q16 or even 4Q16 for $199? I’m thinking the price, considering supply, will more likely be $250 for reference and perhaps(?) $280 for partner-modified cards.

      • TheMonkeyKing
      • 4 years ago

      So here is my thinking too. People upgrading their TV experience from a 1080p set to a slimmer 4K TV will be hit with things like HDCP 2.2, HDMI 2.0a, HDR, etc. They’ll understand that their little HTPC was fine for pushing pixels to the 1080p set but will struggle at 4K and really suck at interpolating 720p to a 4K screen.

      So if this GPU can do what it says it can, then this is the card to put in those HTPCs. VR is one thing, but that’s still cutting-edge. And with phones using Google Cardboard, it’s not really that big of a selling point.

    • bfar
    • 4 years ago

    While I would’ve loved a mid-to-high-end part from AMD this summer, there’s no denying this new product is going to be very compelling at this price point.

    Currency fluctuations have seriously driven up video card prices here in Europe and the UK, leaving Nvidia’s GTX 1080 perched up at halo price points (£619!!!). If AMD’s card comes in at £165 and €200, even at GTX 980 performance levels it puts Nvidia under incredible pressure to drop prices.

    • NeelyCam
    • 4 years ago

    Begun the GPU Wars have

    • Elohim
    • 4 years ago

    Am I the only one who saw the front page photo for this and thought “Resistance is futile…” ?

    https://techreport.com/thumbs.x/lgnew/2016_6_1_AMDs_Polarispowered_Radeon_RX_480_will_ring_in_at_199/polarisfeat.jpg

      • Peter.Parker
      • 4 years ago

      Probably

    • jihadjoe
    • 4 years ago

    Back of the napkin IPC maths time!

    RX 480 5500Gflops / (2304SPs * 1.266GHz) = 1.88Gflops/(SPs*GHz)
    GTX1080 8900Gflops / (2560CCs * 1.733GHz) = 2.00Gflops/(CC*GHz)
    GTX1070 6500Gflops / (1920CCs * 1.683GHz) = 2.01Gflops/(CC*GHz)

    So very similar per-shader performance between Polaris and Pascal, with Nvidia’s main advantage being Pascal’s path-length optimizations that allow it to clock higher.
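
    For reference, the napkin math above is just the usual peak-FP32 identity: each shader retires at most one FMA (two flops) per clock. A minimal Python sketch using the clocks and shader counts quoted in this thread (none of which are official for the RX 480 at this point):

    ```python
    # Peak FP32 throughput: shaders x clock x 2 flops (one FMA) per clock.
    def peak_tflops(shaders, clock_ghz, flops_per_clock=2):
        return shaders * clock_ghz * flops_per_clock / 1000.0

    cards = {
        "RX 480 (rumored 1266 MHz)": (2304, 1.266),
        "GTX 1080 (1733 MHz boost)": (2560, 1.733),
        "GTX 1070 (1683 MHz boost)": (1920, 1.683),
    }
    for name, (sps, ghz) in cards.items():
        print(f"{name}: {peak_tflops(sps, ghz):.2f} TFLOPS")
    # RX 480: 5.83, GTX 1080: 8.87, GTX 1070: 6.46
    ```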

      • 0x800300AF
      • 4 years ago

      A little more accurate:
      RX 480: 5834 GFlops / (2304 x 1266 MHz) = 2.000093
      GTX 1070: 6462 GFlops / (1920 x 1683 MHz) = 1.999777
      GTX 1080: 8872 GFlops / (2560 x 1733 MHz) = 1.999783

        • jihadjoe
        • 4 years ago

        Oh the 480 was actually 5.8Tflops? Sweet!

        And wow two competing architectures from two different companies really did end up with near-identical performance per SP per clock.

          • chuckula
          • 4 years ago

          “Oh the 480 was actually 5.8Tflops?”

          Not according to what Raj said verbatim on stage at the event (5.5 TFLOPS), or to any written documentation from AMD (which merely says >5 TFLOPS).

          • NoOne ButMe
          • 4 years ago

          The 5.8 TFLOPS figure is derived from the 1266 MHz clock floating around. If the RX 480 is 5.5 TFLOPS, it will be clocked right under 1.2 GHz.

          Based on leaked scores, I think the 8GB version may be clocked at 1266 MHz and the 4GB at 1193 MHz. The other thought is that 1266 MHz was just a fake clock.
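
          Running the same identity backwards gives the clocks being described here; a small sketch, assuming the rumored 2304 shaders:

          ```python
          # Core clock implied by a peak-FP32 target, at 2 flops/shader/clock.
          def implied_clock_mhz(tflops, shaders=2304):
              return tflops * 1e12 / (2 * shaders) / 1e6

          for t in (5.5, 5.8):
              print(f"{t} TFLOPS -> {implied_clock_mhz(t):.0f} MHz")
          # 5.5 TFLOPS -> ~1194 MHz (right under 1.2 GHz)
          # 5.8 TFLOPS -> ~1259 MHz (5.8 is 2304 x 1266 MHz rounded, hence ~1259 here)
          ```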

            • Northtag
            • 4 years ago

            With 2,304 shaders it can’t be over 1.3GHz or else the AMD slide would have pointed to > 6 TFLOPS.

            • NoOne ButMe
            • 4 years ago

            6 is greater than 5, right?

            AMD was surprised by GP104 clocks. AMD is trying to open the kimono as much as possible without actually revealing anything, to force Nvidia’s hand. If the 1060 has over 5 TFLOPS, the RX 480 jumping over 6 TFLOPS is possible, I think. At least for the 8GB version.

      • Zizy
      • 4 years ago

      Newsflash: all these architectures do 2 flops/SP/Hz.
      So either the 480 has more flops than stated, or lower clocks.

    • Billstevens
    • 4 years ago

    Another minor issue with the 1060 will be DX12. It is very clear from 1070 and 1080 benchmarks that Nvidia still sucks at DX12. The 1070 and 1080 just happen to be so fast that they are competitive/dominant regardless.

    The 1060, however, is likely to get trounced in DX12 by every single AMD card available, all the way down to the RX 470… With more and more games offering DX12 rendering that doesn’t suck, that’s giving up a lot of performance.

    • jokinin
    • 4 years ago

    If this has about twice the performance of my aging Radeon HD 7870, and an 8GB version with custom cooling is <=250€ VAT included, count me in, please.

    • maxxcool
    • 4 years ago

    Awful lot of bum-hurt for cards we can’t even review yet :P.

    I’m holding scorn and derision until I see the metrics… *IF* we get to see a review on TR (a whole NEW concern).

    • willyolio
    • 4 years ago

    You know, it says “board power” = 150 watts, but that’s just the PCIe slot plus the 6-pin connector maximum (75W each).

    I doubt the board will operate at exactly 150 watts with no headroom to spare; that’s just bad design. I wonder what the actual power draw is…

    • USAFTW
    • 4 years ago

    Soooooo… when is the full 2560-ALU card inbound? That’s what I’ve been waiting for.
    Or are the yields so s**t that, even on such a small die and with such pressure from competition, they can’t get good chips out of it?

      • NoOne ButMe
      • 4 years ago

      I guess we’ll find out if Apple refreshes the iMac with a 2560-shader Polaris.

    • NeelyCam
    • 4 years ago

    Sooo…. When can I buy a PS4 with 14/16nm silicon in it…?

      • tipoo
      • 4 years ago

      Sony’s E3 will probably have some news on when to expect it: June 13, 6PM PST / 9PM EST.

    • Thbbft
    • 4 years ago

    It’s a leapfrog move to take the lion’s share of the brand-new VR AIB market.

    If developers see AMD taking 80% of the VR market in mobile and desktops, who is going to prioritize Nvidia’s architecture?

    Add in the near certainty that the upcoming PS4 VR and Xbox One VR will be sporting Polaris architecture, plus the upcoming Polaris-equipped laptops and netbooks, and the lion’s share of developers will be prioritizing Polaris and async, which will subsequently be applied to their non-VR games. It leaves Nvidia with no way to effectively compete in the VR or mainstream PC gaming space for at least a year.

    Projecting the RX 480 to be good for 3 to 4 years based on their “deep knowledge” of what is in the games pipeline for the next few years indicates incredible future-proofing.

    Whither Nvidia?

      • chuckula
      • 4 years ago

      “If the developers see AMD taking 80% of the VR market in mobile and desktops, who is going to prioritize Nvidia’s architecture?”

      Yeah, because there are going to be droves and droves of people lining up to buy $600 VR headsets because AMD has a GPU that costs $200 and performs like GPUs from 2 years ago that cost $400.

      Or… more realistically… people who actually care about VR probably already owned those $400 cards from 2 years ago… or heck, even higher-end cards from more recently.

        • the
        • 4 years ago

        The reverse is likely to happen in the long run: people buying $200 cards that meet the minimum VR spec, and then getting a headset when they come down to reasonable prices (~$300?) in the coming years. Expect marketing to highlight the idea of future-proofing.

          • Deanjo
          • 4 years ago

          VR will probably be dead by then (again).

            • Thbbft
            • 4 years ago

            Because in tech, the past predicts the future? Or do you just like to throw out pseudo bon mots?

            The tech for viable, affordable VR has finally arrived; there is no reason for it not to succeed.

            • Deanjo
            • 4 years ago

            Umm ya, 3D took off huge on TVs and PCs as well, now didn’t it?

            As for other reasons why it will most likely die:

            1) It’s still a clumsy device to use. People hated wearing 3D glasses because they were clumsy; now add on the cords, weight, and just overall bulk, and that problem gets worse.

            2) Competing standards.

            3) High entry price.

            4) Very limited content, and not all content is available for every brand of VR solution.

            • Thbbft
            • 4 years ago

            Xowie.

            • jihadjoe
            • 4 years ago

            “1) It’s still a clumsy device to use. People hated wearing 3D glasses because they were clumsy; now add on the cords, weight, and just overall bulk, and that problem gets worse.”

            Let’s not go to Camelot. It’s a silly place.

          • chuckula
          • 4 years ago

          “and then getting a headset when they come down to reasonable prices (~$300?) in the coming years.”

          Yeah, so buy a $200 card now in the hopes that VR will become “cheap” in a few years. Or buy a $200 card in a few years because VR has actually gotten “cheap” enough, with the bonus being that the $200 card in a few years will actually maybe be fast enough.

          • Thbbft
          • 4 years ago

          Add in that the tech for excellent dual-GPU performance is in progress, so adding another 480 for double the VR performance when one wants to move up is a viable choice.

            • chuckula
            • 4 years ago

            OK, seriously, the shill factor is getting a little high here.

            And I don’t appreciate your arrogance. Arrogance is something that needs to be earned, and your ego is writing checks that your intellect can’t cash once we take away your pre-regurgitated talking points.

            Tell ya what, sunshine: if this magical dual Radeon 480 is truly the rainbow miracle that AMD promised, let’s make a bet. If AMD only sends out a single review unit to all or the vast majority of websites, your entire B.S. marketing line is exposed as the crock that it really is.

            If AMD actually does send out dual units for “crossfire” review, then we don’t judge based on idiotic vomit-inducing [this is VR remember?] average framerates, but on frame time metrics.

            You don’t have the cojones to take that little bet.

            • Thbbft
            • 4 years ago

            “Yeah, because there are going to be droves & droves of people lining up to buy $600 VR headsets because AMD has a GPU that costs $200 and performs like GPUs from 2 years ago that cost $400.

            Or…. more realistically… people who actually care about VR probably already owned those $400 cards from 2 years ago… or heck, even higher-end cards from more recently.”

            Who are you to lecture me on the quality of my posts when you post this tripe that totally ignores the numerous advantages Polaris brings to the table vs. 2-year-old GPUs? Why do you blithely trash AMD’s statement that it polled gamers and determined pricing is the No. 1 barrier to VR entry? Which is utter common sense in any case. And why imply that a $200+ savings to get into VR is meaningless?

            Mindless reactive trash talk ignoring any data that contradicts your bias.

            Lame.

            • DoomGuy64
            • 4 years ago

            “Arrogance is something that”

            …is just as repulsive, if not more so, when you do it. At this point I don’t think anyone who knows you really cares what you have to say, because we already know what it is. Whatever the news is, you always find some way to distort things into some pro-Nvidia rant. Polaris isn’t even out yet, so you’ve been talking out your rear this entire time. Meanwhile, everyone else with common sense is waiting for the reviews.

            • Wild Thing
            • 4 years ago

            Poor chucky just had to eat a big serving of humble pie over the Polaris launch.
            His butthurt is off the charts.
            Just enjoy watching him squirm.

            • Klimax
            • 4 years ago

            Why require two cards when SMP can solve it using only one card…

        • Thbbft
        • 4 years ago

        Pointless chatter … Polaris is a highly modified architecture targeting VR, it’s VR-certified for Oculus and Vive, and AMD’s polling pegs price as the No. 1 barrier to VR adoption.

        Got anything of value to add to the conversation?

          • chuckula
          • 4 years ago

          Pointless chatter … [s]Polaris[/s] Pascal, which launched first, is a highly modified architecture targeting VR and is VR certified by Oculus and Vive, and [s]AMD polls target price as the no. 1 barrier to VR adoption[/s] Nvidia, who actually knows how to do a product launch, has already sold a bunch of GTX 1080s… Raj should know, since he used AMD corporate funds to purchase one for his demo the other day.

          Got anything of value to add to the conversation?

            • Thbbft
            • 4 years ago

            Dude, that is REALLY lame.

            The whole point of Polaris is its mass-market price point for a viable VR and killer 1080p gaming experience. Polaris sales are going to destroy 1070 and 1080 sales.

            WCCF’s poll with some 5000+ responses has 54% saying they’re going to buy an RX 480. Nvidia is going to lose a whole lot of market share in the next six months.

            • chuckula
            • 4 years ago

            I’m sorry, you just name-dropped Wccftroll as proof that you are right.

            Congratulations, you have lost the thread.

            Please go back there, while I’m sure they aren’t missing their idiot, they could sure use another spare.

            • Thbbft
            • 4 years ago

            That article has 3400 comments and 5000 people responded to that poll. That’s substantial and valid.

            More irrational snobbery on your part.

            • Deanjo
            • 4 years ago

            “That article has 3400 comments and 5000 people responded to that poll.”

            Lol.

            https://web.archive.org/web/20081115032219/http://www.bioware.com/_poll/view_poll.html?pollID=136

            Voters: 13730, and the majority use Linux for gaming!!!

            Polls are never easy to skew now, are they?

            • JumpingJack
            • 4 years ago

            I disagree — it is hardly a representative population sample. Wccftech is a heavily AMD-leaning site and attracts heavily AMD-leaning fans. Nothing wrong with that, but their data on polls such as this is equally skewed.

            • Thbbft
            • 4 years ago

            54% is still an impressive number. Polaris obviously and totally nailed the sweet spot for a huge chunk of gamers.

            Granted the reviews need to back up the promise.

            • Hattig
            • 4 years ago

            WccfTech is a bloody NVIDIA fanhole site!

            Not as bad as some (HardOCP, etc.), but the comment threads are regularly crapped up by Nvidia fans.

            The fact that a poll there gets 50%+ AMD is significant; it shows that AMD’s aims with Polaris 10 (regaining the significant marketshare lost to Nvidia over the past few years) have some hope of being realised.

            • steelcity_ballin
            • 4 years ago

            Mass-market.
            VR.
            Pick one.

          • chuckula
          • 4 years ago

          Are you capable of making statements that have even an iota of information that doesn’t sound like it was plagiarized from one of Raj’s slides?

          Because quoting AMD’s “polls” as proof of anything doesn’t make you look good.

            • Thbbft
            • 4 years ago

            Statements at an official presentation like this have serious legal/investor implications, so yeah, I attach validity to what is said. It’s the rational thing to do. So why wouldn’t I use that information in my posts?

            Reflexively discounting what is said based on a personal bias isn’t the rational thing to do.

          • Freon
          • 4 years ago

          Headsets are still $600-800+ and that price probably won’t come down significantly for a while.

      • Kretschmer
      • 4 years ago

      So your hypothesis is that if AMD captures 80% of the VR market, it will be tough for Nvidia? OK. But what are the odds? And why would you guess that?

      A 970/390-class part won’t be considered “good” in 3-4 years. Hell, a 1070 wouldn’t be considered “good” four years from now. That’s like talking about a 660Ti today. Competent, but not “good.”

        • Thbbft
        • 4 years ago

        The latest rumor is that the Xbox 1.5 will have a Zen ‘Lite’ and Polaris APU, which is eminently logical and doable for an early 2017 introduction. Microsoft is now requiring all the Xbox developers to also develop a PC version for their store. Regular games or VR games, do you think AMD Polaris or Nvidia Pascal hardware will have the advantage in those games?

      • coolflame57
      • 4 years ago

      Personally, I think this is good for AMD. They’re struggling, and I sincerely hope this turns their fortunes around.

      • steelcity_ballin
      • 4 years ago

      What’s the weather like in shill land?

        • BurntMyBacon
        • 4 years ago

        Hot and dry, like always. How else do you explain the mirages?

      • maxxcool
      • 4 years ago

      It is still too soon to determine if async will actually amount to anything. Recent DX12 tests show scant few points in AMD’s favor in that regard with newer NV driver updates to their weird two-tier scheduler.

      Edit: We just need more time to see, and more games. One game’s sample does not an average make.

    • slaimus
    • 4 years ago

    I just hope AMD is getting a discount from GF to be able to price it at $199 and still make money. Comparing the specs with GP104 and the relative die sizes, it seems we are looking at a similarly sized die selling for much less.

    • tipoo
    • 4 years ago

    http://i.imgur.com/Y1tTbJ5.png

    Hm. Now I want to see frame times per dollar. But interesting.

      • dikowexeyu
      • 4 years ago

      Sources?

      • chuckula
      • 4 years ago

      Well I want high performance and a good value, so thanks AMD, GTX-1080 it is!

    • Demetri
    • 4 years ago

    http://videocardz.com/60824/amd-polaris-10-gpu-in-radeon-rx-400-series

    Interesting note in this article where they mention they received unconfirmed info that the RX 470 is also going to be based on P10. Would that make it the cut-down version of P10 that had near-R9 390 performance in those leaked 3DMark benches? So it’s possible the RX 480 @ $200 is not a cut-down version of P10; it’s the same model that was benching very close to a base Fury in 3DMark, if that leak is to be trusted. If so, that could be really impressive.

    • Prestige Worldwide
    • 4 years ago

    Copy paste of my feeling about this that I posted at Videocardz:

    Alright. Here’s my deal. When I saw $599 for an aftermarket 1080, I was pretty sold. But then the Canadian dollar came into play. I just can’t justify spending over $900 of my maple syrup dollars on a GPU, and the 1070 for $500 beaver bucks sounds pretty preposterous.

    Along comes Polaris for $199, which would be about $260, bringing what is arguably “good enough” performance. I can’t say the performance is sexy in 2016, especially as someone who previously owned a GTX 970 since 2014 (sold for $350 in anticipation of new gen GPUs, also have been too busy to game for the last 2 months).

    Could Polaris be a sensible stop-gap to tide me over until Vega and 1080ti put some downward pressure on the prices of the GTX 1080 and 1070? I think it just might be, and for $199 I’m willing to give AMD a chance.

      • chuckula
      • 4 years ago

      TL;DR version: I sold my GTX-970 and practically anything would be an upgrade over this IGP!

        • Prestige Worldwide
        • 4 years ago

        Too bad I don’t have an IGP on my i7 3820! I do have a 670 still kicking around my spare parts pile though.

        • nanoflower
        • 4 years ago

        I was actually surprised that the IGP isn’t horrible with everything. I was able to play Torchlight 2 at decent speeds using only the IGP of an OC’ed G3258. Not that you could play any AAA game, but at least it wasn’t completely unusable.

        The 480, or whatever the next-highest P10 card is called, looks to be my next card. It will be nice to upgrade from this 650 Ti that I’ve been using while waiting for the new GPUs to arrive.

      • Firestarter
      • 4 years ago

      A second-hand 970 would probably be a more cost-effective stopgap; just wait a bit for prices to bottom out and get your old card back for $100.

      • tipoo
      • 4 years ago

      Ugh, exactly that, stupid maple syrup dollars. So long as things stay like this, the 1070 gets inflated out of attractiveness after conversion and tax.

        • Prestige Worldwide
        • 4 years ago

        Yup, it has really gotten out of hand. I don’t blame nvidia, it’s just our dollar being so sensitive to changes in the price of oil. Dutch disease FTL! /R&P

          • NoOne ButMe
          • 4 years ago

          Yeah, I remember even when the prices were near parity (I think it was 1:1.06) buying books here that would say “9.95 USD/13.95 CAD” and shaking my head. 🙁

          Canadians are right to complain about prices compared to the USA most of the time. Unlike so many people I see in Europe who have a 20% VAT, find the price converted to USD is 20% higher… and blame Nvidia or Intel or AMD for that 20% higher price. Frustrates me to no end.

            • tipoo
            • 4 years ago

            Yeah. When we’re at parity we still get boned; ironically, it’s when we’re farthest below that the direct-conversion prices start seeming reasonable, and some retailers even take a small cut to make the Canadian price reasonable.

      • Kretschmer
      • 4 years ago

      1080: $900+/$600 = 1.5+ CAN:USD
      1070: $500/$380 = 1.3 CAN:USD
      480: $260/$200 = 1.3 CAN:USD

      Does Canuckistan levy a surcharge on flagship GPUs? πŸ˜€
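
      The implied exchange rates in the joke above, spelled out in a quick sketch (ballpark street prices as quoted, which Prestige Worldwide refines below):

      ```python
      # Implied CAD:USD ratio per card, using the ballpark prices quoted above.
      prices = {"GTX 1080": (900, 600), "GTX 1070": (500, 380), "RX 480": (260, 200)}
      for card, (cad, usd) in prices.items():
          print(f"{card}: {cad / usd:.2f} CAD per USD")
      # GTX 1080: 1.50, GTX 1070: 1.32, RX 480: 1.30
      ```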

        • Prestige Worldwide
        • 4 years ago

        That’s not entirely accurate, which is my bad for ballpark numbers in my OP.

        The 1080 FE is 919 CAD on NCIX. So that’s 699 US, not 599 US for the conversion.
        599 for lowest AIB model, or perhaps 639 for a decent one, would range from 785-837 CAD.

        379 US to CAD is just about $500, coming in at 496.54 with a quick Google conversion right now.

        Likewise, 199 US is $260.71 CAD for what I assume is a 4 GB RX 480.

        So that was my mistake, using the Founders Edition price for 1080 but not the 1070 in my OP.

        • anotherengineer
        • 4 years ago

          Sort of actually, in the form of 13% sales tax, more $ = more tax = more $$ 😉

          • Prestige Worldwide
          • 4 years ago

          Only 5% for me when I order from out of province etailers like NCIX. Yay Quebec!

      • ThatStupidCat
      • 4 years ago

      OMG at $900 CAD that’s just LOONIE!

        • tipoo
        • 4 years ago

        /sadtrombone
        Haven’t we Canadians suffered enough?

          • Prestige Worldwide
          • 4 years ago

          Indeed. Now can you hold on a second while I cry into my universal health care and 5 weeks of paid paternity leave?

            • tipoo
            • 4 years ago

            B-but my shiny material stuff is more expensive

            Ehh, we’ll call it even for also not having US politics.

            • Prestige Worldwide
            • 4 years ago

            You must have missed the dark age of Stephen Harper, the George Dubya of the north. A petty neocon dictator who ruled from 2006-2015.

            • tipoo
            • 4 years ago

            On the other hand, who is our Trump?
            Wait, goddamn it Kevin O’Leary, don’t do it!

      • ptsant
      • 4 years ago

      I kinda agree. I used to buy in the $150 range; then, when my paycheck improved, I moved up to the $250-300 range. The most I’ve ever spent is $350. Now, if the 1070 were at $300 I would have been tempted (despite my FreeSync monitor), but at $379, without including the founder tax, I think it’s a wonderful card, just too expensive. As for the 1080, I feel it’s quite scandalous having to pay so much for a high-end card. $500 used to be the absolute high end, but nVidia has been steadily pushing it upwards.

      For what it’s worth, local price for the 1080 is $850 (in USD!), so … yeah.

      • ronch
      • 4 years ago

      I’m speculating Vega will be a bit of a letdown. Many were expecting a good fight when Pascal and Polaris came out, but instead we see AMD shying away. So now all hopes are on Vega. But as I’ve said, I think it’ll fall short. I think it’ll be kinda like Fury X: big promise, then falling behind competition that has been around for months. Then of course AMD prices it well below Pascal to grab marketshare, so it’s the same story again for AMD. Lower performance, but much lower price for better perf/$$$, so… also razor-thin margins and losses quarter after quarter, as usual.

        • Kaleid
        • 4 years ago

        Doubt it. It seems they want HBM2, and perhaps the chip is so big that yields are not that great. Now they get to start the process with a smaller chip and possibly sell them in large volumes; as a business plan this is actually wise, because this is where most of the sales are to begin with.

          • ronch
          • 4 years ago

          Of course. But they are never gonna ditch their budget option image this way.

          And if your brand becomes synonymous with ‘cheap’ for too long, it becomes really hard to ditch that image.

            • Kaleid
            • 4 years ago

            AMD has had an undeservedly poor public reputation for a very long time; I doubt this will change it. It won’t take very long until the 490 series comes, however.

            Nvidia has had tons of problems over the years too, but nothing seems to stick to them.

            • NoOne ButMe
            • 4 years ago

            The power of good PR, and always making things feel positive.

            For the 290X, in reviews it would fall under its rated clock speed: you lose performance, AMD made you lose performance. Negative. Meanwhile, boost is almost always a positive because Nvidia says “at least X,” and historically they’re always over their boost, sometimes by a substantial amount: Nvidia gave you X more performance.

    • christos_thski
    • 4 years ago

    I was thinking of buying a 1070, but if this has GTX 970 performance at this price point, I’ll choose a 480 instead. Not quite the same range of performance, but it’s unbeatable value for the money, for my wallet.

    • AJSB
    • 4 years ago

    Even though officially the RX 480 only needs a 6-pin power connector, I bet AMD partners will make some card models with an 8-pin or two 6-pin connectors to give more headroom for overclocking.

    • HisDivineOrder
    • 4 years ago

    Need performance reviews. Theoreticals are fun and all, but let’s see the actual performance.

    • tipoo
    • 4 years ago

    I wonder how much the efficiency equation is hampered by TSMC 16nm FF+ just being better than Samsung/ Glo-Fo’s 14nm?

    • anotherengineer
    • 4 years ago

    Possible clocks:
    http://www.techpowerup.com/223043/amd-radeon-rx-480-clock-speeds-revealed-clocked-above-1-2-ghz

    Card images:
    http://www.techpowerup.com/223044/feast-your-eyes-on-these-official-amd-radeon-rx-480-renders

      • chuckula
      • 4 years ago

      I will say I prefer the card aesthetics over the default GTX-1080 polygon shroud.

    • anotherengineer
    • 4 years ago

    I wonder if this will end up being a 4850 repeat?

    Well at least it’s in my $180-$230 budget!!!

      • Srsly_Bro
      • 4 years ago

      I bought a 4850 512MB on launch day from Best Buy. It was pretty good except for the single slot cooler that filled with dust constantly.

    • tipoo
    • 4 years ago

    Curiously missing was an answer to SMP, Nvidia’s technology that reduces the overhead of rendering the scene twice to almost nothing. If this $200 card has a VR focus, a feature like that is absolutely killer.

      • Spunjji
      • 4 years ago

      Not even joking here, my bet is they expect you to buy two of them for that.

      • DPete27
      • 4 years ago

      The product hasn’t actually launched yet, sooo.

        • tipoo
        • 4 years ago

        Kinda seems like an important thing not to mention at the launch event, though. But I do hope it has something like that.

    • Tristan
    • 4 years ago

    The RX 480 has low power efficiency; at 150W, the 1070 is much faster. And the clocks are also low. Soon NV will respond with GP106 – faster and with lower power draw.

      • Spunjji
      • 4 years ago

      Why directly compare clocks across architectures? They’re higher than the previous gen to a similar extent that NVIDIA’s are, which is the only relevant point to be made about clocks.

      Power efficiency doesn’t look as good as NVIDIA for sure, so GP106 will be interesting.

    • AJSB
    • 4 years ago

    $150 for the RX 480 4GB and $229 for the RX 480 8GB
    (and possibly $490 for an RX 480X2)? Fantastic!

    As everyone knows, I’m all about iGPUs, but this card makes me interested in returning to dGPUs for some builds.

    Yes, there will be an RX 480X2… PowerColor showed a water-cooling solution for a dual-GPU Polaris PCIe card.

    It will also be interesting if AMD has a PCIe card based on Polaris 11… I heard about GTX 950 performance at a TDP of LESS THAN 50W (so, less than a GTX 750 non-Ti). That card would completely DESTROY the GTX 750 (Ti) and GTX 950.
    There could also be low-profile, single-slot versions of that card.
    There are a bunch of small cases with only low-profile slot(s) that would benefit a lot from such a card.

    • Chrispy_
    • 4 years ago

    So is it a 2560-SP part with 256 SPs disabled? Or is it like Hawaii, where the ratio of CUs to other resources isn’t a nice multiple?

    If it’s a harvested part, that means yields aren’t what AMD was hoping.
    I hope it’s not a harvested part!

      • NTMBK
      • 4 years ago

      They might be saving the best chips for Apple.

        • tipoo
        • 4 years ago

        What would Apple be using dedicated chips above 480X class for? There were rumors of the next iMac using a Zen APU, but that would disqualify these chips, and I don’t think it would be these for the Mac Pro either.

          • NTMBK
          • 4 years ago

          They could use fully-enabled chips at a lower clock speed to get maximum efficiency for an iMac. Zen APUs aren’t coming until mid-late 2017, so a Skylake + Polaris/Pascal refresh for this year would fit.

            • tipoo
            • 4 years ago

            Ah, perhaps.

            Now just give me a damn Skylake+Polaris rMBP! It’s batty that the highest end Macbook you can get today is on GCN 1.0.

            • Srsly_Bro
            • 4 years ago

            Or Zen apu with Polaris

            • Chrispy_
            • 4 years ago

            If, if, if….

            Gaming laptops will be under serious threat if AMD can make an APU with half-decent CPU cores and an IGP that doubles the performance of the current 512-SP A10.

            • NoOne ButMe
            • 4 years ago

            Not unless AMD has the balls to make the APU draw 65W or more. Pulling back from 45W in Llano to 35W was sad. I understand it for the average user, but if you need both the CPU and GPU running, you need more than 35W.

            • Chrispy_
            • 4 years ago

            I don’t see why not.

            In an Optimus laptop the GPU alone usually uses 36-100W, and that 36W figure is for the paltry GM108 processor in the GT 940M, with only 8 ROPs and a puny DDR3 interface.

            A 65W APU based on, say, Bonaire’s complement of 896:56:16 would run rings around the little GT 940M and likely close the gap towards a 950M/960M.

            Polaris and mobile Pascal aren’t here yet, so I’m not even going to make guesses around those just yet, but APUs are definitely worthwhile candidates for Zen + Polaris on power-saving grounds alone.

          • the
          • 4 years ago

          There is a rumor going around that Apple will embed a GPU into their next Thunderbolt display.

    • PrincipalSkinner
    • 4 years ago

    I’d also like to hear about a full Polaris 10 chip. Hopefully it won’t end up like Tonga XT.

    • Klimax
    • 4 years ago

    June 25th will see the sudden landing of the 1060 (Ti)… (It wouldn’t be the first nor the last time Nvidia tried to crash AMD’s party – like the 980 Ti versus the Fury (X).)

      • Flapdrol
      • 4 years ago

      Didn’t the 980 Ti launch well before Fury, lagging behind the Titan X exactly as long as the 780 did the Titan?

        • Klimax
        • 4 years ago

        Between the Titan X and 980 Ti was one and a half months; the delta between the Titan and 780 was about three months, while the time between the 980 Ti and Fury X was one month.

        https://techreport.com/review/24381/nvidia-geforce-gtx-titan-reviewed
        https://techreport.com/review/24832/nvidia-geforce-gtx-780-graphics-card-reviewed
        https://techreport.com/review/27969/nvidia-geforce-gtx-titan-x-graphics-card-reviewed
        https://techreport.com/review/28356/nvidia-geforce-gtx-980-ti-graphics-card-reviewed
        https://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed

        So no, Maxwell/Fury X had quite a shorter span than Kepler did.

      • NTMBK
      • 4 years ago

      Bring on the price war!

        • Klimax
        • 4 years ago

        Some people might like a price war. A price war, though, would be a disaster for AMD; they cannot afford such a thing. And I doubt Nvidia is going to start one, and it’s doubtful AMD will do it willingly.

      • Hattig
      • 4 years ago

      The 1060 is in a bad place right now. For Nvidia, maybe not the consumer.

      1280 cores at 1.8GHz is still behind the RX 480, and I’m being bullish on the clocks there (4.6 TFLOPS at turbo, 4 at 1.6GHz).

      We have a 5.8 TFLOPS RX 480 (if it ships at 1266 MHz) at $199.

      And a 4.7 TFLOPS RX 470 (if it ships at 1150 MHz) at a guessed $169.

      So Nvidia is now looking at a $149 1060, when three days ago they would have been looking at slotting it in at $249 with a $299 Founders Edition.

      They have options – a 1060 Ti could split the shader difference between 1280 and 1920, landing at 1600. That should match the RX 470, and with Nvidia’s fan base they might be able to get away with $179, maybe $189.
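
      The TFLOPS figures in this post all come from the same 2-flops-per-shader-per-clock identity; here they are as a sketch, with a rough GFLOPS-per-dollar comparison tacked on (shader counts, clocks, and prices are the thread’s guesses, not official specs):

      ```python
      # Speculative peak FP32 and value figures from the post above.
      def tflops(shaders, ghz):
          return 2 * shaders * ghz / 1000

      print(tflops(1280, 1.8))    # GP106 guess at turbo: ~4.6 TFLOPS
      print(tflops(2304, 1.266))  # RX 480 at the rumored clock: ~5.8 TFLOPS
      print(tflops(2048, 1.15))   # RX 470, assuming 2048 SPs: ~4.7 TFLOPS

      # GFLOPS per dollar at the guessed prices:
      print(5830 / 199)  # RX 480 at $199: ~29 GFLOPS/$
      print(4710 / 169)  # RX 470 at $169: ~28 GFLOPS/$
      print(4610 / 249)  # 1060 at $249:   ~19 GFLOPS/$
      ```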

        • Klimax
        • 4 years ago

        There is no specification available, thus your post is useless and without ANY basis. It also seems your scaling is based only on cores and frequency. Definitely incorrect. And you made a hell of a lot of assumptions for the rest of it. This seems to be more of a fanboy dream than anything else. Sorry.

        I also don’t see where you are getting those FLOPS. The only precise number I saw over the internet is 5.5 TFLOPS. Nowhere your numbers. Also, you appear to think they are of much use. Sorry, nope.

        Currently there is no such thing as a bad place for Nvidia. For one, all we have are PR marketing materials from AMD, and they appear to have some oddities. (Like some comparisons.)

        And the speculation I saw places Polaris at the 980. There is a massive gulf between the 980 and 1070. More than enough room for two GP106s, or GP106 + GP104, to land.

        I’d say everybody would do well to wait for reviews; taking anything from AMD at face value is strongly inadvisable. However, it seems as if many people didn’t learn their lesson from Bulldozer nor any subsequent fun.

        • tipoo
        • 4 years ago

        Nvidia has gotten more gaming performance per GFLOPS than AMD for a few generations at least, so a 4.6 TFLOPS Pascal part could well have gaming performance in the league of a 5-5.5 TFLOPS Polaris part.

        Tough to compare without them both being out, though. But it seems the two philosophies are by and large intact this gen: for equivalent performance, AMD will be more floppy on paper.

          • Klimax
          • 4 years ago

          “more floppy”… first mental image was floppy disk, then floppy bird.

            • tipoo
            • 4 years ago

            I’m glad it wasn’t something else 😉

          • NoOne ButMe
          • 4 years ago

          By a few generations, you mean like 10 years and more? 😉

          We will have to see, as you say, but I think Polaris will increase the utilization of their shaders. Otherwise, a 5.5 TFLOPS Polaris should be about a 4.8 TFLOPS Pascal, and 5.8 matches 5.

          If GP106 is still 1280 shaders, that makes it close for Nvidia at 1900-2000MHz stock.

    • Paine
    • 4 years ago

    So, at $200, this thing will smoke two 1080’s in SLI, and give us 180 frame 4K gaming.

    Or, more likely, perhaps I have smoked too much tonight.

    • DrDominodog51
    • 4 years ago

    Jeff, buffalo?

      • chuckula
      • 4 years ago

      YOU CAN’T USE THAT WORD!
      THAT’S OUR WORD!
      — Intel.

      Instead, I propose: Marmoset.

    • Firestarter
    • 4 years ago

    The proof is in the pudding: get to the top left of the price/99th-percentile-performance graphs, AMD, and I will buy your GPUs all year long.

    • synthtel2
    • 4 years ago

    Compared to Tonga at 970 MHz / 2048 SPs / 190W TDP, that is a pretty solid perf/watt improvement (~75%). Presumably this also has more geom/ROP ability than is listed here, since 32 AMD ROPs really isn’t enough for VR. Tonga wasn’t the most efficient part to start with, though, and using the same power as a 1070 for this kind of performance is really not good enough. The bright side of this being a small / cheap chip may not even hold up that well, since 2304 SPs implies they might be having to disable some on the highest part they’re announcing (ouch). I hope I’m wrong on that one and at least yields are decent, for the sake of AMD’s profit margins.

    I don’t know what to think of the whole 2 cards at 50% utilization thing. All I can come up with is that the cards don’t like 100% for some reason. Power density is pretty high here. If it’s just about cooling, it shouldn’t be anything Asus/Gigabyte/MSI can’t handle, but OC headroom might be a no-show if that’s what’s up. πŸ™

    Scenarios:

    – GloFo let them down again, as in GloFo 14 LPP just isn’t as good as TSMC 16 FF+. This would really really suck, since they’ll be stuck with GloFo for a while. GloFo vs TSMC in GPUs does open up a bit of a new dynamic, hopefully not too unfavorable for AMD.

    – AMD is having implementation issues a la GF10x. AMD historically seems to have the upper hand when a process is sketchy, but now it might be just GloFo with issues, and there’s no law that says AMD is immune to this one. At least if this is what’s up, Vega will be solid, but it leaves them bleeding even more money in the meantime.

    – GCN rev. 4 has an architectural issue. This seems intuitively unlikely to me, but I can’t refine that sentiment much. It seems worth mentioning at least.

    Whichever way, it doesn’t look like AMD is having a fun time of this.
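
    The ~75% perf/watt figure above checks out under the usual peak-FP32 assumptions; a quick sketch using Tonga’s published specs and the 5.5 TFLOPS / 150W figures floating around for the RX 480:

    ```python
    # Perf/watt comparison: Tonga (2048 SPs at 970 MHz, 190W) vs. RX 480.
    tonga_tflops = 2 * 2048 * 0.970 / 1000  # ~3.97 TFLOPS peak FP32
    tonga_ppw = tonga_tflops / 190          # TFLOPS per watt

    rx480_ppw = 5.5 / 150                   # 5.5 TFLOPS over 150W board power
    print(f"improvement: {rx480_ppw / tonga_ppw - 1:.0%}")  # ~75%
    ```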

      • tipoo
      • 4 years ago

      Seems like they were playing up different chokepoints to me, i.e. something among pixel/texel fillrate or geometry throughput is the bottleneck for a single 480, so they used two and went with “and this isn’t even my final form!”

      That’s a bit… spin-ey. We’ll see.

        • synthtel2
        • 4 years ago

        Good thought. Texturing is pretty tightly integrated with shaders, so that couldn’t be helped much, but for geom/ROPs or some internal memory bandwidth that would just about make sense (much more for the latter than the former). (I suspect internal memory bandwidth is Fiji’s problem, FWIW.)

      • NoOne ButMe
      • 4 years ago

      I posted this somewhere else here also, but when Nvidia looked into Samsung for FinFET, they supposedly rejected Samsung due to voltage variation in the process.

        • synthtel2
        • 4 years ago

        While plausible enough on its face, you’re obviously not quoting any actual professional. What’s your source?

    • Unknown-Error
    • 4 years ago

    That’s it?! Felt like a $199 GTX 970. Welcome to 2014, AMD.

      • Dudeface
      • 4 years ago

      More likely somewhere between a 980 and a 980 Ti. For $199 ($229 for 8GB?) that’s pretty decent I think.

        • Kretschmer
        • 4 years ago

        Let’s wait for the benchmarks before we assume 980+ performance. You’re telling us that AMD pulled off a 1070 at almost half the price…

          • travbrad
          • 4 years ago

          Yeah, that would mean they are throwing away $150+ per card in potential profits, which they desperately need at this point. I wouldn’t be surprised if they had a bit better performance for the price than Nvidia, but double the performance for the price seems unlikely.

          • DragonDaddyBear
          • 4 years ago

          That would account for the higher power consumption relative to the 1070. It’s plausible, but I agree that waiting for benchmarks is a good idea.

          • chischis
          • 4 years ago

          There is a slide going around purportedly claiming that – in 3DMark at least – performance of the 480 is similar to that of the Fury and 980 (not Ti). Taking it with a pinch of salt right now, but if the 480 IS in that ballpark for $199, it deserves to sell well.

      • chuckula
      • 4 years ago

      Not sure why you were downthumbed, the hard numbers that Raj gave back up that assertion [it’s equivalent to an overclocked R9-390 or underclocked 390X, but with only a 256-bit memory interface].

        • Northtag
        • 4 years ago

        In the presentation though, Raj had it as a 150W card (at 14:46 of the presentation video: https://www.youtube.com/watch?v=fBHYdoSYMZk) but also said that performance per watt was up 2.8x (next slide, from 15:12). That points to performance equivalent to that of a 420W 28nm GPU, i.e. about 1.5x that of Hawaii, 2.2x that of Tonga, 1.5x that of Fiji. Take your pick; none of those are below even the Fury X, let alone the 390X.

        Additionally, from what we know of Samsung’s 14nm process, it’s 1.82x denser than the 28nm node. Together with a die size of 232 mm², that means the 480 has about 6.0 billion transistors (just under Hawaii) but running 25% faster than that chip. Which would put it at Fury performance levels even if AMD haven’t been able to coax much in the way of higher performance per clock per billion transistors out of it.

        Additionally, the 1070 is slightly over 3/4 of a 314 mm² die on TSMC 16nm, which, going by the A9 die sizes on both that and Samsung’s 14nm process, has 9% bigger areal features. So something in the 1070 class could be made in a die of about 217 mm² using Samsung’s 14nm process. nVidia have of late done a modestly better job than AMD of squeezing performance out of a given die area, but it’s not unreasonable to figure the 480 (or at least the full-fat P10, if the 480 isn’t it) would show performance somewhere in the Fury-to-1070 range.

        Raj’s GPU utilization + FPS numbers in the AotS demo (22:37) even suggest that a single 480 has the underlying horsepower to approach the 1080. Staggering if true (in which case, the use of two cards rather than one would have to be to market greater performance for less cash, rather than most of the performance for much less cash).

        Gamers Nexus reported from Computex (at 2:19, presumably reporting some AMD press notes) that “As far as competition, the 8GB version of the RX 480 would be directly competing with the GTX 1070”: https://youtu.be/xoSvTT9_x8w?t=139

        If you’re looking at the slide showing FP32 being over 5 TFLOPS, we already know that AMD have added the primitive discard accelerator, which can’t fail to make it more efficient at producing frames per shader-clock (i.e. better gaming performance relative to FP32 performance) than Hawaii, as well as revamping several other parts of the architecture. If you’re pointing to the 256GB/s memory interface as a limiting factor, scaled from Tonga that points to performance equal to or slightly higher than the 390X, even if Tonga XT were bandwidth-limited (which it doesn’t show many hints of being, looking at overclocking results) and presuming that there are no compression improvements from GCN 1.2 to GCN4 (with nVidia claiming a 1.2x improvement from Maxwell to Pascal on that front).

        There are an awful lot of different things that all point to performance exceeding the 390X, and possibly significantly higher.
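
        The back-of-envelope numbers in the post above, spelled out in a small sketch (using Hawaii’s published 6.2 billion transistors in a 438 mm² die as the 28nm reference):

        ```python
        # Perf/watt: 2.8x better at 150W implies a 28nm-equivalent power budget of
        print(150 * 2.8)  # 420 W

        # Transistor estimate: 232 mm^2 at a claimed 1.82x density gain over 28nm,
        # scaled by Hawaii's transistor density.
        equiv_28nm_area = 232 * 1.82   # ~422 mm^2 of 28nm-equivalent silicon
        hawaii_density = 6.2e9 / 438   # transistors per mm^2 at 28nm
        print(equiv_28nm_area * hawaii_density / 1e9)  # ~6.0 billion transistors
        ```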

          • chuckula
          • 4 years ago

          That’s a nice wall-o-text. However, Polaris is equivalent to an overclocked 390 or underclocked 390X. But with only a 256 bit memory bus.

            • Northtag
            • 4 years ago

            Can you give reasoning for that without requiring that the 480’s shaders have the exact same IPC in gaming situations as Hawaii’s, despite each apparently having a c. 20% higher transistor count?

            • NoOne ButMe
            • 4 years ago

            Transistors scale as a factor of performance: typically 4x the transistors for 2x the performance. And AMD focused on lower power; spending more transistors to lower power is another hint that that happened.

            We still don’t know transistor counts, so hold off on trying to figure out transistor numbers.

            • Billstevens
            • 4 years ago

            I would consider the $229 RX 480 outperforming the 390X in all benchmarks to be on the optimistic side, but it is not ruled out. It really depends on how much their architecture has improved to make up for the numbers where it clearly lags behind the 390-class cards.

          • NoOne ButMe
          • 4 years ago

          Just because a process can be up to 1.82x denser doesn’t mean it will be for any or all products. Case in point: GP100 compared to GP104.

      • Demetri
      • 4 years ago

      So I guess you’re not happy with the 1070 either because it’s just a $380 980ti?

        • Zizy
        • 4 years ago

        Except it costs 450 πŸ™‚

          • travbrad
          • 4 years ago

          They are all unobtanium at this point, which is a pretty poor value proposition. πŸ˜‰

      • beck2448
      • 4 years ago

      It’s a die shrink, and not all that power-efficient either, with low margins, a battered brand name, and horrible channel execution. And btw, Nvidia already has a mid-range Pascal done, so all this ridiculous caterwauling is a little overdone.

      • BurntMyBacon
      • 4 years ago

      “That’s it?! Felt like a $199 GTX 970. Welcome to 2014, AMD.”

      Wait! The 970 could be had for $199 in 2014? How did I miss that!?!

    • CScottG
    • 4 years ago

    3:55 in:

    https://www.youtube.com/watch?v=eqCo0LTUhsE

    • Srsly_Bro
    • 4 years ago

    NOW I CAN GET RID OF THIS AWFUL 7950 THAT BARELY ALLOWS ME TO POST ON HERE!

    edit…

    I expected down-thumbs… I don’t understand.

      • Klimax
      • 4 years ago

      People caught your blatant attempt at minus baiting. If you want to gather them, you have to do better. At minimum I suggest criticism of low-level APIs. (People dislike getting their dreams destroyed…)

        • Srsly_Bro
        • 4 years ago

        Thanks, bro. Stay tuned to newer threads and I’ll have another go.

          • sweatshopking
          • 4 years ago

          Klimax is wrong. They thought you were me, what with you thinking that because a guy makes a trip to Africa for a few months you can replace him.

      • ptsant
      • 4 years ago

      I gave my 7950 to my mother. She can barely browse facebook at low settings and 720p with it.

        • JustAnEngineer
        • 4 years ago

        I gave my Radeon HD7950 to my brother. Mom is having to make do with an HD6970.

      • Peter.Parker
      • 4 years ago

      Sorry bro, but that’s on you. You know you should drill holes into your card to install a custom fan on it. That’s the only way to make that 7950 crap work! Take example from this guy and his GTX 980 Ti:
      https://techreport.com/news/29825/this-builder-proves-drill-presses-and-graphics-cards-dont-mix

      • ronch
      • 4 years ago

      I can lend you a spare 7100GS (yes, from 2007) I have lying around here somewhere, just so you can post here without difficulty.

        • KeillRandor
        • 4 years ago

        If anyone wants a 12MB PCI Voodoo 2….? Oh, wait…

    • NoOne ButMe
    • 4 years ago

    Wow. Shocked. I expected $250-300. Looks like AMD just won the midrange for a whole cycle, unless GP106 is more than 1280CC.

      • chuckula
      • 4 years ago

      “I expected $250-300.”

      So did AMD.

      “looks like AMD just won the midrange for a whole cycle”

      Yeah, let’s wait until Nvidia releases their lower-end parts before we jump to any offbase conclusions. Oh, and while we are at it, let’s wait until AMD actually launches Polaris, because they didn’t do it here, and they didn’t even announce the date when they intend to. Incidentally, AMD gave official on-stage confirmation that they think the GTX 1080 is very much a commercially available part. After all, they claimed to have used it in benchmarks for their presentation.

        • NoOne ButMe
        • 4 years ago

        I qualified the first part of my statement with “unless GP106 is more than 1280CC”. Do you have reading comprehension problems?

        AMD’s whole presentation about having only 51% utilization to beat the 1080 is complete BS, afaiac.

        The leaked embargo date is the 29th of next month.

          • Ninjitsu
          • 4 years ago

          Yeah that sounds like they had a CPU bottleneck going on XD

            • arbiter9605
            • 4 years ago

            Or, more likely, from a few reviewers I talked to, there was more than meets the eye going on with that “live demo”. As some pointed out, the graphics quality wasn’t the same in both videos; one had settings that were higher than the other. Downvote me all you want for saying it, but a lot of people saw it and pointed it out.

            • chuckula
            • 4 years ago

            I believe it from what I saw on the screen.

            Once again, if two Radeon 480s at only 50% usage can beat the GTX 1080… why didn’t Raj just show one Radeon 480 at 100% usage doing the same thing?

            • tipoo
            • 4 years ago

            Seems like they were playing up different chokepoints to me, i.e. something among pixel/texel fillrate or geometry throughput is the bottleneck for a single 480, so they used two and went with “and this isn’t even my final form!”

            That’s a bit… well. We’ll see.

            • 0x800300AF
            • 4 years ago

            Try to keep up..

            https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/

            Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed. At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AotS than it otherwise would if the shader were being run properly.

            • chuckula
            • 4 years ago

            Yeah, I trust a marketing drone from AMD to do a proper technical analysis of the inner workings of the GTX-1080 running a complex piece of software that AMD purportedly didn’t write [snort] about as much as I trust Jen-Hsun to single handedly rewrite AMD’s driver stack.

            There’s a lot more to this story, and nobody from AMD has given an intelligent response.

            • NoOne ButMe
            • 4 years ago

            Did you miss Robert Hallock’s explanation of what AMD did? It is terrible: they took information from the whole test and then showed utilization from only part of it. Sad, pathetic even.

            In other news, AMD claims 83% GPU scaling, which means one RX 480 should get around 34-35 fps.

        • _ppi
        • 4 years ago

        Back in January they hinted their target is VR-minimum spec at around $200… where the GTX 960 is now their competition (and will be for a few months). Polaris 11 will take care of the 950 and 750 Ti.

        Since they are now claiming VR performance close to that of former $500 cards, they might have implemented something like nVidia’s single-pass VR. Power consumption is going to be fine as well; 150W is the maximum possible draw.

        There are two questions remaining, though:
        1) Will GloFo let them down once again?
        2) Will there be a 480X?

        Obviously, their comparisons to the 1080 were childish.

          • NoOne ButMe
          • 4 years ago

          Their technical comparison was. The comments about a truly premium design and a cool card were pretty funny, given how the Founders Edition has played out. If AMD’s reference cooler ends up the same garbage as the FE, then it will end up pathetic.

      • Freon
      • 4 years ago

      NV is using the same memory config on their $380/450 card…

      I think they’ll be happy to produce the 1050 or 1060 to compete head to head at the same price/performance and let AMD bleed itself out on margins trying to maintain their small market share.

        • Spunjji
        • 4 years ago

        With a much larger chip, remember. I’m not sure this pricing would exactly be killing AMD.

      • Klimax
      • 4 years ago

      I am sorry, but that conclusion is quite a bit premature. It strongly depends on what GP106 will be (and at what price point).

      Not to mention there is no evidence to back up AMD’s claims, aka there are no reviews. PR is nice, but with AMD it is almost worthless.

        • NTMBK
        • 4 years ago

        Don’t know why you’re getting downvotes for this. PR is nice, but no substitute for reviews.

        Bring on Inside The Second, 14nm edition!

          • Klimax
          • 4 years ago

          People generally hate intrusion of reality.

        • NoOne ButMe
        • 4 years ago

As I said to chuckula* (*F'ing autocorrect), I qualified my statement. If GP106 ends up with over 1280 CUDA cores, then it will turn into a real fight. Otherwise I expect a 5.5 TFLOP RX 480 will perform similarly to a 5 TFLOP 1060, which would be 1280 CUDA cores just shy of 2 GHz.

        I hope you understand what I mean now πŸ™‚
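
For what it's worth, the arithmetic behind that guess is just peak FP32 throughput = cores × 2 FLOPs (one FMA) per clock × clock. A minimal sketch in Python; the 1280-core, ~2 GHz GP106 is thread speculation, not an announced part:

```python
# Peak FP32 throughput in TFLOPS: cores * 2 FLOPs per clock (FMA) * clock (GHz) / 1000.
def tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000

print(tflops(1280, 1.95))  # hypothetical GP106: ~4.99 TFLOPS
print(tflops(2304, 1.20))  # RX 480 at ~1.2 GHz: ~5.53 TFLOPS
```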

          • Klimax
          • 4 years ago

Numbers are meaningless as they come from AMD PR, and just using core counts for Pascal is definitely not a good idea (or in general).

That is still jumping too far on the basis of PR and dodgy "extrapolation".

            • NoOne ButMe
            • 4 years ago

GF106/116 was half of GF104/114, GK106 was just over half of GK104, and GM206 was half of GM204. GT216 was half of GT215 (there was no GT214). G96 was half of G94. G86 was half of G84.

            I would say my guess of 10SMM for GP106 is very safe. It might be 12, in which case we will see a real battle in the mid-range πŸ™‚

    • chuckula
    • 4 years ago

By the way, go back and watch this video and compare it to the Tom & [s<]Jerry[/s<] [u<]Jen-Hsun[/u<] Show a few weeks back. Obviously both are cheesy. However, Jen-Hsun was clearly excited about what was happening. Hell, he was even a little nervous, like he was giddy. These guys? I think they couldn't wait to get off the stage and eat those salads they had attached to their lapels.

The most excitement anybody generated was when some empty suit from Microsoft polled the audience to find out that Windows 10 ain't that popular. Applause? Yes, there were like 5 guys who sort of clapped. Lisa Su demonstrated Zen for the first time anywhere ever, and while it was a rather lame demo... the audience was almost comatose when it happened.

You want lessons in how to introduce a product and how not to introduce a product? Watch those videos. People made fun of Jen-Hsun and Tom, but guess what... people remember Jen-Hsun and Tom.

      • NoOne ButMe
      • 4 years ago

Which I could give +100000.

      • the
      • 4 years ago

The Tom and Jen-Hsun show wasn't scripted or rehearsed, which made it genuine. Cringe-worthy, but genuine.

      From your description, if something similar happened, the audience was so asleep no one would have noticed…

      • spanner
      • 4 years ago

      DreamHack and Computex have slightly different cultures.

        • nanoflower
        • 4 years ago

        There’s also the problem that many of the people in the audience may have been at the Macau presentation so what AMD showed concerning Polaris was more of a rehash than something new and exciting.

          • chuckula
          • 4 years ago

          I sincerely hope that AMD divulged a LOT more information in Macau than they did here.

            • nanoflower
            • 4 years ago

            From some of the tweets that I’ve seen tonight they apparently did show much more in Macau that isn’t being revealed now. Hopefully we will get more information at the PC Gaming Expo during E3 in a couple of weeks. It would make sense to come out with more information while at a big gaming convention in the USA and especially when you are only a few weeks from AMD’s hard launch date.

(Frankly, I had been thinking we wouldn't get much new information at this time, given how silent AMD had been and that I just saw the prices on R9s going down this past week. They need time to clear the shelves of the old product (or as much as possible) before coming out with a better/cheaper product. E3/PC Gaming Expo seemed the perfect place for a real launch, since this is a gaming card.)

      • muxr
      • 4 years ago

      Car salesman who sold me my car was also really excited about it.

        • chuckula
        • 4 years ago

        You’re saying a used car salesman appreciated Raj’s benchmarking spin?

      • anotherengineer
      • 4 years ago

      Can’t beat Jen-Hsun in his leather!!!

    • chuckula
    • 4 years ago

    I watched the video.

    That demo of two 480s “beating” a single GTX-1080 while both 480s purportedly were only running at 50% utilization? Yeah, anybody who tries to defend that joke just permanently lost all rights to even suggest that Nvidia launch events* do anything shady, and I’m including the wood screws.

    * Not that I’m suggesting this was a launch event. Nothing launched. They didn’t even give the date of when Polaris will launch.

      • Tirk
      • 4 years ago

      I’m pretty sure at the end of the stream they mentioned the APUs shipping now in OEM laptops and quickly stated June 29th for Polaris. It was almost said in passing so it was easily missed. If I missed a word and misinterpreted what they said please feel free to correct me.

I do wonder if the 50% utilization is due to UWP and using DX12. While Microsoft has released an update to allow games to display more like DirectFlip under DX11, I do not have Ashes, and it may be that the UWP display path AMD uses with Ashes and DX12 has not implemented that change yet. This unfortunately makes the comparison less useful than it could be with an updated UWP. What it does show is that the 1080 at 100% utilization is tuned very close to the 60 FPS mark for that game. What it unfortunately leaves open is whether the lost 50% utilization of the CrossFire cards can be recouped in a more demanding and updated DX12 and UWP implementation.

Warhammer seems to be the next RTS releasing a DX12 patch, and it will be interesting to see if we can determine more about how the updated UWP plays into utilizing the full throughput of the CrossFire setup. If Warhammer can get that DX12 patch out before Polaris launches, I do hope some reviews will look into how it performs. That being said, I'm still personally dubious about CrossFire optimizations until more modern games consistently come out with plug-and-play experiences out of the box.

There is a preview build of Warhammer DX12 floating about:

[url<]http://www.pcgamer.com/total-war-warhammer-benchmarks-strike-fear-into-cpus/[/url<]

Nvidia and AMD both see increases in DX12, which is nice, although AMD still seems to benefit more, with the Fury X surpassing the 1080's performance at its namesake 1080p resolution. That is a weird reversal, with the Fury X scaling better than Nvidia's cards at 1080p. It's great for all you AMD 1080p players still floating about though 😉

I would give this the slight edge over a graph that has no y-axis explanation, as it still gives the FPS averages of both the 1080 and the 480 CF, but it is still definitely in the category of leaving more questions than it answers. I'd put this in the gray zone, despite the propensity to shove everything into either black or white. I wonder if it would pass the litmus test you set in the thought-police post I am responding to, however.

      • muxr
      • 4 years ago

      Lisa said June 29th “on shelf”. So June 29th hard launch. It’s also when the NDA lifts.

        • chuckula
        • 4 years ago

        OK, so if that’s actually true that’s a bad sign. Nvidia had no problem giving a 10 day lead for the NDA lift prior to the GTX-1080 actually hitting shelves. Plenty of time for reviews to be read.

          • derFunkenstein
          • 4 years ago

          There’s plenty of time for reviews to be read on June 29, too, because it’s not like the cards are going to be available one day only. Just read reviews before buying, duh. :p

          • nanoflower
          • 4 years ago

          Nothing says that AMD will stick to the 29th of June as the NDA date. They can decide at any time that the NDA is over and let reviews be posted before the hardware is available.

          • NoOne ButMe
          • 4 years ago

Good thing that the 670, 680, 980 and 970 all had reviews out before launch. Or if they did, then I don't recall sites giving a future launch date.

          Good products can launch the same date as reviews.

          I will still prefer reviews always be at least 24 hours before first sale or preorder.

            • chuckula
            • 4 years ago

            Those were all paper launches, remember?

            • NoOne ButMe
            • 4 years ago

The 980 was same-day, I believe. The 970, I think, also. I believe the 680 was in short supply but released when reviews went up. Not sure about the 670.

          • _ppi
          • 4 years ago

I am not sure what you would be looking for in a 480 review if you were making an actual purchase decision. At $200 it is going to be some 50% faster than any competition around that price point. Game over, 960.

480 reviews will only be interesting for people like those on this forum, as we will want to compare it to the 10x0 cards. Edit: and to guess whether the 1060 will be better (which is pure speculation without knowing die size and memory bandwidth).

      • rudimentary_lathe
      • 4 years ago

      Yeah, that particular demo did not inspire confidence. If that’s the best way they could find to demo their cards, the final product looks like it could be underwhelming.

      • the
      • 4 years ago

Probably needed two RX 480s to get enough pixel/texel fill rate and geometry throughput to be competitive. Shader throughput likely wasn't the bottleneck, and thus was a point they could show off.

      • nanoflower
      • 4 years ago

There's nothing shady about it. It's an artifact of how Ashes of the Singularity benchmarks: it does three runs, with light, medium, and heavy loads.

Here, I'll quote AMD_Robert (AMD Technical Marketing):
[quote<]
Hi. Now that I'm off of my 10-hour airplane ride to Oz, and I have reliable internet, I can share some insight.

System specs:
CPU: i7 5930K
RAM: 32GB DDR4-2400Mhz
Motherboard: Asrock X99M Killer
GPU config 1: 2x Radeon RX 480 @ PCIE 3.0 x16 for each GPU
GPU config 2: Founders Edition GTX 1080
OS: Win 10 64bit
AMD Driver: 16.30-160525n-230356E
NV Driver: 368.19

In Game Settings for both configs: Crazy Settings | 1080P | 8x MSAA | VSYNC OFF
Ashes Game Version: v1.12.19928

Benchmark results:
2x Radeon RX 480 - 62.5 fps | Single Batch GPU Util: 51% | Med Batch GPU Util: 71.9% | Heavy Batch GPU Util: 92.3%
GTX 1080 - 58.7 fps | Single Batch GPU Util: 98.7% | Med Batch GPU Util: 97.9% | Heavy Batch GPU Util: 98.7%

The elephant in the room: Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed.

At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title. The content being rendered by the RX 480--the one with greater snow coverage in the side-by-side (the left in these images)--is the correct execution of the terrain shaders.

So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead. As a parting note, I will mention we ran this test 10x prior to going on-stage to confirm the performance delta was accurate. Moving up to 1440p at the same settings maintains the same performance delta within +/-1%.
[/quote<]

[url<]https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/[/url<]

As I read the benchmarks, it looks like the benchmark does the three runs and then averages the results together to give the final FPS. This would appear to be one of the runs that AMD did of the dual 480s:

[url<]http://ashesofthesingularity.com/metaverse#/personas/b0db0294-8cab-4399-8815-f956a670b68f/match-details/ac88258f-4541-408e-8234-f9e96febe303[/url<]

        • Zizy
        • 4 years ago

        In the thread, he also says dual GPU is 51% beyond single GPU.
This gives about 15% slower than a 1070 in this test for the single GPU. However, which GPUs were used isn't confirmed, except that they are <=$250 each (as the pair is below $500). I assume the 8GB version, as they would have said "<$400" if the $200 ones were running the show.
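
A quick check of that arithmetic (the 62.5 fps figure is from AMD_Robert's post above; the 1070 figure here is this comment's assumption, not a measured result):

```python
# Single-GPU estimate from AMD's dual-card number and claimed scaling.
dual_fps = 62.5                # 2x RX 480 in AoTS, per AMD_Robert
single_fps = dual_fps / 1.51   # "51% beyond single GPU" -> ~41.4 fps

gtx1070_fps = 48.5             # assumed 1070 result in this test (hypothetical)
print(single_fps / gtx1070_fps - 1)  # ~ -15%
```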

        • Ninjitsu
        • 4 years ago

I dunno; I'm looking at the 1080 results from the same source, and the score for both is about the same.

The utilization angle is confusing me; I'm not sure what it's supposed to imply, since the end result is that both are tied.

    • Prion
    • 4 years ago

    >it’s been quite some time since we’ve seen major leaps in performance in that segment

    That just reminded me that the last time I paid $199 for a new graphics card, it was a Radeon 9800 Pro.

      • muxr
      • 4 years ago

      Same, and what a glorious card that was.

      • KeillRandor
      • 4 years ago

      I’ve still got one of those in my old/2nd computer (AMD Athlon-64 3200). I remember buying it via my friend in NJ, (I’m in the UK), because the exchange rate at the time meant it was still cheaper to order it over there, (including p&p).

    • the
    • 4 years ago

    I have the nagging feeling that they’re holding back. The die likely has 2560 shaders and support for GDDR5X if I had to guess. RX 480X or RX 485 perhaps?

As for clock speeds, we're likely looking at around 1.2 GHz based upon a 5.5 TFLOP figure I've seen posted elsewhere. That's far lower than where nVidia has gotten on 16 nm FinFET, but this board also consumes less power.

The 150W board power doesn't leave room for overclocking either.

      • jts888
      • 4 years ago

      Is this all gut feel, or do you have some math behind it?

      Remember that Polaris 10 is only 230 mm^2 in contrast to GP104’s 312 or so.

        • the
        • 4 years ago

        5.5 TFLOP on 2304 shaders would imply a 1.2 Ghz clock speed from raw calculation.

As for 2560 shaders, it is a guess, but 2304 would kind of be a lopsided native design. Then again, Hawaii really did turn out to have 2816 shaders despite many thinking it natively had 3072. On the flip side, it has been confirmed that Tonga natively has a 384-bit-wide memory bus that hasn't been used in any shipping Tonga product.

        150 W is the max spec for a PCIe card and a single 6 pin connector. If that’s the board config, then there literally isn’t room for overclocking from a power standpoint without running out of spec. Using two 6 pin or a single 8 pin would easily work around this.
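
Running that raw calculation backwards, here's a quick sketch (Python) of the clock implied by a peak-TFLOP figure; the 5.5 TFLOP number is the figure quoted in this thread, not an official spec:

```python
# Clock (GHz) needed for a given peak FP32 figure: FLOPS / (shaders * 2 FLOPs per clock).
def implied_clock_ghz(tflops, shaders):
    return tflops * 1e12 / (shaders * 2) / 1e9

print(implied_clock_ghz(5.5, 2304))  # ~1.19 GHz, i.e. the ~1.2 GHz estimate above
```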

          • Zizy
          • 4 years ago

Why would any of these configs be a lopsided design? The base building block is 10×64, replicated 4 times. 2304 would mean 3 blocks of 12×64 or 4 of 9×64, which are both reasonable as well.

2304 would imply either a 1152 (9) or 1536 (12) SP Polaris 11, with a 576 (9) or 768 (12) SP APU part. The next step up is a 2880+ (9) or 3072 (12) SP Vega 10.
2560 would imply a 1280 SP Polaris 11 with a 640 SP APU and a 3200 SP Vega 10.
Yet again, all these numbers seem completely reasonable to me.

2304 and 3072 might not be as nice and round as 2560 and 3200, but if the building block is more nicely made as a 3×4 or 3×3 array of 64 SPs rather than 2×5, why not? People buy performance, and AMD just wants to squeeze out as many flops as they can.

As for 150W and no overclocking: tons of people and OEMs are interested in a single-6-pin card. OEMs can still easily put an 8-pin or 2x 6-pin on their factory-overclocked cards if they want to.

EDIT: I just realized that 2560 is also 5 blocks of 8×64. AMD used 8×64 on Tonga and Fiji (4x8x64 and 8x8x64), so this one might be the most likely? This implies a 1024-1536 SP Polaris 11 with most likely a 512 SP APU, while Vega could end up at 3072 SP. Again all reasonable options, with the tiny issue that 1536 and 3072 SP are equal to the possible 12×64 configurations, so we couldn't distinguish them from these two parts.
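
To make those block combinatorics concrete, a small sketch (Python); the engine × CU × SP layouts are the hypothetical configurations from this comment, not confirmed AMD floorplans:

```python
# Total stream processors for a few hypothetical GCN layouts:
# (shader engines, CUs per engine, SPs per CU).
layouts = [(4, 9, 64), (3, 12, 64), (4, 10, 64), (5, 8, 64), (4, 12, 64)]
for engines, cus, sps in layouts:
    print(f"{engines}x{cus}x{sps} = {engines * cus * sps} SPs")
# 4x9x64 = 2304, 3x12x64 = 2304, 4x10x64 = 2560, 5x8x64 = 2560, 4x12x64 = 3072
```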

      • chuckula
      • 4 years ago

      If you are AMD: Why would you hold back? This isn’t some 600 mm^2 die where turning off some units for yield reasons is to be expected. Polaris 10 is not a very big part as these things go, and Nvidia already has bigger parts on sale.

      In this situation, you don’t artificially hold anything back. That means either Polaris has everything turned on and that’s what it is or if something is being “held back” it’s not artificial, but instead there are problems where AMD can’t produce enough of the chips with everything turned on.

        • the
        • 4 years ago

Same reason they held back on Tonga. It has 2048 shaders, and it took a while to see that config in a mainstream part (all 2048-shader parts were allocated to 5K iMacs for months). Even then, they never released a version with the full 384-bit-wide memory bus.

        Holding back now does permit them to re-release the chip in a faster configuration down the road.

        • Zizy
        • 4 years ago

They might just not be showing all their cards: releasing only the 480's details, while the actual launch sees a 485 as well.
But yeah, it is more likely a large customer once again grabbed all the 485s and left AMD to launch and sell just the salvaged 480s.

        • BurntMyBacon
        • 4 years ago

        [quote<]Polaris 10 is not a very big part as these things go, and Nvidia already has bigger parts on sale. [/quote<] nVidia is also using TSMC. AMD is using Global Foundries. Of these two companies, which has historically done better with their new process technology?

          • chuckula
          • 4 years ago

          I’m not saying it’s GloFo!

          Then again, AMD was perfectly free to continue using TSMC for Polaris just like they have done for years & years for Radeon products. They chose not to continue that relationship (and I have absolutely no clue why).

            • NTMBK
            • 4 years ago

            Partly because of the Wafer Sales Agreement, and partly because having it on the same process as the CPU saves the cost of having to port it later for the APU.

      • muxr
      • 4 years ago

Different architectures clock differently. We know Nvidia has a higher-clock, lower-IPC design. Tonga's stock clock is 970 MHz; a 30% uplift puts it at around 1260 MHz, which some leaks have identified. That sounds reasonable and is exactly what I would expect from this process transition. 30% is in line with what Nvidia got from Pascal as well.

      • ptsant
      • 4 years ago

      Of course they are. They are strategically targeting the $200 market, where there is no opponent. A slightly bigger version would probably have thinner margins, if it gets too close to 1070 territory.

Now, you should not compare GHz speeds. The GCN architecture is far more complex and has higher "IPC"; all recent nVidia cards clock higher than their AMD competitors. It's far more interesting to compare with the previous generations, which ran at ~1 GHz. So there is a clear improvement there.

    • CScottG
    • 4 years ago

    “AMD says its internal research shows that many people consider the $100-$300 price point a sweet spot for graphics card updates..”

    -and that’s the problem.

    Internal research is largely self-indulgence.

The real "sweet spot" is $99, AND it needs to perform at a level comparable to Nvidia's best value (e.g., 1070 vs. Titan X). Basically an RX??? for $99 that has raw performance as good as an R9 390, with a few "extras" available (like improved VR performance) and the right connectors (HDMI 2.1). THAT would sell well.

    EDIT: and now some sobering numbers..

[url<]http://store.steampowered.com/hwsurvey[/url<]

Please scroll down, click on the Video Card Description line item, and total up the percentage of integrated graphics and older (or just plain lower-performing) graphics cards. That group would almost certainly have a huge adoption rate for a performance card in a price class that's readily accessible to the other 85% of the market.

Really, the target market isn't integrated graphics competing against integrated graphics; it's a matter of moving integrated-graphics users to a faster add-on card. PRICE is KEY! ..and the notion of a $100 card as being crap is astoundingly ripe for change.

      • invinciblegod
      • 4 years ago

Except until graphics improvements are as unnoticeable to the layman as sound card improvements are, there will always be people who will pay $500 for a doubling in performance.

      • Veerappan
      • 4 years ago

      The only people I’ve known who’ve bought graphics cards for under $100 are those who don’t care about gaming performance in the first place.

      I’d concede that $150-200 is generally a good price-point for someone who wants to do some gaming on their system, but $99 is cutting it down a bit much.

The cards that I've bought over the last 8 years have all been in the $150-225 range, and they've all lived good, productive lives. The people I've recommended cards to for some light/medium gaming have mostly fallen into that range as well, and they've been satisfied with what they bought. I'd be hard-pressed to see someone doing more than casual gaming on a $99 card that lasted them more than a year or two at most.

        • CScottG
        • 4 years ago

Yes, for people who don't care about gaming, but who can get gaming performance for the same amount (or slightly more) along with those very important "extras".

        Value isn’t hard to sell.

        As for the enthusiast crowd – do you honestly think that wouldn’t sell? How about in multiples? DX 12/Crossfire with the equivalent of *2* R9 390’s for $198.

      • spanner
      • 4 years ago

      The sub-100 USD space is pretty much reserved for OEM junk these days now that integrated graphics can handle non-gaming/light gaming/media use cases. Meanwhile the sub-200 USD space has seen such excellent value propositions as the GTX 750 Ti, GTX 960, Radeon 7770 and Radeon R9 380. The 200-300 USD space has been a little less well-served lately but that was the home of value classics like the Radeon 4870, Radeon 5850 and GTX 560 Ti, and both the GTX 970 and Radeon R9 390 launched just above that range at 329 USD.

        • CScottG
        • 4 years ago

Yes, such a product at that price point wouldn't be expected. Nobody really expected Nvidia's latest offerings to outperform their prior-gen cards by so much, even when told at launch that they would. Now people are practically salivating at the value, despite the cards being so expensive.

        If you can’t lead at the high-end, then lead at the lower-end – and there is always more money to be had at the lower-end because the market is so much larger. Mass-market PC vendors in particular would be ecstatic with a card like that, maybe even enough to actually improve PC sales.

      • NTMBK
      • 4 years ago

      For the <$99 market, they make APUs.

        • ptsant
        • 4 years ago

        That’s true. Actually, an APU is a better value for 512 GCN cores, if you are building a whole system. You don’t have to pay the overhead of the card.

        If we’re comparing AMD, a dGPU only makes sense if you’re getting 1024 GCN cores.

      • Kretschmer
      • 4 years ago

      Sure, and they should ship each card with a free pie of your choice (while we’re asking for absurdities).

      I prefer apple pie.

        • BurntMyBacon
        • 4 years ago

        Wouldn’t that be an iPie?

      • Freon
      • 4 years ago

      Yeah and a Corvette for $12,995 would sell well, too, but it just ain’t in the cards.

        • Anonymous Coward
        • 4 years ago

        Even a corvette at $13k would only capture a small-ish percentage of the vehicle market.

      • ptsant
      • 4 years ago

      GPUs are getting quite complicated now. I think it’s normal that the price shifts a little bit upwards for what is, essentially, a whole computer in a card. It has long been normal to expect $150 for entry to something that changes your GPU experience. It’s not normal when the high-end becomes $850 (1080 price where I live).

So, the current normal seems to be:
- Below $120: mostly junk
- $120-180: entry point (think R7 370 or 950)
- $180-250: sweet spot (Polaris)
- $300-450: want high end, can't afford it (1070)
- $500+: high end (1080)
- $1000: more money than sense (Radeon Pro Duo, new Titan, whatever)

        • Anonymous Coward
        • 4 years ago

        I expect silicon to get cheaper even as it gets more complicated.

      • TheMonkeyKing
      • 4 years ago

      Srsly_Bro, is this you? If so, you’ve done it pretty well here to get the down votes.

        • CScottG
        • 4 years ago

        Nope. (lol)

        It is however an interesting reaction.

          • NoOne ButMe
          • 4 years ago

Not really. How many computers is Steam installed on that take the survey? How many GPU sales are there per year? How often do enthusiasts upgrade versus the mainstream market?

          Also should be noted one of the two big GPU sales watchers said that the $350+ market is only around 20% of total sales. So AMD’s numbers seem pretty close to reality.

            • CScottG
            • 4 years ago

Given the huge array of video cards presented in the Steam list, in addition to the percentage distribution, the sample size is *vast*: FAR beyond what a GPU manufacturer could generate on their own, and far more realistic as a representation of the *total* market.

Remember, we aren't talking about the current market here; that's the myopic viewpoint AMD is stuck in. Instead we are talking about "tapping into" a very old market that's been languishing for about 20 years and that represents a massive group of potential purchasers. No one takes it seriously because it's "business as usual", and we already know how well "business as usual" is working for AMD.

Basically, AMD's offering covers only a large part of the *current* GPU market, and that will only last until Nvidia rolls out its 1060. Historically that's been 4 months after the "80/70", and that's only provided that AMD actually has product ready for sale, which they don't, so shrink that time period by at least a month. Three months of good sales under the current market model just aren't going to cut it for a corporation that is hemorrhaging cash.

            • NoOne ButMe
            • 4 years ago

The total sold in the market doesn't matter. 84% of people looking to buy new cards shopping in the $100-300 range seems within reason. Probably a little high, but not too far off reality.

And of the cash AMD is hemorrhaging, as you say, how much is GPU-related? Maybe the GPU side has been a drag since the second generation of Maxwell hit, but AMD's problems have mostly been on the CPU side.

The GPU division would lose money counting R&D/SG&A, maybe $100-200M a year. But the big issues for AMD have been CPU and GlobalFoundries related. And SeaMicro was just so stupid.

            • CScottG
            • 4 years ago

It's not about how many are currently looking to purchase new graphics cards; that's the current market. Instead it's about generating a "new" market, i.e. people who otherwise wouldn't be making a new video card purchase, which is in reality FAR larger than the current market.

It doesn't matter how much AMD's GPU section is costing AMD; it's still AMD. In point of fact it's really their debt that is crippling them: they can't meet the payments on the interest with their current profits (it's close to $200 million a year just to service this debt).

It's sad, but their business model is broken on both the CPU/APU and GPU sides. The loss of manufacturing isn't quite as bad as it seems, but their loss of IP (quality R&D for both architecture and software/drivers) and a persistent management focus on trying to be competitive in several markets they can't afford to be competitive in are literally costing them their company.

            • Voldenuit
            • 3 years ago

[quote<]It's not about how many are currently looking to purchase new graphics cards - that's the current market. Instead it's about generating a "new" market, ie. people who otherwise wouldn't be making a new video card purchase, which is in reality FAR larger than the current market.[/quote<]

Yeah, this is why the 750 Ti was so popular; it opened 1080p gaming to a whole new class of users: people with compact mATX cases, OEM desktops, and PSUs without PEG cables.

I'm not sure the 480 will have that same impact; it's the same (if not higher) TDP as a GTX 970. And while the price point is nice, it's unlikely that people with 1440p displays won't already have a decent GPU to go with them. Despite AMD's "VR-capable" marketing, it's not like people who buy $200 graphics cards will be in the market for a $599 headset, and vice versa.

I think the 480 will be a damn fine $200 card; I don't think it will transform the market/landscape, and that's perfectly okay.

    • Laykun
    • 4 years ago

So what's up with the new GPU naming scheme? Are we going to have an RX 480 and an R 480, or will it be an RX 480 and an RX 480 X? It couldn't possibly be the latter, otherwise we go back down that dark path that led to cards like the X1900 XTX.

Also, doesn't 150W seem a bit high for a card of this performance class? Are we seeing the actual differences between 16nm TSMC and 14nm GF? Or is this a difference in architectural decisions?

      • thedosbox
      • 4 years ago

      “Also, doesn’t 150W seems a bit high for a card of this performance class?”

      Yep, especially compared to the 150W GTX 1070.

        • Magic Hate Ball
        • 4 years ago

Keep in mind, the RX 480 has a 6-pin connector, which actually limits it to 150 watts.

The 1070 has an 8-pin, which theoretically limits it to 225 watts.

        I have a feeling we’re seeing high-end conservative guesses from AMD. I don’t think they would put a power connector at the identical power rating as their max TDP.
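
Those ceilings fall straight out of the PCIe power-delivery limits (75W from the x16 slot, 75W per 6-pin, 150W per 8-pin). A trivial sketch of the sums:

```python
# PCIe power-delivery ceilings in watts.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

print("RX 480, slot + one 6-pin:", SLOT + SIX_PIN)      # 150 W
print("GTX 1070, slot + one 8-pin:", SLOT + EIGHT_PIN)  # 225 W
```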

      • NoOne ButMe
      • 4 years ago

Given how badly AMD was beaten on 28nm, the perf/watt difference isn't that crazy anymore. Pascal does perf/watt a bit better. Supposedly Nvidia rejected using Samsung's process for its first FinFET GPUs due to voltage variation; that likely plays a part.

      • meerkt
      • 4 years ago

      It’s going to be a prescription only card.

        • JustAnEngineer
        • 4 years ago

        Presumably, R ten should replace R nine. Like many business types, they’ve chosen to use a Roman numeral.

      • James Harvey
      • 4 years ago

The 'X' after the 'R' just means '10'. I reckon they'll do an RX 480 and an RX 480X. I would have preferred if they had called it the R10 480; it sounds better and is less confusing if they were to make an RX 480X.

        • derFunkenstein
        • 4 years ago

        It “sounds” the same, though, right? Or do you pronounce the X in OS X as the letter?

    • rudimentary_lathe
    • 4 years ago

So my read is that the 8GB version of the RX 480 will be around $240 or so, with the 4GB version coming in at $199. It looked for a moment there like the 8GB version would be $199, which would have made for a pretty strong competitor based on what we know so far.

    I still have my fingers crossed that the RX 480 will provide a solid experience at 1440p. Maybe it has enough overclocking headroom to hit a stock 1070?

      • LocalCitizen
      • 4 years ago

You win: 4GB version for $199, and 8GB version for $229.

      • Ninjitsu
      • 4 years ago

I somehow doubt the "solid experience at 1440p" part, since that's basically what the GTX 1080 is for. It should be really good at 1080p, and should manage 1440p well in CrossFire, provided AMD's drivers hold up. I mean, heck, it really puts a question mark on a $379/439 GTX 1070 (assuming the performance really holds up).

        • Spunjji
        • 4 years ago

        All depends on what you mean by “solid performance”. I get what I’d call that at 1440p from my GTX 970 – some tweaking needed here and there, sure, but I can easily get ~60fps in most games with decent settings and a mild overclock.

        • rudimentary_lathe
        • 4 years ago

        The early reviews I’ve read of the 1070 suggest it provides a good experience in most games at 1440p at stock clocks. Hopefully the RX 480 is close, especially if it can overclock a solid 10-20%.

    • ronch
    • 4 years ago

    No Zen news at Computex from AMD? One would expect them to be putting out some official stuff by now, this close to launch.

    EDIT – When I posted this I couldn’t find anything about Zen being shown at Computex. Cut me some slack, guys.

      • tipoo
      • 4 years ago

      Half a year is kinda far. Makes sense to focus on Polaris.

        • ronch
        • 4 years ago

        AMD used to make noise roughly a year before major CPU launches.

          • derFunkenstein
          • 4 years ago

          Well, they “made noise” about Zen [url=https://techreport.com/review/28228/amd-zen-chips-headed-to-desktops-servers-in-2016<]over a year ago[/url<], so it's not like they're waiting to spring it on us.

          • derFunkenstein
          • 4 years ago

          OK so they’re talking about Zen RIGHT NOW HURRY GO LOOK

      • chuckula
      • 4 years ago

      I watched the live stream.

      They showed a canned demo where they claimed Zen worked well enough to edit a lame marketing video and then spew the lame marketing video onto a screen (where a GPU did most of the hard work anyway).

      No benchmarks at all.

      No new information other than this rather negative tidbit: Lisa Su confirmed that Zen has *not* started sampling to customers yet. Let me reiterate what I’ve said before: Zen will *not* be on sale in 2016, and right now we are talking Spring 2017, not January.

        • ronch
        • 4 years ago

        Wow. No samples yet? Sure dampened my enthusiasm.

        • the
        • 4 years ago

        Well if they follow the time line of [url=https://techreport.com/news/26345/amd-seattle-is-sampling-on-track-for-fourth-quarter-release<]sampling[/url<] before [url=https://techreport.com/news/29602/say-hello-to-seattle-with-amd-opteron-a1100-arm-soc<]release[/url<] that Seattle took, we'll be seeing Zen in late 2018.

          • chuckula
          • 4 years ago

          Hopefully Seattle was a (very bad) outlier. However, 9 months from initial samples to full launch for a very large and very complex part from AMD isn’t unrealistic. 6 months for an early 2017 launch is highly optimistic (probably too optimistic) and I could pretty much guarantee 2017 when Lisa Su said that Zen taped out earlier this year (and not in 2015).

      • JustAnEngineer
      • 4 years ago

      [url<]https://techreport.com/news/30223/amd-teases-zen-silicon-at-its-computex-2016-press-conference[/url<]

      • Srsly_Bro
      • 4 years ago

You didn't look to see if news was posted, said no news was posted, and then got mad when we called you out for not looking before you posted that no news was posted about Zen?

        • ronch
        • 4 years ago

        I did look. Googled ‘AMD Zen computex’.

    • LocalCitizen
    • 4 years ago

2 x RX 480 < $500

      • LocalCitizen
      • 4 years ago

      while Jeff is under NDA (that may or may not exist), it does not prevent us gerbils from guesstimating.

Anyone else find the name RX 480 strange? And do we really think there will be only one card based on Polaris 10?

I think the RX 480 is the upper-end product, going for $250, with an R 480 selling for $200.

        • nanoflower
        • 4 years ago

The RX 480 is the $200 product (likely with 4GB of memory). It's very likely there will be higher-performing products, but whether that is just a minimal performance boost (an RX 480X) or something more significant (an RX 490(X)) is unknown. Perhaps a full P10 with ~2560 stream processors that gets closer to competing with the GTX 1080?

          • AJSB
          • 4 years ago

RX 480 4GB is $199.
RX 480 8GB is $229.
…and I bet there is (or will be) an RX 480X2 card with dual GPUs (I wonder whether they'll stick 16GB of RAM on it), because PowerColor has already shown the water-cooling solutions that will be used on Polaris cards, and among them there is one specifically for a card with dual GPUs.

      • Deanjo
      • 4 years ago

      2 x RX480 = 2 x headaches + 50% of games playing properly with crossfire.

        • 223 Fan
        • 4 years ago

        Purportedly DX12 obviates the need for Crossfire and/or SLI. Or did I misread the DX12 specs?

          • Deanjo
          • 4 years ago

          Yes it could be done purely with DX12 but that as well has some huge hurdles.

[url<]http://wccftech.com/dx12-nvidia-amd-asynchronous-multigpu/[/url<]

So it does not "obviate" the need for CrossFire/SLI, and it is still heavily reliant on the developer to support it. A single big card is still going to give you the performance and steady results without all the gotchas of multi-card setups.

            • 223 Fan
            • 4 years ago

            From what I read here and elsewhere CF and SLI are headaches, though my 8800 GTX x 2 SLI setup never did give me any issues. So if DX12 is even an incremental improvement, and that’s a big if, won’t it supplant CF and SLI for DX12 games?

            • Deanjo
            • 4 years ago

Doubtful. With SLI/CrossFire, the multi-GPU setups are optimized at the driver level for particular SKUs and particular games. Even with AMD and nVidia having all the necessary resources at hand to pull off such setups, they limited the capability to the same family of GPUs for simplicity's sake. Lucid's Hydra engine tried to do the same thing at a driver/ASIC level, ran into several issues, and ran with very few titles showing a performance gain.

The DX12 route requires the optimization to be done at the application level, and with the vast number of possible card combos it becomes a very hard thing to optimize an application for. Even within the same GPU family, the multi-card optimizations can vary widely. We have seen for years developers struggling to get proper performance out of setups with just one card, and we still at times see a big separation between what are supposed to be competing SKUs from multiple vendors.

            DX12 multi-gpu support sounds great in theory but it will require a ton of extra work done by the developer (read expensive) and for little to no financial gain for them.

            • 223 Fan
            • 4 years ago

            What about at the game engine level? It won’t be worth it for individual developers but perhaps it would be worth the effort for the Tim Sweeneys of the world.

            • Deanjo
            • 4 years ago

            Game engine level isn’t going to cut it. We already have a ton of games out there already using the same engine but still requiring a crapload of individual tweaks to the drivers addressing each game individually.

      • Freon
      • 4 years ago

      CF and SLI are not something I’d ever recommend to someone.

        • nanoflower
        • 4 years ago

        I agree with that. I don’t think even AMD expects people to do that in any great number but it was a way to only use the 480 and still show how it compares to Nvidia’s new products. Hopefully there is a RX 490 coming that will match the performance of that 2x RX 480 setup.

          • juzz86
          • 4 years ago

          Wouldn’t RX 490 just be Fiji?

          A 290/390 does not perform as two 280/380s. I think RX 490 doubling 480 performance is a bit of a stretch, only because there is a whole ‘nother product you can slip in there.

      • slowriot
      • 4 years ago

      I think it could be quite a popular setup. There are risks and potential headaches but I’m hopeful AMD realizes the importance of a high quality CF experience and makes the appropriate improvements.

        • Firestarter
        • 4 years ago

        [quote<]... I'm hopeful ...[/quote<] I'm not. Both Nvidia and AMD have had years to prove that this concept is viable and all I see are people who are fed up with it from trying to use it

      • Concupiscence
      • 4 years ago

      It’d be a cheap option for a GPU computing box, for sure. Gaming would be much more hit-or-miss.

    • ronch
    • 4 years ago

    Depending on what Nvidia puts out and how much, this could very well be my next GPU upgrade target.

    • derFunkenstein
    • 4 years ago

We're pretty sure that Polaris's GPU architecture is an evolution of GCN, right? So it's got about 1/8 more compute units than Tonga, plus hopefully a decent bump in clock speed (otherwise this is an abject failure, since it draws just a bit less power than a 380X). It's also got a ~45% bump in memory bandwidth over the 380X (8 GT/s vs 5.5 GT/s).

For fun, I'll give it a 1400MHz max boost speed, which would be around 44% over Tonga, and right in line with the boost that Nvidia got for the 1070 version of Pascal. That works out to around 62% more compute power, ~45% more memory bandwidth, and, assuming 32 ROPs (never a given, I admit), a linear ~44% bump in fill rate. Well, now you're pushing GTX 970 speed for $200.

    That’s a lot of guesswork and assumptions, but if anything like that comes close to passing, AMD will have a pretty solid lock on the high-volume dGPU market. That’s good news.

    /me awaits flagrant flaming
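
To make the guesswork reproducible, here's the arithmetic as a sketch (Python); the 1400 MHz boost clock is the hypothetical above, and the 380X baseline uses the figures quoted in this comment:

```python
# Back-of-the-envelope scaling vs. an R9 380X (Tonga).
sp_380x, clk_380x, mem_380x = 2048, 0.970, 5.5  # shaders, GHz, GT/s
sp_480,  clk_480,  mem_480  = 2304, 1.400, 8.0  # 1400 MHz boost is a guess

compute_gain   = (sp_480 * clk_480) / (sp_380x * clk_380x) - 1  # ~+62%
bandwidth_gain = mem_480 / mem_380x - 1                         # ~+45% (both 256-bit)
fillrate_gain  = clk_480 / clk_380x - 1                         # ~+44%, assuming 32 ROPs on both

print(f"compute +{compute_gain:.0%}, bandwidth +{bandwidth_gain:.0%}, fill rate +{fillrate_gain:.0%}")
```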

      • rudimentary_lathe
      • 4 years ago

      970 performance for $200 is not terribly exciting, at least for me. GTX 980 or Fury performance for $200, that definitely whets my appetite.

        • derFunkenstein
        • 4 years ago

        Everything I’ve laid out made no assumptions about architectural improvements (assuming the inverse, a direct 1:1 comparison per clock). If Polaris has anything over GCN 1.2 (likely) then pushing 980 (or even a tiny bit higher) performance for $200 is a possibility.

          • Dudeface
          • 4 years ago

          Based on the single precision numbers, can we reasonably expect this to fall around the R9 390/390X level of performance? And around 10-20% below the 1070?

            • derFunkenstein
            • 4 years ago

            The 28nm GCN cards were all hampered by something other than single-precision compute performance. We’re all pretty sure about that, and have been for some time. I’m hopeful (but not overly optimistic) that they’ve addressed the bottlenecks after recognizing that the problem wasn’t in the compute.

            • Dudeface
            • 4 years ago

            A good point, and one we likely won’t know the answer to until we have cards in hands (or AMD decides to release a deep dive on the GCN4 architecture).

            As you say, if we make the reasonably safe assumption that some progression has been made with GCN4 (as opposed to regression, which seems unlikely), we should therefore expect better games performance per quoted TFLOP of compute. Reduced memory bandwidth compared to 390X needs to be taken into account, but better memory compression may help here too, if that is something coming with Polaris.

            All things considered, my prediction is performance around 390X/GTX980 levels. Maybe a bit more.

A mid-range card which equals or betters the performance of the previous gen's high-end card? It's like the good ol' days!

            • nanoflower
            • 4 years ago

            The benchmarks that have been spotted would suggest you are correct. I guess we will have to wait for reviews or more information from AMD to confirm that.

        • EzioAs
        • 4 years ago

        I disagree. The mid-range segment hasn’t seen a lot of improvements since the GTX 600 series and Radeon HD 7000 series. Mid range products of later generations didn’t really give much incentive for people to upgrade (upgrade is the keyword here). I mean sure, for the upcoming gen, more performance is even better but getting at least the performance of a GTX 970 at $200 is still a great value for a mid-range upgrade. I know I’ll be looking at Polaris or something equivalent from Nvidia as an upgrade from my GTX 660.

          • derFunkenstein
          • 4 years ago

          Yep. If you had $180-200 to spend and had a GTX 760, you’re JUST NOW getting something that’s faster than what you have, but still not really worth jumping on.

            • willmore
            • 4 years ago

This HD 7850 owner is vigorously shaking his head 'yes'.

            Finally. In time for my fall GPU buying season.

          • rudimentary_lathe
          • 4 years ago

I don't know; my ~$200 card (that I got for less on sale) from a couple of years ago already provides a very good experience at 1080p. I'm not interested in buying another 1080p card. I of course welcome lower pricing, and for those without a decent 1080p card already this looks to be a no-brainer.

When I can get comparable performance at 1440p for the same price I paid for 1080p, I'll look to upgrade. I'm still holding out hope this card will end up doing that, but I need to see impartial benchmarks first. The leaked chart from videocardz suggests the RX 480 will deliver better-than-980, just-below-Fury performance. If that ends up being the case, I'm sold.

        • anotherengineer
        • 4 years ago

        Well it makes me hungry.

Here in Canada lots of GTX 970s are still around $450!! So a Radeon 480 for $230 with DP 1.3/FreeSync, HDMI 2.0, 14nm, etc. sure sounds good to me.

          • rudimentary_lathe
          • 4 years ago

After exchange, the 4GB will be closer to $260, and the 8GB closer to $300. That's assuming the AIBs stick with the MSRP.

          Agreed though, it’s odd to see the 970 still going in the $400-$450 range. I guess a lot of people just don’t know what’s going on with the transition to 16/14nm.

      • chuckula
      • 4 years ago

Raj said 5.5 TFLOPS of compute performance on stage, which puts it between an R9 390 and an R9 390X. You can work the number of GCN units and clock speeds to arrive at 5.5 TFLOPS any way you want.

        • Leader952
        • 4 years ago

        [quote<]Raj said 5.5 Tflops of compute performance on stage.[/quote<] The table in this article shows: Peak single-precision compute performance over 5 TFLOPS. Does Peak mean boost clocks or base clocks? Also why "over 5 TFLOPS" and not 5.5 TFLOPS?

          • derFunkenstein
          • 4 years ago

          Surely that means at boost. Nobody gives away lower numbers when they have peak figures to share.

        • the
        • 4 years ago

Spec sheet says 2304 ALUs for the RX 480, which would imply a ~1.2 GHz clock to reach that 5.5 TFLOP figure.

        • derFunkenstein
        • 4 years ago

But we all know that GCN parts were handicapped somewhere other than the almighty TFLOP. Go back to the post-Fury X podcast; Scott says so in as many words. As I wrote elsewhere in this thread, I hope (but am not overly optimistic) that the actual bottlenecks were addressed.

          • NoOne ButMe
          • 4 years ago

Indeed. I see that most GCN cards (basically everything but Fiji) are around 15% slower at equal FLOPs (in-game FLOPs, as Nvidia's boost normally seems to exceed the rated boost). Async seems to get about two-thirds of the missing FLOPs back for GCN 1.1/1.2 parts.

Fiji is super bad, though. It performs as if it had something like 30% fewer FLOPs. If big Vega is just Fiji fed properly, then at the same clocks we could see a 30% performance boost in most games.

      • NoOne ButMe
      • 4 years ago

The boost in Pascal was almost all from Nvidia optimizing the architecture for higher clocks, enabled by the FinFET process lowering power drastically.

      • nanoflower
      • 4 years ago

1266MHz seems to be the current speed according to this image:

[url<]http://www.techpowerup.com/223043/amd-radeon-rx-480-clock-speeds-revealed-clocked-above-1-2-ghz[/url<]

Whether that's the clock AMD will actually ship at is unknown, but it fits with the various benchmark leaks from the SiSoft Sandra, Ashes of the Singularity, and 3DMark benchmark databases.

      • AJSB
      • 4 years ago

      I heard that base clock is 1266MHz, so 1400MHz for boost clock seems possible.

      • PrincipalSkinner
      • 4 years ago

      I’d say it’s nothing more than a Tonga die shrink.

        • NTMBK
        • 4 years ago

        Except for having way more shaders, updated shader design, updated geometry processor, updated cache, updated memory controller, updated multimedia decode, updated display engine, updated command processor… oh yeah, it’s totally just a Tonga die shrink.

          • nanoflower
          • 4 years ago

LOL. It does seem like AMD put more work into changing/improving the design than Nvidia. Though Nvidia putting its effort into maximizing clock rates worked out well for them in getting a high-performance part. Whether AMD or Nvidia is getting better yields is the unknown factor.

            • anotherengineer
            • 4 years ago

I don't think Nvidia focused on clocks; I think that is more than likely a result of different silicon on a different process from a different fab.

            • the
            • 4 years ago

The thing is that Pascal wasn't supposed to exist in the first place. With TSMC's 20 nm not being good for GPU designs, nVidia had to shift some products around. nVidia divided up the feature sets of the original Maxwell and Volta to spread across three generations instead of two. What could reasonably be done on 28 nm was moved into the Maxwell we eventually got. What didn't make it was moved to Pascal, along with HBM support, which was originally targeted for Volta.

            The big redesign for nVidia is going to be with Volta.

            • NoOne ButMe
            • 4 years ago

At last check (many months ago) Samsung had the best yields, followed by TSMC, with GlobalFoundries at the back.

Although for 232 vs. 314 mm² chips, I would think AMD has the better yields.

          • PrincipalSkinner
          • 4 years ago

          Taken from AT
          “RTG was upfront in telling us that on average the node shrink will probably count for more of Polaris’s gains than architecture improvements, but this will also be very workload dependent.”
          So yeah, “totally” mostly a die shrink.

    • appaws
    • 4 years ago

    There has been a lot of doom and gloom lately about AMD. Justified really. Working the low end seems a decent strategy.

    But I wonder if the “mind share” generated by capturing the high end of the spectrum has too big an influence over even lower level consumers. The greatness of the 1080/70 will have a lot of influence, even over people who would never spend that much on a graphics card.

    “Haven’t you heard…Nvidia cards are the best. Look at these 4K benchmarks,” he explained to his friend who was deciding to buy a 1060 or an RX480.

      • ronch
      • 4 years ago

      Yep. You ever wonder why the big car makers participate in F1? They can use technology derived from their F1 development and at the same time it really gives them some serious prestige points.

        • travbrad
        • 4 years ago

        I agree about the prestige points but when it comes to technology it tends to go the other way around these days in F1. The current hybrid/turbo style engines in particular were in their road cars long before they were in F1.

        A lot of the technology in F1 has been banned as well, so in some ways road cars are actually more advanced (ABS, traction control, active suspension/stability control, etc). A huge portion of the budget for F1 teams goes into creating downforce with more efficient wings/underbody/diffusers which is completely irrelevant to most road cars.

        /END pedantic F1 rant

        • Deanjo
        • 4 years ago

        Car manufacturers participate also to increase sales. There is an old saying in the car world “Win on Sunday, sell on Monday”.

        [url<]http://www.motorauthority.com/news/1047945_study-win-on-sunday-sell-on-monday-still-holds-true[/url<]

      • NovusBogus
      • 4 years ago

      Yes, a mainstream card that people actually buy is a lot more useful than repeatedly failing at a halo product that only a handful of hardcore enthusiasts even know about. Most buyers don’t read that much into the reviews and will just go with whatever seems decent at the time, so as long as it’s not totally smoked by the 1060 it’ll be a solid product.

      • Spunjji
      • 4 years ago

      I think you’re sadly right. Some people just don’t get how one end of the market has very little to do with the other, but their little fragments of knowledge still get passed off as tech genius to their unsuspecting friends.

      • orik
      • 4 years ago

That's called the halo effect, and it's why cards like the Titan and SLI-on-a-stick came to be.

      • Kretschmer
      • 4 years ago

      The engineering that wins you the high end will also win you the low end. Unless your competitor is so desperate to offload its silicon that it does so without margins.

    • tipoo
    • 4 years ago

    There’s Raja on stage. Now throw out Wasson!

    [url<]http://i.imgur.com/vI8DM0u.gif[/url<]

      • Longsdivision
      • 4 years ago

No need; Scott's probably in the DJ booth dropping his mixtape down on the latest DJ DAMAGE technicals of techno… and messing with the lights.

      [url<]https://techreport.com/blog/27367/finally-light-bulb-tesla-tech-gives-leds-a-worthy-rival[/url<]

    • DrDominodog51
    • 4 years ago

    This generation battle escalated quickly. Until TR’s review of both teams, I wait! If by some chance this matches full Hawaii, I will definitely buy one. It would complement the 4690k I ordered this week well.

    • mark625
    • 4 years ago

    So is the RX 480 based on Polaris 10 or Polaris 11? I’m guessing 10, and Polaris 11 will be the R 470 line. If so, that leaves room at the top for an RX 490 (GDDR5X) and RX Fury (HBM2).

      • Voldenuit
      • 4 years ago

      Yeah, it’s going to be Polaris 10, because it’s unlikely that 11 will have a 256-bit memory bus, for cost reasons.

      I’m disappointed AMD didn’t make Polaris 11 in time for the OEM annual laptop refresh; instead, we got a bunch of rebadged GCN 1.0 (!) parts for the M400 line.

    • EzioAs
    • 4 years ago

    There are two (at least) Polaris-based GPUs, right? Is this the faster one or the slower one?

      • Rurouni
      • 4 years ago

      The faster one. The slower/smaller one would be for notebooks and/or lower end cards.

      • Hattig
      • 4 years ago

As far as people can ascertain, the RX 480 family:

4GB – 2304 SP – 1266 MHz – $199
8GB – 2304 SP – 1266 MHz – $229 (?)

and there is a $299 SKU that hasn't been talked about apart from the price.

8GB GDDR5X? – 2560 SP? – >1266 MHz? – $299

And then in a short while, the likely 2048 SP Polaris 10s will appear in the RX 470 family.

4GB – 2048 SP – 1150 MHz? – $169?

And then Polaris 11's SKUs will slot in below, and nothing is known about those configs.

        • 0x800300AF
        • 4 years ago

In order to justify a 50% markup, you're going to need to supply at least 25-35% more performance (unless you're nV, where +15-20% perf = +60% $; AMD is not nV).

I don't see how adding 11% more shaders/CUs, even with a mild (15%) clock increase, would get close to that. Instead they'd need either substantially more CUs or a much higher clock (1600 MHz+).

The problem with going to a higher clock is the need for more power, thus eating into the perf/W target. If 14nm proves to be an excellent overclocker, able to achieve such clocks with a minimal voltage bump, then unless AMD builds a limitation into the 480, there is nothing really to prevent massively OC'd 480s. This MAY be where the 150W limit (75W from the PCIe slot, 75W from the 6-pin) comes into play, though third parties would surely work around it simply by adding a single 8-pin (225W).
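
For reference, the raw-throughput math on that hypothetical bigger SKU (all figures are this thread's speculation, not announced specs):

```python
# Announced RX 480 config vs. a hypothetical fully enabled Polaris 10.
sp_480, clk_480 = 2304, 1.266         # SPs, rumored clock in GHz
sp_x,   clk_x   = 2560, 1.266 * 1.15  # +11% shaders, +15% clock (speculative)

gain = (sp_x * clk_x) / (sp_480 * clk_480) - 1
print(f"raw throughput gain: +{gain:.0%}")  # ~+28%, inside the 25-35% band
```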

          • NoOne ButMe
          • 4 years ago

The 4870 only had about a 15-20% lead over the 4850, with a 20% clock-speed boost and 82% more bandwidth.

Make the RX 480X/RX 485/RX 490 11% more CUs and 15% more clock speed, and it should be able to get 20% more performance overall.

AMD might try to price the more expensive Polaris at $270/$300 for 4GB/8GB, which could make that more bearable.
