AMD sets a new course for Radeons with its Polaris architecture

At AMD’s Radeon Technologies Group (RTG) tech summit last month, corporate vice president, product CTO, and corporate fellow Joe Macri described fabrication process advances as being “like Christmas” for the company’s engineers. The last time the company took the wrapping paper off a graphics card benefiting from a process tech advance was late 2011’s Radeon HD 7970 and its 28-nm Tahiti GPU. Since then, GPUs from both AMD and Nvidia have been fabricated on 28-nm process technology.

That long period of process stagnation is coming to an end. Around the middle of this year, AMD will release GPUs built with a new architecture it’s calling Polaris, and those chips will be made using FinFETs, a type of 3D transistor. FinFETs bring some tantalizing improvements for performance and power efficiency to the table. Judging by the company’s excitement over Polaris, FinFETs are just the kind of Christmas present the company’s engineers have waited so long for.

Before we talk about FinFETs and their implications for Polaris’ performance, though, let’s examine some of the guiding lights that AMD is using to design its future products.

Charting the course

AMD’s Raja Koduri, senior vice president and chief architect for the RTG, says the division is working on graphics products that can deliver both “fast pixels” and “deep pixels.” The fast-pixel problem comes from the demand for graphics cards to deliver more pixels at ever-greater speeds, a challenge made more difficult by the demands of present and future VR hardware.

For example, Koduri says twin 4K displays in a head-mounted device will require addressing 16.6 megapixels per refresh. If that figure isn’t intimidating enough, the company would eventually like to deliver enough pixels to drive a VR headset with twin 16K displays for a “truly immersive VR experience.”
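The pixel math behind those figures is simple enough to check on the back of an envelope. Here's a quick sketch (assuming 3840x2160 per "4K" panel and 15360x8640 per "16K" panel, which is our reading of those resolutions rather than AMD's stated numbers):

    # Per-refresh pixel counts for twin-display head-mounted devices.
    # Panel resolutions are assumptions: 3840x2160 for "4K", 15360x8640 for "16K".
    def pixels_per_refresh(width, height, panels=2):
        return width * height * panels

    twin_4k = pixels_per_refresh(3840, 2160)
    twin_16k = pixels_per_refresh(15360, 8640)

    print(f"Twin 4K:  {twin_4k:,} pixels (~{twin_4k / 1e6:.1f} megapixels)")
    print(f"Twin 16K: {twin_16k:,} pixels (~{twin_16k / 1e6:.0f} megapixels)")
    # Twin 4K:  16,588,800 pixels (~16.6 megapixels)
    # Twin 16K: 265,420,800 pixels (~265 megapixels)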

For that to happen, Koduri says we’ll need an increase in graphics processing performance of at least a thousand times that of today’s chips. Furthermore, he believes that photo-realistic rendering for such a device would require graphics processors to deliver over one million times the performance of the hardware we have now. Koduri estimates that we’d need over 20 years of performance increases from Moore’s Law alone to reach that point, and he isn’t content with waiting that long.
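Koduri's estimate is easy to sanity-check with some rough arithmetic. Assuming performance doubles every two years (a naive reading of Moore's Law, and our assumption rather than Koduri's exact math), the waiting times for those two targets work out roughly like this:

    import math

    # Years of naive Moore's Law scaling needed to reach a given speedup,
    # assuming performance doubles every two years (our assumption).
    def years_to_reach(speedup, years_per_doubling=2.0):
        return math.log2(speedup) * years_per_doubling

    print(f"1,000x speedup:     {years_to_reach(1_000):.1f} years")      # ~19.9 years
    print(f"1,000,000x speedup: {years_to_reach(1_000_000):.1f} years")  # ~39.9 years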

The RTG believes that bridging this enormous gap is going to require new, more efficient rendering approaches born from techniques like light-field rendering, texture-space rendering, and foveated rendering in tandem with more powerful hardware. Koduri hopes that AMD’s GPUOpen initiative, intended to foster more collaboration and information-sharing among game developers, will help those devs to begin cracking some of the challenges of that grand goal on the software side of the equation.

We’ve already seen some of the company’s “deep pixels” ambitions in its plans for FreeSync and ultra-high-definition content on Radeon graphics cards this year, and Polaris chips are one way the company will begin to deliver on those promises.

We aren’t going to be diving deep into the microarchitecture of the Polaris GPU today, but we do know a few things about it. Polaris includes a number of fourth-generation GCN graphics cores with improvements like primitive discard accelerators, hardware schedulers, and instruction pre-fetch. AMD also says it’s improved overall shader efficiency in fourth-gen GCN, and Polaris chips will get some form of memory compression, too. Polaris’ display block will support features like HDMI 2.0a and DisplayPort 1.3, and its multimedia features will include H.265 Main 10 decode at resolutions up to 4K, and 4K H.265 encoding at up to 60 FPS.

Getting finny

AMD says Polaris chips have been designed specifically for fabrication with FinFETs. Unlike planar transistors that are laid down in flat layers, a FinFET is built by “wrapping” the transistor’s gate around a three-dimensional fin of silicon. For some more information about the general principles behind FinFETs, check out this Intel presentation (PDF) and video.

Normally, we’d describe those transistors using a feature size like 28 nm or 14 nm, but AMD didn’t share the exact processes it’s targeting for Polaris production at its summit. Even so, FinFETs have a lot to recommend them. The increased surface area of the FinFET’s gate-channel interface offers much more control over the transistor. AMD says this design means a FinFET transistor can switch on and off faster, carry more current when it’s operating, and consume less power when it’s not.

FinFETs have other useful characteristics that make them appealing versus planar transistors. Joe Macri told us that FinFETs are easier to build, easier to characterize, and more uniform than their 2D cousins. That uniformity is important. Macri noted that the performance of a chip is set by its slowest device, and power usage is set by its leakiest. FinFETs exhibit higher overall performance and lower overall leakage compared to 28-nm planar silicon, as illustrated by the hypothetical distributions of the “clouds” above.
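To get a feel for why that uniformity matters, consider a purely hypothetical sketch (the distributions and numbers below are invented for illustration and are not AMD data): a chip's clock is limited by its slowest critical path, while its static power adds up across every leaky device, so a tighter spread pays off on both fronts.

    import random

    random.seed(42)

    def simulate_chip(mean_delay_ps, delay_spread_ps, mean_leak_uw, leak_spread_uw, n=100_000):
        """Crude illustration: max clock is set by the slowest path,
        static power is the sum of every device's leakage."""
        delays = [random.gauss(mean_delay_ps, delay_spread_ps) for _ in range(n)]
        leaks = [max(0.0, random.gauss(mean_leak_uw, leak_spread_uw)) for _ in range(n)]
        f_max_ghz = 1e3 / max(delays)   # slowest path limits the clock
        leakage_w = sum(leaks) * 1e-6   # total static leakage in watts
        return round(f_max_ghz, 2), round(leakage_w, 2)

    # Invented numbers: a wide "planar-like" spread vs. a tight "FinFET-like" one.
    print(simulate_chip(100, 15, 5.0, 2.0))  # wide spread: lower clock, more leakage
    print(simulate_chip(95, 5, 3.0, 0.8))    # tight spread: higher clock, less leakage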

All told, AMD promises that graphics chips built with FinFETs will offer a big performance-per-watt increase. The company says it can keep the performance of a given FinFET chip the same versus a 28-nm planar chip while reducing its power consumption by up to 50%-60%. That performance-per-watt improvement means Polaris GPUs could provide what AMD calls “better-than-console” performance in devices like thin-and-light gaming notebooks and small-form-factor desktops. 
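Taken at face value, that claim implies a hefty performance-per-watt multiplier. Here's a quick sketch of the arithmetic (our back-of-the-envelope reading, not AMD's own figures):

    # Performance-per-watt implied by AMD's claim, taken at face value:
    # same performance, 50-60% less power than a 28-nm planar part.
    baseline_perf, baseline_power = 1.0, 1.0

    for power_reduction in (0.50, 0.60):
        new_power = baseline_power * (1 - power_reduction)
        perf_per_watt_gain = (baseline_perf / new_power) / (baseline_perf / baseline_power)
        print(f"{power_reduction:.0%} less power -> {perf_per_watt_gain:.1f}x perf/W")
    # 50% less power -> 2.0x perf/W
    # 60% less power -> 2.5x perf/W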

The company did show working Polaris silicon at the RTG event, and there does appear to be something to its performance-per-watt claims. Although we weren’t permitted to see any graphics cards with Polaris chips up close, the company brought out a pair of small-form-factor systems built with Intel motherboards and Core i7-4790K CPUs to show what one form of its new chip can do.

Those PCs appeared to be inside tiny Cooler Master Elite 110 cases. One held a Polaris graphics card, while the other used an Nvidia GeForce GTX 950. The Polaris PC used about 90W to run Star Wars Battlefront at 1080p with a 60-FPS cap, while the Nvidia-powered system needed about 153W for the same load. 
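Keep in mind those are wall-socket numbers for the whole system, not the cards alone, so the card-to-card gap would look different once shared platform power is subtracted. Here's a rough pass over the figures (the 50-W platform estimate below is our assumption, not a measured number):

    # Rough look at AMD's demo numbers (total power at the wall for each system).
    polaris_system_w = 90
    gtx950_system_w = 153

    delta_w = gtx950_system_w - polaris_system_w
    print(f"Delta: {delta_w} W, or {delta_w / gtx950_system_w:.0%} less total system power")
    # Delta: 63 W, or 41% less total system power

    # Hypothetical: if the rest of each platform drew ~50 W (an assumption,
    # not a measured figure), the card-only comparison looks even more lopsided.
    platform_w = 50
    card_ratio = (polaris_system_w - platform_w) / (gtx950_system_w - platform_w)
    print(f"Estimated Polaris card power vs. GTX 950 card power: {card_ratio:.0%}")  # ~39%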

We wouldn’t apply or recommend a 60-FPS cap in games that don’t require one, so it’d be interesting to see how AMD’s fight would play out without that restriction in place. Even so, that small peek at Polaris’ performance has us eager to dig deep when those next-gen Radeons arrive later this year. That day can’t come soon enough. 

Comments closed
    • BaronMatrix
    • 4 years ago

The most interesting thing about this is misdirection… Because CPUs have been at 32nm, the increase in potential for a general-purpose process is even greater… More transistors are used for flushes, cache misses, SMT, etc…

Zen may be the fulfillment of what AMD marketing said (at 32nm)… “We could build it, but you couldn’t afford it”… A Zen APU at 14nm, similarly configured with dense packing like Carrizo, would be CRAZY…

    Not to mention what Nano will be… They can probably go to 1024 or 1280 shaders for the high end APU (65W)… Perf\Watt is complex… But AMD should be ready for the handheld gaming space… Much less the convertible… Once I upgrade my 8800P laptop w\SSD, I can say how it performs compared to my FX8370…

      • BaronMatrix
      • 4 years ago

      I just hope they haven’t abandoned Bobcat… At 14nm it could have SMT… Not that I don’t think Zen can go .5W…

        • TheRealSintel
        • 4 years ago

The entire cat team left AMD after Jaguar.

    • stefem
    • 4 years ago

I know it’s just a bunch of PR statements, and I also remember some of AMD’s “supposed” technical guys expressing equally bogus statements in the past, but…

If you declare that 14nm FinFETs alone bring a 50%-60% reduction in power consumption compared to 28nm, and you then show a comparison between your “next gen” GPU (made on 14nm) and a “current gen” GPU (made on 28nm) with similar performance, and despite your particular testing methods the power meter shows “only” a 58.8% improvement over the competition…
Does this mean that your new architecture performs just on par in performance-per-watt with the current one from your competitor? Where are the improvements that come from the new architecture?

      • ImSpartacus
      • 4 years ago

      “Fiji will be an overclocker’s dream.”

      #JustAMDPRStuff

        • stefem
        • 4 years ago

Yea, I forgot that, it was epic!
I don’t understand why some people are blaming NVIDIA’s PR, come on… you can’t tell your customers that Santa is coming with a bag full of gifts without expecting them to be disappointed when they wake up.
Hype is a double-edged sword…

      • GTVic
      • 4 years ago

      Your second paragraph says “59% improvement”, third paragraph equates that to “on par”? Quite a trick. And you also somehow manage to imply that 59% is not compatible with the 50%-60% claims. Reaching back to my Grade 3 math skills, 59 is greater than 50 and less than 60.

      Also, the GTX 950 is current generation, not “old gen”.

        • stefem
        • 4 years ago

Thanks, Herr Professor… 😉
Sorry if I wasn’t clear, but English is not my native language. Still, you should read better: AMD claimed 59% less power consumption than a GTX 950 (and I’ve correctly reported that), and “on par” was an estimation; you can’t compare the perf/W efficiency of two architectures on different nodes.
Also, I haven’t implied anything: 59% is perfectly compatible with the 50/60% claim, but it leaves no room for improvements that come from the design itself (still talking about perf/W).

Many people make this mistake, but you must understand that there is architecture efficiency and there is process-node or transistor efficiency. Does porting the GTX 950’s “Maxwell” from 28nm to 16/14nm make it a better architecture? I don’t think so… you will just see improvements from the more advanced process node.
Taking AMD’s numbers seriously, a die shrink from 28nm HKMG to 14nm FinFETs offers (at fixed performance) a 50/60% reduction in power, hence my question of whether there is any performance-per-watt improvement brought by the new design.

Anyway, my observations were made more from a PR perspective: if you say that the new node gives a 50/60% reduction in power consumption, and then do a comparison of current design/old node vs. new design/new node and the improvement is the very same 50/60% that (according to you) should come just from the shrink, I will infer that your next architecture is just on par ([b]in a performance-per-watt metric[/b]) with the current one from the competition, since a hypothetical GTX 950 GPU made on the new node would benefit from the same 50/60% power consumption reduction.

Yea, current generation is more appropriate (I don’t like old/new-gen terms but I thought it was clearer), but it doesn’t change anything.

Hope to be clearer now.

          • Voldenuit
          • 4 years ago

If I understand correctly, AMD claimed that ~60% of the power savings they saw came from the process shrink/FinFET.

          Assuming a 60/40 split, for the roughly 60 watts of power saving being shown, around 60% (or 36 W) would have come from the process shrink, and the remaining 40% (24W) from architectural improvements. I don’t see any logical disconnect in their claims.

            • stefem
            • 4 years ago

AMD said that going from 28nm to 14nm reduces power draw by 50/60% at the same performance (or gives a 25-30% performance increase at the same power), so with those numbers they were referring to the total power consumption and not to the saved portion.
TR quote: “The company says it can keep the performance of a given FinFET chip the same versus a 28-nm planar chip while reducing its power consumption by up to 50%-60%”
I was asleep and I made a mistake, a big one to be honest (something the guy/gal with Grade 3 math skills failed to note… 😉 ): the system with Polaris drew 38.6% less power (not 59% as I wrongly said) than the system equipped with the GTX 950, but we are talking about total machine consumption; GPU vs. GPU will of course show better numbers.

Anyway, they are just marketing claims, and I don’t want to give them more importance than they have, but it would be interesting to have this new chip and a shrink of a same-class GPU to see the difference.

            • Voldenuit
            • 4 years ago

            True. Lots of unknowns.

            1. We don’t know what the relative performance of the Polaris chip they showed was.
            2. AMD was probably comparing power reduction to their own chips, but compared power draw against an nvidia chip.
            3. As you said, the 88W and 155W are total system draw numbers, so the card deltas would be different.
            4. We don’t know by how much v-syncing the chip reduced its TDP vs max TDP.

            I’m not saying Polaris doesn’t sound promising, just that there is not enough information out there to start analyzing the available numbers with any form of precision.

      • Fursdon
      • 4 years ago

      nm.

      • _ppi
      • 4 years ago

Well yes, but whether being architecture-wise on par with Maxwell (which has some stuff cut down intentionally in order to streamline DX11 performance only, a good thing right now) is good enough or not will really only be visible when comparable Pascal parts are out.

Catching up with Maxwell, which has shown a revolutionary efficiency improvement, is a good thing, IMHO. I would not expect Pascal to deliver another 40% architecture-related improvement over Maxwell.

Obviously, with a process shrink, you can never tell exactly how much can be attributed to process and how much to architecture.

    • USAFTW
    • 4 years ago

Wait a minute… When a GPU is not fully utilised, its power consumption drops dramatically. How do we know that wasn’t Greenland at 10% load and the GTX 950 at 110%?

      • maxxcool
      • 4 years ago

Hence my DEEP skepticism of this blurb. Until we see 100% load on AMD’s best vs Nvidia’s best, this reeks to me of marketing fat fingers and carefully picked BS.

    • TheMonkeyKing
    • 4 years ago

I look for other key words in the documentation from which I draw basic assumptions. That is, until the hardware actually appears and gets tested to pieces.

    The one word I see from all the review sites is: laptop.

My assumption is that this new chipset, at least V 1.0, won’t push any big boundaries past the flagship cards they already have, AMD or nVidia. No, they are looking at the small (size) market of laptops and mobiles. Power consumption and heat loss are big things for any end-use device, but this (again, my assumption) is aimed at laptops and perhaps tablets.

    Perhaps building on the victories of a V 1.0, 1Q17 could see a redefined flagship card that could power a 4K monitor/TV @ 120Hz with variable rates 60-100 fps? I dunno but it is something different and they are learning from Intel.

    • brucethemoose
    • 4 years ago

I wonder if AMD/Nvidia will skip the next node, like they have in the past.

    32nm and 20nm didn’t work out, but I don’t think either company wants to wait 7 years for 8nm (or is it 10nm?) to ramp up.

      • Airmantharp
      • 4 years ago

      That depends on TSMC, and in AMD’s case, also on Samsung.

      Given that Samsung’s business has started to rely on having the very best, as Intel’s has for much of their existence, we can generally expect Samsung to keep up. The question then for AMD would be whether Samsung continues to give them priority in their fabs.

    • Parallax
    • 4 years ago

    [quote<]Polaris’ display block will support features like HDMI 2.0a and DisplayPort 1.3[/quote<] Finally!

    • UnfriendlyFire
    • 4 years ago

    If it took about 5 years for the die shrink from 28nm to 16/14nm, how many years would it take for the next die shrink to occur?

      • Chrispy_
      • 4 years ago

      [spoiler<]Over 9000[/spoiler<]

        • UnfriendlyFire
        • 4 years ago

I guess we might as well wait for the post-silicon GPUs then…

      • brucethemoose
      • 4 years ago

It’s not just waiting for the die shrink. After that, you have to wait for the big-chip yields not to suck, then you have to wait until Apple and the Android OEMs are done hogging all the fab capacity.

        • UnfriendlyFire
        • 4 years ago

        I recall reading that 20nm wasn’t considered because it was specifically tailored for smartphones and tablets (because mobile OEMs), which meant that desktop GPUs and high end laptop GPUs would’ve gotten shafted.

          • DancinJack
          • 4 years ago

Yeah, ARM smartphone/tablet processes aren’t generally the same as those used for big x86 chips.

    • rom0n
    • 4 years ago

I thought something was not right when I first read gigapixels. It’s a magnitude off.
Twin 4K resolution per frame is 3840*2160*2 = 16588800 pixels == 1.65888 megapixels, not gigapixels.

      • Jeff Kampman
      • 4 years ago

      You’re correct—I’m not sure how that figure made it into my notes. I’ve edited the article accordingly. The actual figure is 16.6 megapixels, though 😉

        • chuckula
        • 4 years ago

        There’s some vagueness in there as well as to the precise meaning of the word “pixel.”

There are the on-screen physical pixels that actually get rendered to the display, in which case about 16.6 MPixels for each frame on a double-4K display is the right number. Then there’s the internal representation of pixels that are processed by the GPU, which is a much larger number due to a variety of operations going on inside the GPU, including multiple render targets (normal map, stencil buffer, multiple texture maps, etc.). All of these get put together during rasterization, then go through any post-rasterization processing before being pushed to the screen.

        The Gigapixel number may not be correct, but internally to the GPU there are far more “pixels” being processed than the raw number you see in the finished product on the screen.

      • Pitabred
      • 4 years ago

Uhh… that’s 16 million pixels, not 1.6 million. You’re off by an order of magnitude. A 4K display is a ~8.3-megapixel display. Still not gigapixels (I’m not sure where you saw that?), but it’s a lot more than the 1.6 megapixels that’d be a bit larger than a 1600×900 screen.

Edit: Slow on my response… whoops

    • maxxcool
    • 4 years ago

So a little more detailed examination makes even less sense of this promo slide .. and adds more nonsense to this blurb-vertisement from AMD

[url<]https://techreport.com/review/29061/nvidia-geforce-gtx-950-graphics-card-reviewed/10[/url<]

As reviewed here, a FTW nonstandard version of the 950 running at a little over 90+% load pulls almost 123 watts, not 140w. (65-189)

[url<]http://images.anandtech.com/doci/9886/Radeon%20Technologies%20Group_Graphics%202016-page-016.jpg[/url<]

The slide lists a 4790k running dd4.. which It cannot.

Downvote all you want, but in the 1st blurb of the post they already fudged the numbers by overstating their competition’s power usage by 15% versus the WORST-case-scenario card I could find reviewed with the 950 on a reputable site (Tech Report).

Edit: a video that TR did not link (from the Reg): [url<]http://www.theregister.co.uk/2016/01/04/amd_polaris_14nm/[/url<]

Edit: the specific talk about gating makes me think even more that this 50% comes from idle time and time where the majority of the GPU proper is twiddling around while the co-processors are doing other things like x265 and whatnot ..

      • chuckula
      • 4 years ago

      It’s marketing B.S. so you shouldn’t take it too seriously, just like nobody takes Jen-Hsun’s talk about a 10X performance boost for Pascal (in bizarro 16-bit computations that nobody on this site cares about) seriously.

      AMD knew full well that all the deeper subtleties would be lost in the noise. The main “fact” that has been revealed today is rather pedestrian: AMD has an upcoming chip that’s superior to the GTX-950. Whoopee doo. I don’t think anybody here is shocked to hear that.

        • maxxcool
        • 4 years ago

        😉 But 50%!

      • namae nanka
      • 4 years ago

      “The slide lists a 4790k running dd4.. which It cannot.”

      Of course it cannot run dd4, do you get what your ‘detailed examination’ missed now?

      “a video that TR did not link”

      The comment section would have been far less interesting otherwise.

        • maxxcool
        • 4 years ago

        you wasted an alt for that ?

          • namae nanka
          • 4 years ago

          An alt? Who do you think I am?

          And considering the inanity you’ve thrown around in this thread, you don’t deserve any better.

            • maxxcool
            • 4 years ago

            Well you used the word ‘inanity’ so you are not that plebeian waste of epidermis snakeoil..

            • maxxcool
            • 4 years ago

Google says you’re an angry feminist.. or a squash.. meh..

      • remon
      • 4 years ago

They’re showing system power draw, not the cards’ power draw, and the system specs in that slide are wrong; the real specs are shown at 2:26 of that video from the Register. It’s 16 GB of DDR3.

    • Bensam123
    • 4 years ago

Cool stuff, AMD is finally catching a break… Wonder if Nvidia is fabbing on something similar with their next gen.

    [quote<]Those PCs appeared to be inside tiny Cooler Master Elite 110 cases. One held a Polaris graphics card, while the other used an Nvidia GeForce GTX 950. The Polaris PC used about 90W to run Star Wars Battlefront at 1080p with a 60-FPS cap, while the Nvidia-powered system needed about 153W for the same load. [/quote<] So they're trying to hide whatever performance these chips have. If they didn't have a cap you'd be able to make a more accurate comparison... It may also cause that power envelope to spin wildly upward as well.

    • _ppi
    • 4 years ago

[quote<]We wouldn't apply or recommend a 60-FPS cap in games that don't require one[/quote<]

Now, without taking into account whether Polaris is better than the 950 or whatnot, this statement caught my eye from a purely practical point of view. If you are:

a) Really playing a game and not benchmarking;
b) Sitting in front of a 60 Hz fixed-refresh-rate monitor (as most people do); and
c) Your system can consistently deliver 60fps (maybe even if it does not)

Why is having a 60 fps cap a bad idea? I can actually see a few reasons why it should be a good one!

1) Limited tearing, since the system does not try to draw multiple frames on screen. Actually this works very well with VSync on.
2) Lower power consumption, heat, and noise

Am I missing something?

      • Bensam123
      • 4 years ago

It starts lowering the GPU clock due to the FPS cap and low load, which causes stuttering when there is demand and the GPU takes a bit to ramp back up to full speed.

A lot of games update based on how fast frames are drawn. Even if the frames can’t be displayed that fast on your monitor, it will still keep the game engine up to speed and provide a more “fluid” experience, even if you can’t see it.

Vsync isn’t the same thing as a framerate lock, and AFAIK you can still get tearing with a 60fps cap on a 60Hz monitor (especially considering most games won’t sit at a steady 60fps).

        • tipoo
        • 4 years ago

        Is it “a lot of games”? Bethesda was mocked for that in Fallout 4. Thought it was a bug and not a common game design feature.

          • auxy
          • 4 years ago

          There’s a difference between linking the gamestate (physics, AI, etc) to the framerate (which is bad) and linking the user input to the framerate (which is almost universal).

            • Bensam123
            • 4 years ago

            Yup, game physics being tied to the FPS is different. They can use a timestamp for stuff like that.

      • Anovoca
      • 4 years ago

Performance versus watt usage is not a linear scale; gains past a certain point inevitably come slower. Setting a hard cap on performance provides an unrealistic test for peak-performance comparisons. It is better to let them both go and see how consumption and temps scale the harder they are pushed.

      (probably a terrible metaphor here but) it would be like racing a car with low 0-60 but very high top end speed vs a car with fast 0-60 and a much lower top speed. Setting the finish line 10 meters away from the start line isn’t going to accurately tell you which is the fastest.

      • derFunkenstein
      • 4 years ago

      In some games with configurable cap, setting it to match the monitor still results in tearing. Diablo 3 for example. In the Nvidia control panel (maybe in the AMD one too?) there’s an option that basically turns vsync on when the frame rate hits the monitor’s native refresh rate and turns it off when the framerate dips. That’s how I prefer to play, assuming I can’t hit 60fps all the time. That will become a non-necessity once I get a VRR display, but that’s a long ways out.

        • _ppi
        • 4 years ago

I haven’t tried D3 with that setting yet, but I see where you’re coming from – if the initial frame is not vsynced, having a 60 fps lock will not help with tearing.

Now I need to find the setting you just described in the nVidia control panel.

          • derFunkenstein
          • 4 years ago

          Here you go. Choose “Adaptive”

          [url<]http://imgur.com/6abOu58[/url<] Oh, a quick note: I didn't capture all the text in the window. If you go to that setting and then scroll down, there's an explanation of the options.

            • _ppi
            • 4 years ago

            Thanks!

      • maxxcool
      • 4 years ago

Likely because of all the bad media surrounding the X-bone and PS4 both having issues achieving 60fps most of the time, and in some cases struggling to hit 30fps.

Coming out and saying “60fps limits are bad” subtly directs the frame-rate issue back to the developer and “politely” puts the bus treads on them by (I might be reading into this a bit) saying “the hardware can do it” without outright blaming or pointing a finger.

      • auxy
      • 4 years ago

      Rendering at greater than 60FPS on a 60Hz display can still increase perceived smoothness in games which link the input to the framerate (most games; note that input is not gamestate) due to increased perceived responsiveness in input. I can reliably pick out a game at 60FPS vs. a game at 120FPS on a 60hz display when playing it, and I have reproducibly done so in blind testing before. (*’▽’)

        • _ppi
        • 4 years ago

        Well, yes, but I guess I can survive a bit of input lag of 60 fps compared to eg. 76.3 fps that I would have without the limiter, but with introduced tearing. Going from 60 fps to 120 fps is quite a stretch.

      • Voldenuit
      • 4 years ago

      Agreed. I think it’s perfectly reasonable to have both systems running with VSYNC on.

Especially if you’re comparing power consumption, it’s a more apples-to-apples test, since with vsync off the faster card will be putting a greater load on the CPU and driving power consumption up.

      It would be an issue if one card was much slower than the other, and thus the CPU had a lower workload, but from the article at anandtech, the author stated that both systems were generally hitting the 60 fps limit, and thus the power draw from the CPU would be more or less equivalent, allowing us to look at the delta between the cards.

    • Anovoca
    • 4 years ago

    Isn’t Pascal 16nm FinFET? I am not saying this isn’t a good move for AMD, just that it seems more of a logical progression than ground-breaking news.

      • chuckula
      • 4 years ago

      We’ve known for years that both Nvidia and AMD were moving to finfet designs after 28nm.

      The real questions were when and who. The when is 2016 for parts of the lineups for both sides (28nm parts will still obviously be floating around) and the “who” apparently includes TSMC and now GloFo to some degree.

        • Anovoca
        • 4 years ago

IOW, I’m not sure why people are acting so surprised/excited by this news. It hasn’t exactly been a secret that 14/16nm, FinFETs, and HBM-2 are coming in the not-so-distant future. As you stated, the biggest question has mostly been “when?”, and even for that we had good indications of sometime in 2016.

        Oh well, the year is only just starting. Plenty of time to see the end results of all the new tech coming together under one chip.

    • Geonerd
    • 4 years ago

    Dimensionless graphs are kinda, like, freaking useless. Please re-consider their use.

      • thedosbox
      • 4 years ago

      Agreed, one of the things I like about TR is that the graphs always start at zero.

      However, to be fair, these do look like they were taken from AMD’s PR slides – just as TR also uses nvidia PR slides.

        • ArdWar
        • 4 years ago

To be fair, you can’t start a logarithmic graph from zero.

I’d be surprised if those dots in the graphic/chart/whatever actually mean anything.
“Here, place our dots slightly to the right and higher, make sure they don’t overlap with the others’ dots”

      • Sam125
      • 4 years ago

They don’t provide numbers because then their competition, i.e. Nvidia, would know what level of performance to expect from Polaris. So treat these as qualitative measures to show the benefit of FinFETs compared to bulk Si at 28nm.

      • Anonymous Coward
      • 4 years ago

      Try to have reasonable expectations.

    • anotherengineer
    • 4 years ago

    hmmmm

    DP 1.3
    HDMI 2.0a
    14/16nm
    lower power consumption
    eyefinity
    freesync

    all good things.

Now we just have to wait for the cards and the reviews to see if there will be a worthy successor to the old venerable HD 6850.

      • BorgOvermind
      • 4 years ago

      The old venerable is the 5870…

    • Anovoca
    • 4 years ago

    Great job on the article Jeff. Scott announced he was leaving a while ago, but seeing a GPU architecture article with your name on it really makes it official.

    • Kretschmer
    • 4 years ago

    Doesn’t mention the process node = 28nm.

      • Flapdrol
      • 4 years ago

      Is there 28nm finfet though?

      Also, in the picture “finfet” is much smaller than 28nm.

        • Klimax
        • 4 years ago

The closest is Intel’s 22nm FinFET process used for Ivy Bridge and Haswell.

So the poster is wrong. And Nvidia did say that Pascal is 16nm. There is no reason to think AMD would stick to an ancient process for another generation…

      • BorgOvermind
      • 4 years ago

      Looks ~22 by the size ratio.

    • Convert
    • 4 years ago

    Well, good luck AMD. Here’s hoping your product finally lives up to the hype.

    • jessterman21
    • 4 years ago

    66 Gpixels every 12ms or so? NO PROBLEM!!! Jeez louise

    I’ma be happy with my 720p phone and a Google Cardboard, I think…

    • ronch
    • 4 years ago

    Dunno if I just missed it, but this Polaris codename only seemed to surface a few days ago. Was it even in AMD’s slides for the past year? I think there was Arctic Islands, but I wasn’t aware of upcoming AMD GPUs being named after stars. Reminds me of their previous Stars line of CPUs, which included Deneb and Thuban.

      • nanoflower
      • 4 years ago

      Yeah, it seemed to come out of nowhere with the tweets from one of AMD’s graphics guys and then here we have an entire marketing push around Polaris instead of Arctic Islands.

    • Wonders
    • 4 years ago

    I’ll be honest… this is the kind of news I wait months to hear about and hoard every scrap of info about. Just reading these kind of updates is a bit “like Christmas” for me, too.
    (Disclosure: My last 3 GPUs have been Nvidia.)

      • Airmantharp
      • 4 years ago

I just look at it as: if AMD can do it, Intel/Nvidia can do it better 😉

(Yes, that’s a fanboyish statement, but it’s also borne out in the real world. Let the down-votes without refutations commence!)

        • cygnus1
        • 4 years ago

        I concur. However, to play devil’s advocate, and to borrow the cop out that investment advisers use, past performance is not a guaranteed indicator of future performance.

          • Airmantharp
          • 4 years ago

It really isn’t, but if you had to make a bet in either market, would it be on AMD?

I’d personally *love* to bet on AMD. Hell, the business student and capitalist in me hopes that that bet would be worth taking, but would I put my money on the line for them?

Nope.

Not only have Intel and Nvidia more consistently delivered (even when they were technically behind!), they both have focused on what’s important to consumers; both have focused, for instance, on performance per watt, hardware stability, and driver stability across their platforms.

If I had to characterize AMD on those points, it would be to say, more often than not, that they produce hardware that is likely to suffer from hardware issues and software issues and to provide poorer energy efficiency, while providing a small performance advantage for the money (and that, not always).

            • DancinJack
            • 4 years ago

I do not understand how you are not being downthumbed. I said essentially the same things, in fewer words, and have like -10s of votes.

            • Airmantharp
            • 4 years ago

            I don’t either. Maybe because I dared people to do it? I’m usually in the same boat :D.

            • Spunjji
            • 4 years ago

AMD’s strong point used to be energy efficiency, though. So in terms of past performance they have a substantial record there. That might be a reason for some downthumbs?

            It has been a very long time indeed since that was true with CPUs of course, but the past is much more recent on the GPU front – even up to GCN they had some excellent products in that regard (anything using the Pitcairn chip), but they were really rocking energy efficiency from the 3000 to 5000 series. I think they got tired of getting kicked around by Nvidia’s monstrous power-guzzler GPUs right about when Nvidia got tired of building them. XD

            Same goes for price/performance. Again, Pitcairn as a notebook GPU – it definitely had software issues on release, but boy did it fly for the money. Still does!

            • vampyren
            • 4 years ago

Totally agree with everything you said.
I always liked AMD, and I felt at one point their CPUs performed better than Intel’s, but in the last few years I lost faith in them thanks to bad drivers and just the feeling that they are not on top of their game.
I’m just ordering a new PC where I select all the parts, and I’m going for Intel Skylake and an nVidia 980. I wish I were confident enough in AMD parts to bet on them, but I’m not.
I know games will have much better support for this combination of hardware.
I hope they can make a comeback, but at this point I won’t put my money on them either.

        • xeridea
        • 4 years ago

It’s a bit irrelevant now, but AMD would wipe the floor with Nvidia mining cryptocurrency. Easily 2-3x faster due to some instructions that Nvidia didn’t support. Nvidia used to be better at protein folding; not sure now, I haven’t kept up, but I think it has changed a lot. There usually isn’t a “one company is better than the other at everything” answer.

          • Airmantharp
          • 4 years ago

Nope; it’s the company that best addresses the market. Neither cryptocurrency nor protein folding are or were good bets to support, and they didn’t affect to any great degree how Nvidia and AMD developed their products.

          Here, Nvidia has made strides precisely because they have started pushing out ‘lean’ products that are aimed at the gaming market exclusive of the HPC market, and even pushed out a lean high-end product this last generation, something they’ve never done, while AMD insists on producing parts that have more HPC capability at lower market ranges, and thus are less efficient for gaming, which is what they get used for.

          In the future, we might see Nvidia and AMD build more-developed separate gaming-focused and HPC-focused lines, as those workloads continue to diverge.

            • xeridea
            • 4 years ago

You are talking about different things now. I was referring to your statement about Nvidia being the automatic winner of everything that can do no wrong, not necessarily about what ended up being more profitable. Predicting the future is hard; AMD was banking on OpenCL gaining more traction and a better 3D API coming out, so they were forward-looking; it has just taken more time than they would have liked. AMD has a lot of design wins with its GPUs, you just refuse to admit them.

            • Airmantharp
            • 4 years ago

            I’m not ‘refusing to admit’ anything; what I haven’t said, because I didn’t feel it needed to even be said until now, was that the two companies are about equal in most things. Hell, AMD has gotten their graphics IP and their CPU IP (and even other stuff!) into *both* consoles this generation! How’s that for an admission?

            So in terms of their graphics products only, as I’ve been talking about graphics and CPUs and chipsets, Nvidia has been the technology leader, from my perspective as a gamer. They’ve focused on the things I and most gamers care about and executed better than AMD has- and have thus earned my business.

And while you consider AMD’s misforecast of OpenCL usage in gaming, also consider Nvidia’s misforecast of hardware physics usage, and realize that they both made the same mistake!

        • remon
        • 4 years ago

AMD beats Nvidia at every price point (except the big one) with 2-year-old tech. Nuff said.

          • Airmantharp
          • 4 years ago

          And yet Nvidia outsells them. Maybe ‘beats’ doesn’t mean what you think it means?

            • remon
            • 4 years ago

            That it provides better performance. What did you think it meant, better marketing?

            • Airmantharp
            • 4 years ago

            I listed a host of reasons above; but to recap, just talking in terms of GPUs, Nvidia puts out products that have better performance per watt, better acoustics, and better drivers.

            I’ve owned cards from both companies, and many from ATi before they were consumed by AMD- and Nvidia has kept up with customer desires- my desires- better.

            That they continue to outsell AMD, even as they charge more for the same outright performance, means that I’m rather much not alone in that sentiment.

            • remon
            • 4 years ago

Better performance/watt, sure. The rest are false.

            • Airmantharp
            • 4 years ago

            You’re entitled to your opinions, even if they go against reviews, and I’m entitled to my one vote 😀

            • remon
            • 4 years ago

            How do I go against reviews again?

            • Airmantharp
            • 4 years ago

            If you have to ask, the answer is clear- by failing to read them 😀

            • remon
            • 4 years ago

            Can you point to reviews that aren’t saying that 380>960, 390>970 and Fury≥980?

            • Airmantharp
            • 4 years ago

            Can you point to reviews that aren’t saying that the Nvidia cards are quieter, better at performance per watt, and that Nvidia provides better driver support?

            • remon
            • 4 years ago

1. Even if I point you to a review that says that Nvidia cards are quieter, that has nothing to do with the GPU maker, but the card maker. Even so, here, at the very bottom:

[url<]http://www.hardocp.com/article/2015/12/22/asus_r9_390_strix_directcu_iii_video_card_review/10#.Vo4y4FeUNr8[/url<]

2. I’ve already said that on performance per watt, Nvidia is better. Are you dense?

3. I have never seen a review that compares the drivers.

            • Voldenuit
            • 4 years ago

Most of the reviews I’ve read have the 380 == 960, 390 == 970, and Fury == 980. The 380X has no comparable competitor and is my personal pick for a 1080p card, and the Fury X doesn’t catch up to the 980 Ti. In every case, though, the AMD cards draw a lot more power than the Nvidia cards, but that’s a testament to just how efficient Maxwell is. I mean, the 970 provides 780 performance while drawing the same power as a 660 Ti; that’s pretty crazy to think of.

            • remon
            • 4 years ago

techpowerup and guru3d have the 380 at least 10% faster than the 960, and in hardocp’s latest benchmark the OC’d 390 was clearly faster than the OC’d 970 (even if by a small margin), while the 970 also drew only 19 fewer watts than the 390, which isn’t a huge difference since we’re talking about 431 vs 412 watts.

            • Voldenuit
            • 4 years ago

            You’ll also find reviews that go the other way, and TR had the 380 essentially tying the 960, with the 380X being clearly ahead.

            And has been discussed in the forums, the MSI Twin Frozr 970 that [H] used is notorious for having greater power consumption than other 970 cards (TR’s 970 review had it [url=https://techreport.com/review/27203/geforce-gtx-970-cards-from-msi-and-asus-reviewed/5<]drawing 40W more power than the ASUS[/url<]), although even said power-hungry MSI 970 [url=https://techreport.com/review/29316/amd-radeon-r9-380x-graphics-card-reviewed/9<]drew 50+W less power than the 390 in the retest[/url<].

            • _ppi
            • 4 years ago

[H] however tested max OC vs. max OC, among other things. TR tested stock vs. stock (even if stock may mean factory OC).

The 970 has more OC headroom, but once you pull the knobs toward 11 as well, the power advantage diminishes. This is because the 290 is “more OC’d” at stock.

            • TruthSerum
            • 4 years ago

“even as they charge more for the same outright performance” Sounds like an admission…

            • Airmantharp
            • 4 years ago

            Meeting halfway and all that 😉

            • Klimax
            • 4 years ago

I might have forgotten some things… Could you please provide evidence for your assertion?

        • BorgOvermind
        • 4 years ago

        Yes, copy-paste always can correct some of the errors that may have appeared.

    • ronch
    • 4 years ago

    [quote<]Joe Macri told us that FinFETs are easier to build, easier to characterize, and more uniform than their 2D cousins. That uniformity is important. Macri noted that the performance of a chip is set by its slowest device, and power usage is set by its leakiest. FinFETs exhibit higher overall performance and lower overall leakage compared to 28-nm planar silicon, as illustrated by the hypothetical distributions of the “clouds” above.[/quote<] Whoa! That's some really interesting and leading edge technology you have there, AMD! I'm sure all of you are as excited about this new technology as a 10-year old is excited to open his gifts on Christmas day! Congrat[b<]Zzzzz[/b<]......!!!!!!!

    • torquer
    • 4 years ago

    As much as anyone, I would love to see AMD live up to its own hype in both GPU and CPU tech. However, as stated by some of the other commenters, AMD has had a bad habit in the last several years of wild eyed hype followed by products that are merely adequate for competition or outright failures.

    • tipoo
    • 4 years ago

    I’m going to cautiously turn my optimism dial up to 3. Raja has some good stuff between his ears (and we’re going to also attribute any good news to Based Scott, right?), and the 28nm GPU stalemate sucked for everyone. It’s certain that the die shrink will be a big boon for consumers, less certain is if AMD can grab back any marketshare as Nvidia won’t be sitting with their thumbs up their butts either.

After the fab shrink will be a great time to buy GPUs, though. If you want to stretch it, whatever comes after will probably last at least as long as this console generation. See the 8800 GTX after the PS360. Heck, we’re already at more than that level of performance difference; this will just add more to it.

      • nanoflower
      • 4 years ago

      So you are saying now is not a good time to buy a 380X to replace an old (but still working) 650TI?

        • Concupiscence
        • 4 years ago

        If the 650 Ti’s doing what you need right now, I would wait. If you wait, you can either snag a new GPU that sips power while delivering terrific performance, or snag a hot-running 380X-class GPU when they’re being clearanced out.

          • nanoflower
          • 4 years ago

          Yeah, it’s a bit frustrating as there are some games I have that I would like to play (The Witcher 3 as an example) that I can’t really play with the 650 TI, but I don’t care for the idea of buying a new card when there’s much better coming out in just a few months which may either give me much better performance at the same price or allow me to get a current gen card at a much better price.

          It’s the constant dilemma of buying now or waiting for something better/cheaper.

            • tipoo
            • 4 years ago

I’m in a similar boat. My mantra is usually “there’s always something better coming, so may as well jump in,” but in this case we’ve been on 28nm so long, and FinFETs help so much, that I think it’s worth waiting a few months. Even if AMD disappoints, both companies are moving to FinFETs and off 28nm, so there should be some good competition coming shortly.

            • nanoflower
            • 4 years ago

            All too true but seeing some of the deals coming from reddit’s buildapcsales makes it oh so tempting. When you see a 970 or 390 going for around $250 with the various deals available it’s quite tempting.

      • the
      • 4 years ago

      Yeah, 2016 will be a good year for GPUs. AMD is getting a new architecture to build upon and nVidia is going to introduce HBM on their high end parts. Both are going to 14/16 nm FinFET. If VR catches on, it’ll spur demand in the PC sector like we haven’t seen in nearly a decade.

The flip side is that we may be waiting 2 more years before the next shrink arrives, so the second half of 2017 and most of 2018 will be a repeat of what we experienced in 2015: die sizes rapidly increasing while flexing some architectural improvements, waiting on the new process node to be ready. Oh, and rebrands, lots of rebrands.

    • NoOne ButMe
    • 4 years ago

AMD is using both TSMC and GlobalFoundries, for different chips. So the chip codenamed XXXX could be TSMC and the chip codenamed XXXXXXXXX could be GlobalFoundries.

    Edit: first codenamed changed.

      • PrincipalSkinner
      • 4 years ago

      Yes, but what about XXX chips?

        • NoOne ButMe
        • 4 years ago

        I mis-remembered the codename. It is 4 characters not 3. =]

        and-or I’m just spouting BS. Or maybe both.

        The 2nd name should be easy to figure out.

          • Sam125
          • 4 years ago

          lol

    • DPete27
    • 4 years ago

    After years of AMD trends, the only thing I believe anymore is that their marketing team is a bunch of slime balls that cook up niche performance numbers in attempts to generate market hype, and in the end, the actual product fails to impress.
    I’m rooting for them to get back on their feet just as much as the next guy, but I don’t trust ANY of their performance claims until the product launches and reviews have been written.

      • nanoflower
      • 4 years ago

      As others have said the performance numbers are within the realm of possibility. Sure, it’s probably not something you would see all of the time but still it gives hope that this new generation will see significant advances over the previous generation.

        • DancinJack
        • 4 years ago

        Except that is what AMD has done on CPU and GPU fronts for the past five years and failed to deliver almost every time. He isn’t saying it isn’t possible, just that the marketing is crap without a product that backs it up.

          • DancinJack
          • 4 years ago

          Lots of AMD fanboiz in the comments this morning I see. Have at it, folks!

            • ronch
            • 4 years ago

            Yeah. I got a downvote within seconds of posting some sarcastic (but true) comments. Hah.

          • xeridea
          • 4 years ago

On the CPU front the BD line wasn’t the greatest, but they did improve it a lot; the issue is that they didn’t have any process shrinks, and it is hard to be competitive when you are 2 nodes behind. Steamroller and Excavator would be pretty decent on 16nm rather than the 28nm GPU node.

On the GPU side they are fine; perf/watt is less than Nvidia’s, but their GPUs are a lot more flexible (much higher DP speed, and basically supporting most of DX12 since GCN 1.0).

            • ronch
            • 4 years ago

            Look, Bulldozer is one of my favorite architectures due to its interesting design but let’s not kid ourselves here. Did you read the TR article on the FX-8150 when it came out years ago?

            • chuckula
            • 4 years ago

On the CPU front Bulldozer and Piledriver are both massively inferior to Sandy Bridge, and they are all on a 32nm process. The fact that Piledriver sorta-kinda beats a repurposed notebook chip in a few rare benchmarks [only when clocked substantially higher than Sandy Bridge, though] while consuming twice the power and having a die area and transistor budget that are 2.5 times larger than the CPU portion of a dinky desktop Sandy Bridge chip is no achievement to be proud of.

            As for DX 12, Nvidia is putting out DX12 support for Fermi (the GTX-400 series) this year. Fermi launched in early 2010 and the first real GCN chips from AMD didn’t launch until 2012.

            • xeridea
            • 4 years ago

The 8150 is bad at single-threaded work, but can beat Sandy multithreaded. The issue is that many things rely on single-threaded performance, DX11 being a major one. Not saying it wasn’t a flop; more that Excavator on 16nm would be pretty decent, since they worked out a lot of single- and multithreading issues. Piledriver on 32nm in 2016, they are behind for sure.

For DX12 I am talking about using the new advanced features, such as ACEs, and being made for parallel and split workloads through and through. Nvidia cards support the base DX12, and can fake some other features with driver hacks (which haven’t panned out well so far), but the architecture is clearly tuned for DX11. As far as efficiency, Fermi was deplorable, so they had to hyper-optimize for one single goal, DX11, throwing advanced features and DP to the wayside. Overall, GCN is pretty well rounded.

            • chuckula
            • 4 years ago

            1. I’m tired of hearing about how having 8 crappy and factory-overclocked cores means a chip is “amazing” at parallel computations. It really isn’t. In a few very rare scenarios it’s adequate at parallel computations. There’s a difference between amazing and adequate.

            2. I’m tired of hearing about the miracles of DX12 making Piledriver the magical superchip. First of all, it’s 100% factually wrong: The Core i3, not any FX chip, has been proven to be the real winner for DX12 in multiple benchmarks that show the relative improvements based on CPU loads. Second of all, if DX12 is really the magical bullet that fixes all of AMD’s problems, then why the hell did they bother with Zen in the first place?

            Here, behold the Core i3-4330 beating up on the miraculous FX-8370 in a freakin’ DX12 benchmark that even uses an AMD GPU so you can’t go screaming about conspiracies: [url<]http://www.pcper.com/image/view/60370?return=node%2F63601[/url<]

            • xeridea
            • 4 years ago

For most parallel workloads that can use 8 threads efficiently, BD was similar in performance to SB. I am not saying it is earth-shattering, just that it is the strong point of the chip. And uArch isn’t the only issue; being 2 nodes behind, with no performance chips released for a long time, is the bigger issue.

For DX12, the benefit isn’t necessarily performance relative to Intel, it is the lessened impact of CPU speed on frame times. DX11 is absurdly reliant on single-threaded performance, while with DX12 pretty much any CPU will get you good enough performance for today’s games, and it greatly benefits from more cores. The reason for making Zen is progress. It will be a big leap in performance, and it will be easier to get more gains in the future. Also, single-threaded performance is still a focus, so they kinda have to go back down that route.

            • maxxcool
            • 4 years ago

            Except cinebench* and lame* and transcoding x264* and pov* and Photoshop* and Gaming* .. it does really well (but does not win) at 7zip… and solitaire

            • Voldenuit
            • 4 years ago

            [quote<]. Nvidia cards support the base DX12, and can fake some other features with driver hacks (which haven't panned out well so far), but the architecture is clearly tuned for DX11. As far as efficiency, Fermi was deplorable, so they had to hyper optimize for one single goal, DX11, throwing advanced processes and DP to the wayside. Overall, GCN is pretty well rounded.[/quote<] Wait, you mean how Maxwell supports Direct3D Shader Level 12_1 and AMD (including the Fury) only supports 12_0? I think it's a bit early to be calling architectural superiority on DX12 when it's so new and so few games are using it.

            • freebird
            • 4 years ago

Nvidia will be lucky to get more than 1 or 2 basic features of DX12 ported to Fermi. They said they would have DX12 “support” out for it last year… if you ask me it is kind of a moot point since it won’t support any of the more advanced features of DX12 (my opinion), since it wasn’t designed for them from the start. If I’m wrong, please write to me when the GTX 400 series is doing DX12 Async Compute…

            • Airmantharp
            • 4 years ago

            The big part of DX12 isn’t the new features, though- it’s the low-overhead driver model.

            • chuckula
            • 4 years ago

            So Nvidia is getting one or two more features out of 6 year old cards and AMD is getting zero features.

            OK. So this proves that Nvidia is hopelessly behind AMD because….

            • Tirk
            • 4 years ago

            We’ll see what it gets, they still haven’t released DX12 drivers for Fermi yet, so how do you know what features it’ll get?

            • swaaye
            • 4 years ago

            NV recently released Fermi drivers with support for WDDM 2.0. That’s definitely something.

            On the other side, AMD recently essentially ended support for Cayman, a chip newer than Fermi, and a whole slew of APUs newer than Kepler stuff. I have gotten a little tired of my Radeons going EOL years earlier than the green.

            • Tirk
            • 4 years ago

It’s kind of hard to fault AMD for design capabilities from before GCN launched in 2012, considering everything before that was modified VLIW cores originally designed under the ATI banner, not AMD. Now you “can” fault them for taking so long to integrate ATI into a functioning part of AMD and produce a new core design in line with AMD’s goals; however, faulting AMD for ATI’s planned feature capabilities for their GPUs seems a bit of a stretch.

            But like all things, why don’t we wait and see what DX12 support Fermi will receive this year instead of assuming it will happen, considering they said Fermi would get DX12 support last year “2015” and it never happened.

          • nanoflower
          • 4 years ago

          I don’t think AMD has done that badly on the GPU front. They’ve been awful on the CPU front with their marketing, but they do tend to deliver on the GPU side. The only recent cock-up I can think of is the noisy fan issue on the Fury cards, which they tried to pretend didn’t exist.

            • DancinJack
            • 4 years ago

            I think compared to their pre-release marketing, they haven’t done well on either front.

            • the
            • 4 years ago

            There was a driver snafu for a few days with the fan speeds on the first Crimson driver release. I’d call that pretty bad, but they quickly fixed it.

            While a bit more subjective, the R9 290 was loud, probably louder than it needed to be. AMD pushed the voltage a tad higher than necessary, and the result was another dust buster. Custom coolers and the R9 390 (which is the same chip with 8 GB of GDDR5) are far quieter and can consume even less power than the original reference design. Not a total fail here, but a bit disappointing in retrospect.

            I also think AMD missed the mark with Fiji. While it is fast, the chip itself appears to be imbalanced between its shader count, ROPs, memory bandwidth, and geometry throughput. AMD could have shaved off some shaders for more geometry throughput and ROPs and had a design with a clear lead over GM200. These areas are what AMD is improving for Polaris, so one can be optimistic about a more balanced design this time around.

          • the
          • 4 years ago

          Most of what has been disclosed today is stuff you could have predicted six months ago with 90% accuracy. A new architecture, 14/16-nm FinFET production, better H.265 support, and a new display block for DP 1.3/HDMI 2.0 are checkbox items one could see coming a mile away.

          Performance claims are indeed to be taken with a huge grain of salt, but AMD has been focusing more on performance per watt with today’s release. That isn’t saying much either, as a good chunk of the performance-per-watt gains stem from the new process node. How much is due to the new architecture is an open question that cannot be answered independently until reviewers get cards in their hands.

          Ultimately today AMD laid out a very basic road map without any surprises. It is only newsworthy due to it being confirmation of many predictions/rumors.

            • nanoflower
            • 4 years ago

            AMD has said that they think it’s about a 70/30 split, with 70% of the savings coming from the new process node and FinFETs and 30% coming from their new design. This is mentioned over in Ryan Shrout’s article at PCPer.

            • the
            • 4 years ago

            Sounds feasible, but I would prefer independent verification. Though without some GCN 1.2 products on 14/16-nm FinFET, it would be difficult to determine.

            • _ppi
            • 4 years ago

            Well, it is good they can at least run a full game with drivers half a year prior to release 🙂

        • ronch
        • 4 years ago

        Yeah they’ll probably be faster than the previous generation… of AMD graphics.

          • _ppi
          • 4 years ago

          Actually… probably not even that. Remember that the top 28-nm chips from both Nvidia and AMD are now around 600 mm², while the first 14/16-nm chips will be under 200 mm².

          So it could cause a revolution at the low-to-mid end, but 980 Ti owners can probably sleep well for at least another year.
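
          A rough sketch of that sizing argument (a minimal back-of-the-envelope, assuming roughly 2x transistor density going from 28-nm planar to 14/16-nm FinFET and the die sizes quoted above):

            # Back-of-the-envelope: how a ~200 mm² first-wave FinFET die compares to a
            # ~600 mm² 28-nm flagship in transistor budget (all values are assumptions).
            finfet_die_mm2 = 200      # assumed upper bound for early 14/16-nm GPU dies
            density_gain = 2.0        # assumed ~2x transistor density vs. 28-nm planar
            flagship_28nm_mm2 = 600   # approximate size of today's biggest 28-nm GPUs

            equivalent_28nm_area = finfet_die_mm2 * density_gain   # ~400 mm² "28-nm equivalent"
            ratio = equivalent_28nm_area / flagship_28nm_mm2
            print(f"~{ratio:.0%} of a 28-nm flagship's transistor budget")

          Even with a healthy density gain, an early ~200 mm² FinFET part lands at roughly two-thirds of a 600 mm² flagship’s budget, which is why the first wave targets the low-to-mid range.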

      • Redocbew
      • 4 years ago

      That’s probably the way you should feel about most of the marketing departments in the industry, and honestly now’s not really the best time for them to try taking the moral high ground.

    • DrCR
    • 4 years ago

    Awesome, Scott has been on the job only for a bit, and he’s already getting a next-gen architecture pushed out.

      • chuckula
      • 4 years ago

      THANK YOU WASSON!

      • tipoo
      • 4 years ago

      He’s been on the job for weeks, and Star Wars TFA was pretty good. Thanks, Based Wasson!

      • Wirko
      • 4 years ago

      He gets a lot of work done inside a second.

        • Chrispy_
        • 4 years ago

        you win today’s internet.

          • Wirko
          • 4 years ago

          Can I keep the Volvo too, or just the microSD cards?

      • Jigar
      • 4 years ago

      Yesterday was his first day.

        • greenmystik
        • 4 years ago

        Just think what he can do after a week….

        • tipoo
        • 4 years ago

        I’m so curious how it went and all the meaty details! It feels like having a family friend go to a new job, but then not getting to know anything about it, eh?

      • USAFTW
      • 4 years ago

      THAT’S WHAT HE DOES!!!

      • maxxcool
      • 4 years ago

      Scott, can you take a rolled-up newspaper over to AMD marketing plz …

    • vargis14
    • 4 years ago

    Regardless, I am very much looking forward to 14-nm FinFETs for both AMD and NV. I also hope it makes Zen V1 comparable to or better than Intel in performance, but I know Intel has been holding back on performance increases with its CPU releases since Sandy Bridge.

    So after all this time that AMD has been stagnant, Intel has likely been holding back new designs with drastic performance increases so it could keep selling 6+ core CPUs while AMD struggles. If it hasn’t, it would have been very foolish not to have such designs waiting in R&D. Either way, AMD has a shot if it shoots true with Zen, and that would be great for all of us, dropping CPU prices across the board.

    I hope for that last outcome.

    • maxxcool
    • 4 years ago

    hmmm 50-60% less power consumption… hmmm I don’t buy it.

    They do not specify “when” these savings occur… especially since they later state it will “carry more current when it’s operating.”

    I cannot see leakage being 50+% of the heat generated on AMD’s designs atm… we will have to see.

    edit. LOL -19 🙂 .. did not know ‘skepticism’ offended the ‘delicate sensibilities’ of the Holy Red masses so much ..

    edit. consume=consumption.

      • chuckula
      • 4 years ago

      [quote<]hmmm 50-60% less power consumption... hmmm I don't buy it. [/quote<]

      I can. Intel has seen performance-per-watt improvements of more than 50-60% with its FinFETs*.

      You are also forgetting something: AMD likely showed off a GPU that is much, much larger than the GTX-950. So it was the GTX-950 running flat-out at the very high end of its power consumption range (where it is less efficient) vs. a much larger Polaris part running at the low end of its power envelope, to really exemplify the power consumption savings.

      * In case you don't believe me, here's what 14 nm did over 22 nm [b<]that was already on a FinFET process to begin with[/b<]:

      [quote<]If you only look at the integer performance of a single Broadwell core, the improvement over the Haswell based core is close to boring. But the fact that Intel was able to combine 8 of them together with dual 10 Gbit, 4 USB 3.0 controllers, 6 SATA 3 controller and quite a bit more inside a SoC that needs less than 45 W makes it an amazing product. In fact we have not seen such massive improvements from one Intel generation to another since the launch of the Xeon 5500. The performance per watt of a Xeon D-1540 is 50% better than the Haswell based E3 (Xeon E3-1230L). [/quote<]

      [url<]http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion/17[/url<]

        • nanoflower
        • 4 years ago

        While it’s possible that AMD put something like a 390-equivalent Polaris GPU up against a 950, I choose to believe they went with something more comparable, such as a 380-equivalent chip. That would fit with my thinking that AMD will target the low-to-mid-range marketplace with the rollout of the Polaris line, so they would need a working GPU in that range now. That’s the part of their product line most in need of refreshing. Then they can roll out the high-end GPUs toward the end of the year, hopefully along with some early Zen releases.

        • NoOne ButMe
        • 4 years ago

        The die size of the chip is estimated to be around Cape Verde’s, so the 380-class spec nanoflower suggests is reasonable and likely.

        Upon doing the math further down this thread, calling this chip something in between a 7870 and a 7950 is more likely. 24 CUs, perhaps?

          • chuckula
          • 4 years ago

          So the die sizes are similar, but the new GPU is at 14 nm, so there is presumably a massive increase in density, and the number of transistors in the AMD GPU is much, much larger.

          Hence the ability to run those transistors at much lower clock speeds to hit the same performance level, which exemplifies the power efficiency of FinFETs.

          Once again, I don’t really care that the physical sizes of the two dies are similar; the AMD GPU is clearly a much larger chip from a transistor-count standpoint.

            • maxxcool
            • 4 years ago

            Indeed. I’m more curious whether these savings also occur under 100% load. It’s ‘nit picking’ on my part, I admit… but AMD’s marketing has been on my crap-list since BD.

            • NoOne ButMe
            • 4 years ago

            No. The AMD chip has an estimated 30-50% more active transistors when accounting for the 950’s binning.

            • chuckula
            • 4 years ago

            So according to you the AMD chip is 50% bigger than the Nvidia chip.

            Sounds like a pretty big difference to me, so you are confirming that everything I said is 100% true, but you just want to disagree with the conclusion because you feel that you are above pesky little things like “facts” and “logic”.

            • NoOne ButMe
            • 4 years ago

            The AMD chip is estimated to be around 120 mm². Got it?
            GM206 is 227 mm², okay? I call it 200 mm² of active transistors. Got it?
            Assume a little over double the density going from TSMC 28 nm to Samsung 14 nm, and AMD’s chip works out to about 260 mm² in 28-nm terms. Depending on the density increase, and if GM206’s active area is under 200 mm², it could be as much as 50% more transistors.

            Your statement pegs GM206 at about the same die size as this 14FF chip. It isn’t true, unless being about 55% of the die size counts as similar.
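
            A minimal back-of-the-envelope sketch of that estimate, treating the die size, the density factor, and the 950’s “active” area as assumptions taken from this thread:

              # All figures are the thread's rough assumptions, not measured values.
              polaris_die_mm2 = 120     # assumed size of the demoed Polaris die on 14 nm
              density_gain = 2.2        # assumed "a little over double" density vs. TSMC 28 nm
              gm206_active_mm2 = 200    # assumed active area of GM206 after GTX 950 binning

              equiv_28nm_mm2 = polaris_die_mm2 * density_gain   # ~260 mm² in 28-nm terms
              advantage = equiv_28nm_mm2 / gm206_active_mm2 - 1
              print(f"28-nm-equivalent area: ~{equiv_28nm_mm2:.0f} mm²")
              print(f"Estimated transistor advantage over the GTX 950: ~{advantage:.0%}")

            Nudging the density factor or the 950’s active-area estimate moves the result anywhere from roughly 30% up to the ~50% ceiling quoted above.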

            • chuckula
            • 4 years ago

            I can’t wait for when Zen finally starts sampling and its die area is massively bigger than a 2015 era Skylake.

            I can’t wait to see you act like a d-bag hypocrite and throw out all the “logic” you just spewed in this article to come to your prejudiced conclusion that everything AMD makes is perfect because SOMETHING.

            I’ll be waiting to hang you with your own words.

            • NoOne ButMe
            • 4 years ago

            What does Zen have to do with this GPU from GlobalFoundries? I’ve said it does indeed have more active transistors than the 950; about 45-50% more is a good estimate. Your claim was that the die sizes are close, implying over twice as many transistors.

            Sorry reality rains on your parade? Also, how is this saying AMD is perfect? I’m trying to get a number that reflects how large this chip would be on 28 nm. The answer is larger than a 950, but much closer to the 50% larger figure than the 100% you implied.

        • maxxcool
        • 4 years ago

        Color me skeptical. But I am willing to wait and see. A CPU under load is a different animal than a GPU under load.

        • Platedslicer
        • 4 years ago

        Going by [url=http://www.anandtech.com/show/9886/amd-reveals-polaris-gpu-architecture<]Anandtech[/url<], the GPU they showed off was actually the little one. Here's the quote for the lazy:

        (...) while Raja’s hand is hardly a scientifically accurate basis for size comparisons, if I had to guess I would wager it’s a bit smaller than RTG’s 28nm Cape Verde GPU or NVIDIA’s GK107 GPU, which is to say that it’s likely smaller than 120mm2. This is clearly meant to be RTG’s low-end GPU (...)

          • chuckula
          • 4 years ago

          “Little” in the 14nm generation is not particularly little compared to a similar chip in the preceding 28nm generation. The die sizes might be similar, but as I pointed out above, if the “14nm” process actually works, that AMD chip should have substantially greater resources than an old GTX-950.

            • Platedslicer
            • 4 years ago

            If that’s the point you wanted to make, you might want to edit your post:

            [quote<]AMD likely showed off a GPU that is much much [i<]larger[/i<] than the GTX-950. So it was the GTX-950 running flat-out at the very high end of its power consumption range (where it is less efficient) vs. a much [i<]larger[/i<] Polaris part running at a much lower end of its power envelope[/quote<] You want to say that the AMD GPU is more powerful, not [i<]larger[/i<].

            • chuckula
            • 4 years ago

            Your Pal No One Butt Me just said that AMD showed off a chip with a 50% larger transistor budget than the GTX-950.

            50% sure as hell sounds “much larger” to me.

            If it doesn’t to you, then I’ll sure as hell remember that when Zen comes out in 2017, loses badly to 6-core Haswell parts from 2014, and then we hear screams and cries about how a 6-core Haswell with a 50% higher core count is “much larger” than a 4770K. Except when it isn’t, apparently.

            • anotherengineer
            • 4 years ago

            I thought there was an unwritten rule that if you use the word ‘hell’ in a post you have to use it at least 3 times?? 😉

            50%? Hmmm, well, in area, ya, it is 50% larger, but in linear dimensions it’s only about 23.4% larger 😉
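
            A quick sketch of that area-to-linear conversion (the exact percentage depends on which transistor-count estimate from the thread gets plugged in):

              import math

              area_ratio = 1.5                       # assumed: ~50% more transistors/area
              linear_ratio = math.sqrt(area_ratio)   # linear dimensions scale as the square root of area
              print(f"~{linear_ratio - 1:.1%} larger per linear dimension")  # ~22.5% for a 1.5x area ratio

            With an area ratio closer to the ~1.52x implied elsewhere in the thread, the figure lands at about 23%.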

        • nanoflower
        • 4 years ago

        According to Ryan Shrout over at PCPer, “For the single data point that AMD provided, they compared the GTX 950 to a similarly priced Polaris graphics.” So apparently AMD was doing a fair comparison.

          • chuckula
          • 4 years ago

          The term “fair” assumes that the GTX-950 will be the only chip available to compete with Polaris when it launches, and that is not a foregone conclusion.

          I could just as easily take a $1000 FX-62 from 2005 and compare it to an equivalently priced Broadwell-E in 2016 and come to the same conclusion using the same “fairness” metric.

            • nanoflower
            • 4 years ago

            LOL, chuckula. You know there’s no similar Nvidia product to compare to at this point, so AMD did what every company does this early in the process and compared it to what’s on the market today. When Nvidia and AMD finally release their products, we will see comparisons between the two, but that’s likely months away. This comparison at least gives us some idea of the improvements being made with the new generation. Will AMD’s products be competitive with Nvidia’s? Maybe. Maybe not. That’s not the point of the comparison being done today.

        • namae nanka
        • 4 years ago

        From the AT article,

        “In any case, the GPU RTG showed off was a small GPU. And while Raja’s hand is hardly a scientifically accurate basis for size comparisons, if I had to guess I would wager it’s a bit smaller than RTG’s 28nm Cape Verde GPU or NVIDIA’s GK107 GPU, which is to say that it’s likely smaller than 120mm2.”

        The GTX 950 uses a cut-down version of the GTX 960’s chip, which is around 230 mm².

      • pranav0091
      • 4 years ago

      To their credit, they did showcase a game scenario in one of the slides, so I guess you can trust their power numbers. But then again, comparing a new FinFET design to a five-year-old planar process isn’t the most interesting power scenario anyway.

      2016 looks to be an interesting year – new VR hardware, new GPUs, new processes and more new games.

      <I work at Nvidia, but my opinions are purely personal>

        • NoOne ButMe
        • 4 years ago

        The impressive part is that their current parts are 10-20W over the 950 for the same performance, I believe.

        So the power draw of the GPU itself has gone down by 50-60%, I’d speculate. That would mean Pascal should be cutting at least 40-50% from Maxwell; 980 Ti performance in 100W should be possible.

          • pranav0091
          • 4 years ago

          Even 50% off 250W is not 100W 🙂

          Food for thought – check out the GTX 950’s measured power numbers and the numbers from AMD’s slides:

          [url<]https://www.techpowerup.com/reviews/MSI/GTX_950_Gaming/28.html[/url<]

          <I work at Nvidia, but my opinions are purely personal>
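
          A quick check of the arithmetic behind that point, with the 980 Ti’s board power treated as an assumption (roughly 225-250 W depending on workload):

            maxwell_980ti_w = 250            # assumed 980 Ti board power; real gaming draw is often closer to 225 W
            halved = maxwell_980ti_w * 0.5   # a 50% cut only gets you to ~125 W
            needed_cut = 1 - 100 / maxwell_980ti_w
            print(f"50% cut leaves {halved:.0f} W; hitting 100 W needs a ~{needed_cut:.0%} reduction")

          So “980 Ti performance in 100 W” needs roughly a 60% cut from a 250 W baseline, not 50%.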

            • DancinJack
            • 4 years ago

            Not only that, but I’m not sure how AMD is using a 4790K with DDR4.

            [url<]http://images.anandtech.com/doci/9886/Radeon%20Technologies%20Group_Graphics%202016-page-016.jpg[/url<]

            BUT HEY, all the AMD shills in these comments are certain every claim they make on these marketing slides was true, is true, or will be true.

            • chuckula
            • 4 years ago

            DDR4 with a consumer-grade Haswell?

            That’s easy: AMD’s innovation is so powerful that Intel parts get upgraded just by being connected to AMD products!

            • maxxcool
            • 4 years ago

            … wth ?

            • namae nanka
            • 4 years ago

            A typo. It wouldn’t be an AMD slideshow without one.

            And it’s not merely slides, from the Anandtech article,

            “In the live press demonstration we saw the Polaris system average 88.1W while the GTX 950 system averaged 150W.”

            You can see the system setup here; it’s DDR3.

            [url<]http://cdn.overclock.net/7/7e/7e93be80_Untitled.png[/url<]

            • NoOne ButMe
            • 4 years ago

            50% should be from the FinFETs alone, I believe; the architecture should be able to bring it down further. I also believe the 980 Ti uses closer to 225W in real usage. Could be wrong.

            • NoOne ButMe
            • 4 years ago

            And, a second reply: the power figures AMD used were for the whole system. 140W is the number AMD claims for the GTX 950 system, and with the 950 itself averaging around 91W, roughly 50W for the CPU and the rest of the system seems fair.

            The CPU must somehow be doing less work in the Polaris system, because by AMD’s claimed numbers I can’t see the rest of that system drawing only 30-35W.
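
            A minimal sketch of that whole-system split, using AMD’s claimed GTX 950 system figure, the demo’s measured Polaris system average, and TechPowerUp’s card measurement as assumptions:

              gtx950_system_w = 140     # AMD's claimed figure for the GTX 950 system
              gtx950_card_w = 91        # assumed GTX 950 average gaming draw (per TechPowerUp)
              rest_of_system_w = gtx950_system_w - gtx950_card_w    # ~49 W for CPU, board, RAM, etc.

              polaris_system_w = 88     # measured average of the Polaris system in the live demo
              implied_card_w = polaris_system_w - rest_of_system_w  # ~39 W implied for the Polaris card
              print(f"Rest of system: ~{rest_of_system_w} W; implied Polaris card power: ~{implied_card_w} W")

            Either the Polaris card really is pulling under 40 W in this demo, or the rest-of-system estimate doesn’t transfer between the two setups, which is the point above.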

      • anotherengineer
      • 4 years ago

      I wish I could find an electric dryer like that, that would still be able to dry my clothes just as fast.

      mmmmmmm 2500W less power mmmmmmmmmmmmmmm

    • chuckula
    • 4 years ago

    AMD’s reticence about quoting a nanometer number might be good news. If it means these Polaris chips are coming from TSMC (and not GloFo), then there’s hope that AMD will beat Nvidia to market, or at least have products out in a similar timeframe.

      • Platedslicer
      • 4 years ago

      I know you must be getting tired of [url=http://www.anandtech.com/show/9886/amd-reveals-polaris-gpu-architecture/3<]this[/url<], but...

      [quote<]As for RTG’s FinFET manufacturing plans, the fact that RTG only mentions “FinFET” and not a specific FinFET process (e.g. TSMC 16nm) is intentional. The group has confirmed that they will be utilizing both traditional partner TSMC’s 16nm process and AMD fab spin-off (and Samsung licensee) GlobalFoundries’ 14nm process, making this the first time that AMD’s graphics group has used more than a single fab. To be clear here there’s no expectation that RTG will be dual-sourcing – having both fabs produce the same GPU – but rather the implication is that designs will be split between the two fabs. To that end we know that the small Polaris GPU that RTG previewed will be produced by GlobalFoundries on their 14nm process, meanwhile it remains to be seen how the rest of RTG’s Polaris GPUs will be split between the fabs.[/quote<]
