Radeon Pro specs hint at a full-fat Polaris 11 GPU in MacBook Pros

Apple's latest MacBook Pros house a new line of Radeon Pro graphics processors: the Radeon Pro 450, the Radeon Pro 455, and the Radeon Pro 460. AMD has given us some more info on those graphics processors in the wake of Apple's unveiling. While the red team didn't say as much, its emphasis on the thinness of these chips, together with their spec sheets, points to a Polaris 11 GPU underpinning these products. First, though, let's take a look at the specs of these new chips, along with a couple of educated guesses about their performance. I've marked my guesses with question marks.

                 Boost   ROP      SP      Stream      Memory  Memory    Memory     Thermal
                 clock   pixels/  TFLOPS  processors  path    transfer  bandwidth  envelope
                 (MHz)   clock                        (bits)  rate      (GB/s)     (W)
                                                              (GT/s)
Radeon Pro 450   ~800?   16?      1       640         128?    5?        80         <35
Radeon Pro 455   ~850?   16?      1.3     768         128?    5?        80         <35
Radeon Pro 460   ~900?   16?      1.86    1024        128?    5?        80         <35

From these specifications, we can draw a number of interesting conclusions about the graphics subsystems in the 15" MacBook Pro. Given the memory bandwidth on tap, it seems likely Apple is using GDDR5 RAM clocked at 5GT/s in the new machines. Furthermore, the 1024 stream processors across 16 compute units in the Radeon Pro 460 suggest Apple is getting the lion's share of fully-enabled Polaris 11 GPUs to cope with demand for its new systems. If that's true, it might indicate why we haven't yet seen something like a Radeon RX 465 to compete with the GeForce GTX 1050 Ti, and it may also explain why AMD is so willing to cut prices on cards with the bigger, more costly Polaris 10 die on board.
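Those guesses are easy to sanity-check. As a rough sketch (the inputs are this article's estimates, not AMD-confirmed figures), peak memory bandwidth is bus width times transfer rate, and peak single-precision throughput is two FLOPs (one fused multiply-add) per stream processor per clock:

```python
# Back-of-the-envelope checks on the guessed specs above.
# Inputs are this article's estimates, not AMD-confirmed figures.

def memory_bandwidth_gbs(bus_width_bits, transfer_rate_gts):
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * transfer_rate_gts

def peak_tflops(stream_processors, clock_mhz):
    """Peak FP32 TFLOPS: 2 FLOPs (one FMA) per SP per clock."""
    return 2 * stream_processors * clock_mhz * 1e6 / 1e12

# A 128-bit path at 5 GT/s matches the 80 GB/s in the table
print(memory_bandwidth_gbs(128, 5))      # 80.0

# 1024 SPs need a boost clock around 908 MHz to hit 1.86 TFLOPS
print(round(peak_tflops(1024, 908), 2))  # 1.86
```

The same arithmetic puts the 450's 640 stream processors at roughly 780 MHz to reach 1 TFLOPS, consistent with the ~800-MHz guess in the table.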

Regardless, AMD has got to be elated with this design win—we look forward to seeing how the new chips perform under the hoods of the new MacBook Pros. Perhaps our Radeon RX 460 review would be a good place to start if you're trying to get a sense of how these parts will handle.

Comments closed
    • BIF
    • 3 years ago

    I’m with end-user on one thing: There should be 8GB memory options. Period.

    My reason is for 3D rendering of complex scenes, but really…have we gotten to a point where a power user (or somebody who just recognizes that there are power users in the world) really [i]needs[/i] to give a reason?

    You all can still have your 2GB solutions. But why be insulting to somebody who wants more? There's a market for it, so why not? "Won't work" is a valid reason to reject a suggestion. But in this case, I don't think anybody can make a valid case that 8GB won't work. I'm happy to be corrected if I'm wrong, but putdowns don't help anybody.

    Beyond that, I don't think any of this stuff from AMD will really help people who use various unbiased rendering engines. But again, I'm happy to be corrected. 🙂

      • Krogoth
      • 3 years ago

      There are already GPU solutions with 8GiB or more of video memory, but they cater to power users and professionals.

      These units aren't marketed towards *you*. They are marketed towards artsy types who don't know jack about computer hardware, a.k.a. the majority of Macolytes. Microsoft is trying to seize part of that pie with their "Surface".

        • End User
        • 3 years ago

        And these “artsy types who don’t know jack about computer hardware” are using software that will easily consume more than the 4GB of GPU memory available in their $4,200 Surface Studio.

          • Krogoth
          • 3 years ago

          Not necessarily, it depends on the workload in question.

          If they are doing DeviantArt- and Tumblr-tier stuff, the unit is more than sufficient, but if they are doing serious publication work then they should be looking at a real workstation.

      • lycium
      • 3 years ago

      > 3D rendering of complex scenes

      A laptop is really your weapon of choice for this?

        • derFunkenstein
        • 3 years ago

        To be fair, there isn’t 8GB in any iMac, and the Mac Pro is an embarrassment to the word “Pro”. Also, you can only get 8GB of VRAM in a Mac Pro because of dual 4GB GPUs. Apple in general is just kind of cheap when it comes to their $1700-and-up computers.

    • End User
    • 3 years ago

    Supports dual 5120×2880 displays in addition to the MBP’s 2880×1800 display

    or

    Supports four 4096×2304 displays in addition to the MBP’s 2880×1800 display

    Max available GPU option is 4GB.

    I can easily saturate the 2GB of GPU memory in my 15″ MBP (2880×1800) on its own (no games). I don’t see how the new MBP can handle an additional four external 4K displays with just 4GB of GPU memory.

      • Krogoth
      • 3 years ago

      That sounds like shoddy coding in the UI. There's no way the UI for an OS should be consuming GiBs of video memory.

      2D imaging (if done right) hasn’t been video memory limited for almost 15 years. Fill-rate has been the traditional bottleneck.

        • Pancake
        • 3 years ago

        A 5120×2880 screen at 4 bytes/pixel works out to be about 56MB of memory. You’d need a fair few applications open consuming a fair bit of graphics resources (back buffers, textures etc) to use even 2GB of memory. Combine that with a mentally retarded OS like OSX and you have the perfect storm.

        Of course, with the fairly small amount of memory the screens actually use for framebuffers it’s all about the applications and not the screen real estate.
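Pancake's 56MB figure checks out. A minimal sketch of the arithmetic, assuming 4 bytes (32 bits) per pixel:

```python
# Framebuffer size in MiB at a given resolution and bytes per pixel.
def framebuffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

print(framebuffer_mb(5120, 2880))   # 56.25 -- one 5K buffer

# If the OS renders at 2x in each dimension for HiDPI scaling,
# the internal buffer quadruples:
print(framebuffer_mb(10240, 5760))  # 225.0
```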

          • End User
          • 3 years ago

          You two are so out of touch with real world usage it is beyond funny. I’m having this same conversation with Krogoth in another thread except there we are talking about GPU memory usage under Windows 10. It is so easy to blow past 4GB of GPU memory usage when using desktop apps on a 2560×1440 display. Blaming this on macOS (OS X no longer exists) is just plain ignorant.

          Productivity and creative systems equipped with 4K+ displays should ship with 8GB of GPU memory at the minimum.

            • Krogoth
            • 3 years ago

            UI usage != application memory usage

            Those applications are designed to use as much memory as possible (useful for loading up tons of lossless material). They utilize both system and video memory for this; as a last resort they will also use swap space on your solid-state/hard drive media (which kills performance).

            It is cheaper and more effective to throw in a ton of system memory (DDR4 is bloody cheap, and getting 64GiB+ is easy on a workstation), since the superior performance of video memory is utterly wasted on 2D work. In other words, you don't need a Quadro K6000 for Photoshop and other 2D work. A mere mid-range GPU is up to the task when paired with a ton of system memory.

            • End User
            • 3 years ago

            The 16 GB of system memory in my test environments is nowhere near fully used. The desktop apps are taking advantage of the GPU hence the video memory usage beyond 4GB.

            • Krogoth
            • 3 years ago

            It is because the software prioritizes GPU memory usage by default before it starts loading into system memory. If given the chance, Photoshop (x64 version) can utilize 16GiB of system memory or more, depending on how much content you are loading up.

            • End User
            • 3 years ago

            That is really good information but it has nothing to do with how easy it is to saturate 4GB of GPU memory.

            • Pancake
            • 3 years ago

            Do you not comprehend what I wrote? I repeat, frame buffer size and what graphics memory applications use are different things. This is supposed to be a “tech” site and yet you double-down on your ignorance. Unbelievable. Pull your head in and stop posting until you learn a little about how computers work. That way you won’t get so upset with the lack of performance of your computer. You might even learn to use your computer properly.

            I suspect mental retardation is not confined to your choice of operating system whatever it’s called.

            • End User
            • 3 years ago

            I find it incredibly amusing that you think a discussion about frame buffer size is going to help someone who, within 10 minutes of use, has saturated the measly 4GB of GPU memory that shipped with their $4,200 Surface Studio.

            • Pancake
            • 3 years ago

            POST DELETED – Personal attacks gone over the top.

            • JoeKiller
            • 3 years ago

            Ouch man did you really have to go all bigot? You had it on your technical points.

            • End User
            • 3 years ago

            While he totally lost it with the bigotry he also failed to grasp that I was talking about application GPU usage.

            • End User
            • 3 years ago

            In my first post I stated that I can saturate the 2GB of GPU memory of my MacBook Pro outside of gaming. That kinda implied application GPU memory saturation. Everything I discussed in my posts was referring to application GPU memory usage. I can't help it if you got lost on your own tangent.

            BTW, your “lost face” comment is getting you flagged for posting a racist comment in addition to your “mental retardation” comment. Grow up.

            • End User
            • 3 years ago

            BTW, what is up with your obsession with mental health?

          • the
          • 3 years ago

          OS X is a bit different when it comes to how it handles the frame buffer. It is known to use higher resolution buffers and then down sample them to provide anti-aliasing. So that 5120 x 2880 buffer internally can be as large as 10240 x 5760 and consume ~225 MB.

          The other thing is that OS X keeps individual buffers for each application for compositing. Case in point, according to the OpenGL Driver Monitor (one of Apple’s developer applications), I’m currently using ~1.4 GB of memory on my 4 GB GTX 770 I currently have in my Mac Pro. Currently I have a bunch of tabs open in Safari, and a few GUI applications.

        • End User
        • 3 years ago

        I remind you of our chat about GPU memory usage under Windows 10. I can easily blow past 4GB of GPU memory usage by just using productivity apps and Photoshop running @ a lowly 2560×1440.

          • Krogoth
          • 3 years ago

          Because those applications by default are designed to use as much memory as possible. It is not cost-effective to use video memory for this (the similar performance it offers over system memory is utterly wasted). You're better off just loading up on system memory.

            • End User
            • 3 years ago

            What the heck are you talking about? I'm talking about apps that are using video memory (dedicated video memory for the dedicated GPU). This has nothing to do with system memory.

            • Kraaketaer
            • 3 years ago

            Photoshop and other applications that can leverage video RAM for increased performance do so if they can – and use as much as they can (/are allowed to), to boot. If that fills up, overflow goes into system memory. Overflow from that goes into page files and whatnot. That’s what Krogoth is saying. For GPU compute, system and video memory can indeed be “related.”

      • ptsant
      • 3 years ago

      It can also draw on system memory if necessary. Usually, the cards can see 4GB dedicated + 4GB via PCIe.

      Note that they don’t have to store texture and shaders if you’re not gaming. Even with a lot of abuse, a 2GB framebuffer is enormous for pure 2D use.

      • travbrad
      • 3 years ago

      Be careful what you wish for. They’d probably charge $500 to bump the VRAM up to 8GB.

      There is something really wrong that the OS alone would be using that much memory though. I have 2 1080p monitors (about 4 million pixels vs 5 million pixels on your MBP) and Win10 uses about 200MB.

    • Hattig
    • 3 years ago

    1.86 TFLOPS in 35W is pretty impressive to be fair. That’s over 50 GFLOPS/W.

      Of course, these are clearly the best-binned dies, but that's an impressive result.
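Hattig's perf-per-watt figure is simple division; a quick sketch using this article's guessed specs:

```python
# Performance per watt in GFLOPS/W from peak TFLOPS and thermal envelope.
def gflops_per_watt(tflops, tdp_w):
    return tflops * 1000 / tdp_w

print(round(gflops_per_watt(1.86, 35), 1))  # 53.1 -- Radeon Pro 460
```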

      • ImSpartacus
      • 3 years ago

      The Tesla P4 will do 73-110 GFLOPS/W.

      [url]http://www.anandtech.com/show/10675/nvidia-announces-tesla-p40-tesla-p4[/url]

      Now THOSE are some heavily binned parts, as the regular 1080 is only good for about 50.

        • tipoo
        • 3 years ago

        Well, matching the 1080 per watt is an interesting data point as well. I assume Apple particularly asks for the best power binnings; I wonder if other optimizations went on.

          • chuckula
          • 3 years ago

          These laptop parts are clearly being undervolted from their desktop counterparts with careful binning taking place to ensure they remain stable.

          On top of that, there's no guarantee that these parts (especially the big 460 part) won't spike above that 35-watt number during heavy operation, even if they don't operate that high 100% of the time. Given the dynamic nature of how complex CPU/GPU setups operate these days, a single power number usually doesn't tell the whole story.

            • adisor19
            • 3 years ago

            Apple is paying top$ for preferential treatment from AMD for these so I doubt that the 460 will spike above 35W.

            Adi

      • Flapdrol
      • 3 years ago

      Up to* 1.86 TFLOPS.

      Just means a max turbo clock of a little over 900 MHz. The base speed is unknown.

    • brucethemoose
    • 3 years ago

    Isn’t Pascal more power efficient?

    As I’ve said before, this probably has something to do with OpenCL support. With some vapoursynth video filters I’ve tried, normally “equivalent” AMD cards significantly outperformed their Nvidia counterparts.

    In fact, that could be an interesting benchmark for TR’s future GPU reviews… I’d be happy to set up a benchmark with NNEDI3 and KNLMeansCL for the TR staff, as those are pretty pure and not optimized for a specific vendor.

    But to be honest, I’m probably the only one that thinks about motion interpolation, denoising, and upscaling performance when I’m shopping for a GPU :/

      • tipoo
      • 3 years ago

      Yeah, even if Nvidia had fully double the gaming performance per watt as AMD, I don’t think Apple will switch back so long as Nvidia doesn’t give a crap about OpenCL. The Haswell Iris Pro 5200 was actually outstripping midrange Nvidia chips in OpenCL in certain tests, which was nuts, and AMD was further ahead.

      Apple really seems intent on supporting OpenCL.

        • derFunkenstein
        • 3 years ago

        I think Nvidia is figuring that out, finally. Which is good since Hackintoshes with Nvidia GPUs are easier to get running than Radeon-equipped Hacks.

        [url]https://techreport.com/news/30711/rumor-nvidia-and-apple-may-reunite-for-future-mac-gpus[/url]

          • the
          • 3 years ago

          Also nVidia cards are great in an older 2010/2012 tower Mac Pro. I have a GTX 970 (and previously a GTX 770) in my old Mac Pro and they were pop it in and go affairs.

          Only annoyance is that they use the nVidia web driver, which generally needs an update with every OS X update, major or minor.

            • derFunkenstein
            • 3 years ago

            Yeah you can really hotrod those things in every way except the CPU (which is always pre-Sandy no matter what). Wish Apple still made Macs with slots.

        • brucethemoose
        • 3 years ago

        Well they did create OpenCL, after all.

      • EzioAs
      • 3 years ago

      Hey, you’re not the only one.

        • brucethemoose
        • 3 years ago

        There are dozens of us!

          • EzioAs
          • 3 years ago

          Who doesn’t enjoy optimizing videos, amirite?

          • derFunkenstein
          • 3 years ago

          Excellent Tobias Fünke reference.

    • the
    • 3 years ago

    Not surprising, considering that Apple was able to get Tonga chips with all 2048 ALUs enabled long before the R9 380X reached store shelves.

    This also makes me wonder if Apple is waiting around on Vega GPU hardware to ship before launching a new Mac Pro. Even then, they could just wait on shipping systems with that option and use Polaris as a baseline to at least get an upgrade out on the market. It has been nearly three years, Apple….

      • Topinio
      • 3 years ago

      Vega in the Mac Pro sounds good, and the launch window could mean it's based on Skylake-EP and DDR4 too…

      and 3Y is nothing for the Mac Pro, after all it was a nearly 5Y run for the Nehalem/Westmere version.

        • the
        • 3 years ago

        The thing is that Nehalem -> Westmere was an actual update during that 5-year period. Apple was a couple of months late compared to HP and Dell, who upgraded their workstations when Intel started shipping those chips. Apple did another 'refresh' of the Westmere-based Macs in 2012 by slightly upgrading CPU clock speeds and build-to-order options, but it was at least [i]something[/i] while the rest of the industry had moved on to Sandy Bridge-EP.

        I'm just really surprised that Apple hasn't offered any sort of update to the Mac Pro over the past 3 years. They skipped over Hawaii, Tonga, Fiji, and now Polaris for the Mac Pro. Even if they stuck with Ivy Bridge-EP, those would have been worthwhile updates on the GPU side.

    • tipoo
    • 3 years ago

    Slightly interesting factoid: if Polaris's memory compression makes up for the gap, the 460 would be almost a wash with the PS4's GPU. I'm glad the thin-and-light category is getting that capable.

      • the
      • 3 years ago

      It probably does. Remember that the PS4 only has the one GDDR5 memory bus, which has to feed both its GPU and CPU. The CPU in the new MacBook Pro gets its own dedicated bus to main memory, so the discrete GPU is freed of that bandwidth burden.

        • NTMBK
        • 3 years ago

        But on the flip side, the game engine now needs to shuffle data between two separate memory pools across a PCIe bus.

        • tipoo
        • 3 years ago

        Also a mixed CPU/GPU memory load decreases how much bandwidth the RAM can provide.

        [url]http://images.eurogamer.net/2013/articles//a/1/7/8/9/7/3/8/PS4_GPU_Bandwidth_140_not_176.png/EG11/resize/600x-1/quality/80/format/jpg[/url]

        [url]http://www.eurogamer.net/articles/digitalfoundry-2015-vs-texture-filtering-on-console[/url]

    • derFunkenstein
    • 3 years ago

    Sweet, my math [url=https://techreport.com/news/30880/apple-latest-macbook-pros-ditch-the-f-keys?post=1006839#1006839]doesn't suck[/url]. These extra specs also explain Apple's chart showing performance differences in various video- and graphics-related tasks. Dumping iGPUs for real dedicated chips should make a huge difference all by itself.

      • tipoo
      • 3 years ago

      That they brought dGPUs back to the base-model 15″ is something that most people seem to have glossed over with regard to the price hike. I think the price fell when it was integrated-only (Iris Pro).

        • adisor19
        • 3 years ago

        Indeed even I missed that part. Still, not digging the price hike.

        Adi
