Real World Technologies dissects Apple’s A10 GPU

David Kanter over at Real World Technologies has released his ruminations on the GPU portion of Apple's latest A10 SoCs. Apple has licensed Imagination Technologies' PowerVR graphics IP since the earliest days of the iPhone and iPad. However, after comparing developer documentation from both Apple and Imagination Technologies, Kanter concluded that Apple has been slowly replacing off-the-shelf components of PowerVR GPUs with its own proprietary designs. He argues those components have been appearing in Apple SoCs since the A8 chip that powers the iPhone 6 and 6 Plus.

Roughly speaking, Kanter describes a GPU as having three parts: fixed-function hardware, shader cores, and a software driver. The fixed-function hardware manages API commands and rasterization, the shader cores perform programmable graphics computation, and the driver translates API calls into commands for both the fixed-function hardware and the shaders. According to Kanter's research, the GPUs in Apple's A8 and newer SoCs still employ Imagination's fixed-function graphics hardware, but the shader cores and driver are unique to Apple.
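
To put that three-way split in concrete terms, here's a minimal Swift sketch of the host side of the arrangement using Apple's Metal API. The Metal calls are real; the shader function names are hypothetical placeholders. The point is simply that the application only ever issues API calls, and the driver turns them into work for the shader cores and the fixed-function hardware.

    import Metal

    // A minimal sketch: the app describes a render pipeline through the Metal API,
    // and the driver compiles that description into commands for the GPU's
    // fixed-function front end and shader cores. Shader names are hypothetical.
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary() else {
        fatalError("Metal is unavailable on this machine")
    }

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "passthrough_vertex")
    descriptor.fragmentFunction = library.makeFunction(name: "solid_fragment")
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

    // The driver, not the app, decides what the shader cores actually execute.
    let pipelineState = try! device.makeRenderPipelineState(descriptor: descriptor)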

Kanter speculates that a portion of Apple's gains in GPU performance and power efficiency stems from the use of smaller half-precision floating-point registers. Half-precision is considered "good enough" for graphics, image processing, and machine learning. Kanter says that Apple's GPUs provide free conversion between different data types, enabling compilers and encouraging programmers to use the "minimum data" possible rather than worrying about the computational cost of data-type conversions.
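
For illustration's sake, here's what that trade-off looks like in modern Swift, which exposes a Float16 type on recent Apple hardware (the language had no such type back when the A10 shipped). Half-precision values take half the space of single-precision ones, and mixed-precision math only pays off if the conversions between the two are cheap:

    // Half precision occupies half the register and bandwidth footprint of Float.
    print(MemoryLayout<Float>.size)    // 4 bytes
    print(MemoryLayout<Float16>.size)  // 2 bytes

    let full: Float = 3.14159265   // roughly 7 decimal digits of precision
    let half = Float16(full)       // roughly 3 decimal digits, "good enough" for pixels

    // Mixed-precision arithmetic forces conversions like the one below. If every
    // conversion cost a separate instruction, the savings from the smaller type
    // would evaporate, which is why "free" conversion in the shader cores matters.
    let mixed = full * Float(half)
    print(mixed)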

Apple claims the "Portrait mode" feature on the iPhone 7 Plus uses machine learning to work its magic. Even if machine learning isn't involved, the faux-bokeh effect certainly involves the graphics and image-processing corners of the reduced-precision-math triangle. Kanter told us that many of Instagram's filters execute on the GPU for efficiency and performance reasons, too.
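
To give a flavor of what GPU-side image processing looks like in practice, here's a rough Swift sketch using Metal Performance Shaders. It's not Apple's actual Portrait-mode pipeline, just a stock Gaussian blur of the sort a depth-of-field effect builds on:

    import Metal
    import MetalPerformanceShaders

    // A rough sketch, not Apple's Portrait-mode pipeline: run a stock Gaussian
    // blur on the GPU. A real depth-of-field effect would vary the blur radius
    // per pixel using a depth map, but the plumbing looks much like this.
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let commandBuffer = queue.makeCommandBuffer() else {
        fatalError("Metal is unavailable on this machine")
    }

    // In a real app these textures would hold camera frames; here they are blank.
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                        width: 1024, height: 768,
                                                        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    let source = device.makeTexture(descriptor: desc)!
    let blurred = device.makeTexture(descriptor: desc)!

    let blur = MPSImageGaussianBlur(device: device, sigma: 8.0)
    blur.encode(commandBuffer: commandBuffer, sourceTexture: source, destinationTexture: blurred)
    commandBuffer.commit()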

Kanter goes on to speculate about how Apple might benefit from exclusive GPU designs in its iOS devices. In-house GPU design might allow Apple to retain the benefits of its research and development rather than letting that technology filter back to competitors through Imagination Technologies. The article is a great read, and we've only skimmed the surface. Go check it out here.

Comments closed
    • Timbrelaine
    • 6 years ago

    What you’re describing isn’t possible. Chars, ints, uints, floats, etc. all use the exact same values to represent different things; there isn’t any way to infer the type of a bunch of bytes from their value, at least not for primitive types where every value is valid.

    I think what you’re referring to is Swift inferring types from the literal syntax used to describe the value- that is, it knows 1 is an integer literal, 1.0 is a floating point literal, and ‘a’ is a character literal; but that’s due to syntax, not their values.

    Further, Swift is a statically typed language. All types *must* be known at compile time. It probably has some kind of dynamic programming features as an escape outlet because many statically typed languages do, but it is by no means a core language feature.

    • RAGEPRO
    • 6 years ago

    There are an awful lot of Intel chips that only show up in Macs, actually. Basically anything with Iris graphics is a non-starter outside of a MacBook. Iris Pro showed up in a few other machines and in some desktop chips, but it’s rarely seen, and the “regular” Iris graphics are—as far as I know—completely absent outside of Macs.

    • cygnus1
    • 6 years ago

    I would definitely be interested to see how they would do an ARM transition. I was around when they did the PowerPC/Intel transition, but unlike that move, where they were arguably going to more powerful systems, I don’t think an ARM-powered Mac would be powerful enough to run emulated x86 software at acceptable speed. Nor do I think Apple will invest the time into creating that.

    I think if they were to leverage the Mac App store they could set requirements that submissions be in an intermediate format (instead of fully compiled) that would allow Apple to finish the compilation and have versions for ARM or x86. That could at least eliminate a potential chicken and egg 3rd party software situation in a Mac conversion to ARM.

    • blastdoor
    • 6 years ago

    Maybe… very hard to say, though.

    Intel doesn’t want to treat Apple so well that Apple gains too much market share relative to Windows OEMs, because then Apple would have them over an even bigger barrel. Yet of course you’re right that Intel doesn’t want to lose Apple as a customer, either.

    I suspect Apple and Intel are in an ongoing game of chicken, and my view is that Apple is the one chickening out right now. They appear to be getting nothing from Intel — no early access to chips, no custom chips, and (though we can’t know for sure) I see no evidence that Apple is getting a huge price cut relative to other OEMs.

    If Apple really wants to show us some courage they should ditch Intel.

    • cygnus1
    • 6 years ago

    That war chest of cash is probably what makes them not need to do it, though. It’s not that Apple represents a huge chunk of Intel’s revenue, but I’m pretty sure Intel knows that if they don’t treat Apple well enough, Apple will just stop purchasing. So Intel treats them well enough, and it most likely remains cheaper/less hassle for Apple to not build ARM Macs.

    • tipoo
    • 6 years ago

    The GPU seems like a better inflection point to me. If they can be substantially more efficient with vertical integration, they gain that chip benefit while also not tossing out x86 compatibility just yet on the processor side. A Metal/OpenGL/DX-compatible GPU would theoretically remain compatible with most programs.

    • tipoo
    • 6 years ago

    They’re more substantially architected for FP16 than either. Polaris and Pascal can take advantage of it, but there are varying levels of implementation; reducing register pressure isn’t the same as the full pipeline taking advantage.

    So yeah, I’d say hard to compare, as desktop graphics are nearly entirely single precision right now. But the Apple+PowerVR cores certainly seem highly efficient.

    • ronch
    • 6 years ago

    Are Apple cores better than Radeon cores or CUDA cores? Or is this like comparing apples to oranges?

    • snowMAN
    • 6 years ago

    Swift is a statically typed language, so the compiler knows at all times whether something is a 16, 32 or 64 bit numeric value. (It does infer [primitive data] types based on assigned value, but it does so at compile time, not run time.)

    • tsk
    • 6 years ago

    I believe it’s on par with or better than Intel HD515.

    • crystall
    • 6 years ago

    I’d like to point out that the article is mostly speculation on Kanter’s part. First of all, ImgTec GPUs have more customization options than just the number of units they’re made of; it is known that ImgTec provides whole-stack customization to its customers (e.g. http://libv.livejournal.com/26972.html). Secondly, he’s using an optimization guide to infer the microarchitecture of the GPU. The optimization guide, however, is not an accurate description of the underlying design, and it can be significantly misleading.

    • adisor19
    • 6 years ago

    They definitely have enough $ to throw at it.

    Adi

    • blastdoor
    • 6 years ago

    “I wonder if Apple’s own GPUs will be a thing on Macs, ever…”

    I continue to believe that Apple has the *ability* to make ARM-based SoCs for the Mac that would be very compelling. This story makes it appear that they might also be developing that ability in the GPU space as well. Of course, having the ability and actually doing it are two very different things.

    • tipoo
    • 6 years ago

    Very interesting read. I had assumed Apple’s semi-custom designs meant custom configurations of existing PowerVR architectures in core counts you don’t get off the shelf, but this is much, much deeper. Replacing the programmable shader cores with their own, for instance, and writing their own compiler for them. Or nearly free conversion making FP16 more appealing.

    Lots of stuff here.

    I wonder if Apple’s own GPUs will be a thing on Macs, ever…

    • derFunkenstein
    • 6 years ago

    “You don’t write GPU code in Swift though” D’oh.

    • christos_thski
    • 6 years ago

    What generation of desktop GPU (or iGPU) is Apple’s GPU core now up to, performance- and functionality-wise?

    I assume it is past Intel HD 4000, but can there be a ballpark estimation?

    • Andrew Lauritzen
    • 6 years ago

    You don’t write GPU code in Swift though 🙂 And MetalSL has explicit types (it is based on C).

    It’s still important to have cheap/free conversions though to get the most benefit from reduced precision types. There are definitely still certain parts of the computation that need higher precision so invariably you end up with a lot of mixed precision instructions. If you’re forced to waste an entire instruction on conversions in/out of these operations you lose the benefit of reduced precision really quickly…

    • morphine
    • 6 years ago

    Well, Apple’s already punished those with regular headphones and non-Type-C USB gear, so… 😉

    (That’s a joke, no killy pleasy.)

    • derFunkenstein
    • 6 years ago

    “Free” data type conversions are probably a big deal since there’s a good chance the developer doesn’t know until runtime what type a specific object is. Swift infers primitive data types based on the assigned value. Punishing a developer for using a core feature of the programming language would be, well, kinda rude.
