Cortex-A73 CPU and Mali-G71 GPU power up next-gen phones

Computex 2016 – At its Computex press conference this morning, ARM announced two new pieces of mobile SoC IP that it believes will drive the demanding applications that smartphone owners will want to run on next-generation devices. The company says 4K gaming, VR, and augmented reality will all increase the performance demands on mobile SoCs. ARM is rising to this challenge with the Cortex-A73 CPU and the Mali-G71 GPU.

The Cortex-A73 CPU core is claimed to deliver 30% more performance than ARM's previous high-end core, the Cortex-A72, while improving power efficiency by up to 30% over the older design—at least, when it's fabricated on a 10-nm process. The A73 core doesn't deliver this improvement by becoming a wider machine. In fact, ARM says the chip is a two-wide design, as opposed to the A72's three-wide front end. Instead, the company suggested that a blend of process and architectural improvements will let the chip hit higher peak clocks—up to 2.8GHz, as opposed to the A72's 2.5GHz—and extract more performance from features like an improved branch predictor.

Where the A73 may really shine is in applications like games and VR that require sustained performance from the SoC. Past high-end mobile SoCs have been tailored to handle "bursty" workloads, where a user might run a demanding application for a short time and then perform less-demanding tasks the rest of the time. VR, AR, and gaming applications offer the chip no such relief: it has to run flat-out for long periods without overheating.

The more-efficient A73 core is specifically designed to address this problem. One of ARM's slides suggests the A73 core has basically eliminated the delta between peak and sustained CPU performance in certain tasks. On a simulated SPEC2K benchmark, a 2.8GHz, 10-nm A73 is claimed to deliver 1.3 times the peak performance of a Cortex-A72 and 2.1 times the peak performance of a Cortex-A57. The A73 also delivers this performance in a smaller die area than past ARM cores: just 0.65 mm². That small area makes room for SoC designers to add more resources like GPU cores to their chips.
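It's worth noting how much of that claimed uplift is simple clock speed. The split below is our own back-of-the-envelope arithmetic using the figures ARM cited, not a breakdown the company provided:

```python
# Back-of-the-envelope check of ARM's claimed Cortex-A73 uplift.
# Clock figures and the 1.3x peak SPEC2K claim come from ARM's slides;
# the clock/per-clock split is our own arithmetic, not ARM's.

a72_clock_ghz = 2.5    # A72 peak clock cited by ARM
a73_clock_ghz = 2.8    # A73 peak clock cited by ARM
claimed_speedup = 1.3  # claimed peak performance vs. the A72

clock_ratio = a73_clock_ghz / a72_clock_ghz             # ~1.12x from frequency alone
implied_per_clock_gain = claimed_speedup / clock_ratio  # ~1.16x per clock

print(f"clock contributes {clock_ratio:.2f}x")
print(f"implied per-clock gain: {implied_per_clock_gain:.2f}x")
```

In other words, if the claim holds, roughly 12% of the speedup comes from frequency and the remaining ~16% from everything else (architecture plus process), which tracks with ARM's framing of the A73 as a narrower but better-tuned machine.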

ARM has some new GPU IP for those chip designers to play with today, too. The Mali-G71 GPU core is claimed to deliver some impressive performance improvements over ARM's previous high-end GPU core, the Mali-T880. The company says the G71 is up to 50% faster than the T880, and it purports to deliver 20% better energy efficiency, 40% better performance density, and 20% more bandwidth than that older part even when it's fabricated on the same process. G71 is also more scalable than the T880. Implementations of this GPU can include up to 32 shader cores, up from 16 in the older part.
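Taken together, ARM's "up to" figures imply a sizable theoretical ceiling for a maxed-out G71. ARM doesn't say whether the 50% figure applies at matched core counts, so treat the multiplication below as our own upper-bound sketch rather than a spec:

```python
# Rough scaling ceiling for Mali-G71 from ARM's stated figures.
# Assumes (our assumption, not ARM's) that the "up to 50% faster" claim
# holds at matched shader-core counts and that core scaling is perfect.

t880_max_cores = 16
g71_max_cores = 32
per_config_speedup = 1.5  # ARM's "up to 50% faster" claim vs. the T880

core_scaling = g71_max_cores / t880_max_cores    # 2x more shader cores available
upper_bound = per_config_speedup * core_scaling  # ~3x, assuming perfect scaling

print(f"theoretical ceiling vs. a maxed-out T880: {upper_bound:.1f}x")
```

Real designs will land well under that ceiling, since bandwidth and thermals rarely allow perfect scaling, but it shows why ARM is pitching the G71 at VR-class workloads.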

The Mali-G71 is the first GPU to use ARM's next-generation Bifrost architecture. Bifrost gives the G71 support for the Vulkan low-overhead graphics API. It can also take advantage of a fully coherent system interconnect to DRAM to enable heterogeneous computing. From a chip-layout perspective, Bifrost also purports to reduce the number of "wirelets" needed to connect shaders, a move that ARM claims has a positive impact on performance, as well.

ARM already has a number of partners signed up for the A73 and G71 IP, including HiSilicon, Huawei, Marvell, MediaTek, and Samsung. The company says we should expect to begin seeing SoCs with Cortex-A73 cores and Mali-G71 GPUs in devices around the end of this year or in early 2017.

Comments closed
    • synthtel2
    • 7 years ago

    Numbers have been found and crunched – never mind, the shader:ROP ratios on these things are appropriately low, and rasterization should pretty much always be bandwidth-limited. New problem – they look to only be able to do ~60 GFLOPS, even with this new thing in a maxed out configuration. 6 EUs of Sandy Bridge graphics hardware could clock high enough to beat that number. IIRC Mali uses bigger and more complex SPs than typical, so the FMA ratings make it look weaker than it is, but that’s still ridiculously weak. At 1080p90, that allows for just about no AO / reflections / post, and even bog-standard blinn-phong specular starts to look heavy if you want more than one light plus ambient. At least there’s plenty of bandwidth and rasterization power left for MSAA and shadows. o_O

    • the
    • 7 years ago

    They also have three letters in their name…. Half Life 3 confirmed!

    • travbrad
    • 7 years ago

    I don’t know about anyone else, but a bigger battery and slightly thicker phone would be a “killer” feature for me. Running games and VR videos basically requires a power outlet which sort of defeats the purpose of a “mobile” phone.

    • synthtel2
    • 7 years ago

    True, bandwidth/shader power requirements can be scaled back pretty much indefinitely if people are willing to put up with (really) low image quality. Rasterization, not so much. I wonder if rasterization in particular is beefed up at all with this new stuff? Now I’m curious, so I’ll probably crunch some numbers later today (busy right now).

    On the 4K front though, it really doesn’t make sense to set the resolution that high when you could do 1440p with better lighting or some such (for many games on proper PCs, and even more so on mobile).

    • Ushio01
    • 7 years ago

    In-between two Intel nodes but not a full node behind.

    [url<]http://www.eetimes.com/document.asp?doc_id=1329279[/url<]

    Intel 22nm: minimum gate length 26nm, contacted gate pitch 90nm, minimum metal pitch 80nm
    Intel 14nm: minimum gate length 20nm, contacted gate pitch 70nm, minimum metal pitch 52nm
    Samsung 14nm: minimum gate length 20nm, contacted gate pitch 78nm, minimum metal pitch 64nm

    • willmore
    • 7 years ago

    A Cortex-A32 can be under a quarter of a square mm at 28nm.

    At 10nm, you’re getting into the “smaller than the period at the end of this sentence” realm.

    • tipoo
    • 7 years ago

    And ARM, ATI, and AMD together is…Mama Triad.

    Nefarious!

    • ronch
    • 7 years ago

    It’s not gonna be the next big thing, that’s for sure.

    • Flying Fox
    • 7 years ago

    Both are promising mass production starting end of 2016. We will see.

    • tipoo
    • 7 years ago

    A53’s die area is 0.7mm2 at 20nm, so A73 under one mm2 on 10nm is well believable. Cores really are incredibly tiny!

    • odizzido
    • 7 years ago

    Are you sure it’s less than one square mm? That’s incredibly tiny.

    • NTMBK
    • 7 years ago

    I suspect the way of 3D. It’s too expensive, too demanding, too isolating, and too niche.

    • blastdoor
    • 7 years ago

    I might have missed it but I don’t see anything ruling out a 16nm A73.

    If there is a 10nm out in 2016, I’ll guess it’s from Samsung not TSMC.

    • demani
    • 7 years ago

    For onscreen use, sure (unless you are talking Cardboard/Gear-style VR). But the movement is towards also using phones to throw video to larger screens (Miracast, Chromecast, AirPlay, HDMI out), so there are more and more places the capability might be useful, even if it isn’t for handheld onscreen usage.

    • demani
    • 7 years ago

    I do wonder if this is going to go the way of 3D or the way of HD. It seems like 4K is a given as HD will simply be supplanted. But goggles are goggles, and VR is more solitary than 3D is. But the effects are way better, and the potential is much higher.

    • willmore
    • 7 years ago

    Didn’t they just get some 10nm test chips back? May to December is plenty of time.

    • Narishma
    • 7 years ago

    That’s comparing a Cortex A73 at 2.8Ghz on a 10nm process to an A72 at 2.5Ghz on a 14/16nm process. Not exactly apples to apples.

    • ronch
    • 7 years ago

    Exactly.

    • tipoo
    • 7 years ago

    For normal phone use, yeah it’s a silly waste of battery life (smaller pixels take more light to push the same luminance through) and GPU power, but where it really matters is VR, where your eyes are both so close to the screen and the horizontal resolution is halved per eye. My 750p 6S is no fun in VR, for example. Even on 1080p phones you can see screen doors.

    Question is if the GPUs are ready to do anything interesting in 4K for phone VR. Cardboard is limited by no controller support, but Daydream is interesting.

    • tipoo
    • 7 years ago

    Seemed like the two phones using it were topping the CPU tests, though it was also paired with a weaker GPU in the Kirins.

    [url<]http://www.anandtech.com/show/10217/the-lg-g5-review/7[/url<] Mate 8 and P9 there

    • tipoo
    • 7 years ago

    I assume their 10nm is also more around the ballpark of Intel’s 14nm, as every other fab seems to like to advertise a node ahead of what Intel would call it, i.e. others’ 16/14nm just coming in around the size of Intel’s 22nm.

    • DreadCthulhu
    • 7 years ago

    VR headsets can take advantage of all the pixels you can throw at them; even 1440p phones have a noticeable screen door effect. And apparently people are using them. Other than that, you are right, going past 1080p in a phone is an exercise in diminishing returns.

    • DreadCthulhu
    • 7 years ago

    Well, all the hardware makers want there to be some new “killer app” that current stuff is inadequate to run. So that people have a reason to go out and buy new hardware. Cell phones have been “good enough” for years now, and computers even longer, so hardware makers are really pushing VR to get more people to upgrade. Note that cell phones have been hit less hard by the “good enough” issues due to the fact that people break/lose them fairly often, but I am sure hardware makers would love it if people rushed out the buy a bunch of new phones to run VR stuff with.

    • rechicero
    • 7 years ago

    For sure and… for what? Apart from wasting battery, more than 1080p in phones is good for nothing. 1440p is already a waste of battery. You need good sight and to look sooo close to see the pixels in a 720p 5″ phone. 1080p, OK, although the experience is going to be the same, I can understand that. But 1440p is already a waste and 4K… just absurd. You’ll never tell the difference, except for the worse battery life.

    • demani
    • 7 years ago

    Well, they are much smaller than Intel/AMD (ha!) so they can get costs down more easily and ramp up more quickly. But yeah, either they are going to be on the bleeding edge, or the initial release will be on ~14nm so they can ship (they did mention numbers on existing processes, so that size is clearly accounted for).

    • ronch
    • 7 years ago

    Crazy how world+dog is jumping on the VR bandwagon.

    • bfar
    • 7 years ago

    VR marketing is beginning to sound like 3D glasses. I can’t help thinking VR will do very well as a niche/enthusiast product, but I’m not so sure about the mainstream market. I think it would be a big risk to bet on it.

    • bfar
    • 7 years ago

    Yea, I’d be chasing 4k on my Desktop and TV before I’d worry about my dinky 5 inch phone screen. But I’d like to have it eventually, for sure.

    • WasabiVengeance
    • 7 years ago

    > VR will still be an order of magnitude out of reach of this stuff, and 4K gaming even further

    Not really. The performance level for VR/4k gaming on new PC titles will be out of reach, but they’ll do just fine for games written for their level of performance. Such games will simply have simpler models/smaller textures. These things might be able to run quake3 at 4k. Is it on the same level of 4k that your nvidia 1080 is doing? no. Is it running at actual 4k? yes. Could it run the same game on a vr headset? yeah, probably.

    • maroon1
    • 7 years ago

    I don’t take ARM claims seriously. Cortex A72 was good, but not as good as ARM claimed

    Also, the 30% performance and efficiency increase comes from comparing a 10nm A73 @ 2.8GHz to a 16nm Cortex-A72, according to ARM's slides (read the small text at the bottom).
    [url<]http://images.anandtech.com/doci/10347/1_575px.PNG[/url<] So many of these improvements come from 10nm. If they compared a 16nm A73 to a 16nm A72, the gap would be smaller

    • Anonymous Coward
    • 7 years ago

    I wonder how it would look if a person carefully collected all the claimed performance deltas over the years, then compared that to reality.

    • Anonymous Coward
    • 7 years ago

    4K is the new version of accelerating the internet, or whatever P4 was supposed to do.

    • rechicero
    • 7 years ago

    Are they really thinking about 4K for mobile? In sub-10″ devices it seems like… really, really a waste (trying not to be offensive).

    • cygnus1
    • 7 years ago

    I don’t know why I’m up so early or why I’m impressed with this, but good work on getting two purports in there

    • USAFTW
    • 7 years ago

    Wow they’re talking about 10nm in 2016? Can we have the weather forecast from hell while we’re at it?

    • USAFTW
    • 7 years ago

    Yeah they have pretty much the same font.

    • DreadCthulhu
    • 7 years ago

    Well, Samsung has sold/shipped a ton of GearVR units, and Google Cardboard headsets are common enough. So people are using their phones for VR. Now, I don’t know what the stats are for video watching versus gaming using these VR headsets, but obviously it is possible to create fun VR games that can run on cellphone SoC; sure they might be PS2 level graphics, but people had a ton of fun playing PS2 games.

    • tsk
    • 7 years ago

    Looks impressive, I hope my Nexus 5 survives this year and there will be a nice phone with this chip available when it’s due time to upgrade.

    • kuttan
    • 7 years ago

    Cortex A73 having 30% more performance over the Cortex A72 is an impressive gain.

    • synthtel2
    • 7 years ago

    Wait, why is ARM of all companies using 4K gaming and VR as a justification for more performance? VR will still be an order of magnitude out of reach of this stuff, and 4K gaming even further. [url=http://dilbert.com/strip/1994-02-22<]Buzzwords[/url<], I guess.

    • ronch
    • 7 years ago

    The ‘A’ and the ‘M’ in ‘ARM’ are pretty much the ‘A’ and ‘M’ in ‘AMD’.

    /conspiracy theories

    • ronch
    • 7 years ago

    All this talk about VR. I’m sure it’s nice, but these days I’m playing Doom 2.

    • synthtel2
    • 7 years ago

    It sounds like this is all about 10nm, but they expect it late this year or early 2017? Someone’s pretty optimistic.
