Next-gen Qualcomm Spectra ISPs bring more depth to mobile devices

Smartphone cameras aren’t just about snapping static selfies for Instagram any longer. The pixels that phone camera modules capture these days are often in motion, and they're increasingly used to let machine-learning algorithms see the world around the device for augmented- and virtual-reality applications (collectively, XR).

Qualcomm’s Snapdragon SoCs and associated hardware will more often than not serve as the platform for these increasingly sophisticated capture and processing tasks. The company’s chips power the majority of high-end Android devices today, so changes to the IP blocks it offers in its SoCs will underpin the novel smartphone features of tomorrow. Today, Qualcomm is teasing some of the features its next-generation Spectra image signal processors, or ISPs, could allow device makers to integrate into their products when the next generation of Snapdragons arrives later this year. We were given a sneak peek at the capabilities of these ISPs, and at the supporting efforts the company is making around them, at SIGGRAPH a couple of weeks ago.

The headlining feature of the next-gen Spectra is its extensive support for depth-sensing technology. Perhaps best known as the fruit of Google’s Tango program, depth-sensing tech lets devices build 3D maps of the spaces around them for use with XR applications. Depth information can be used to measure spaces and place apparently life-size objects within them, or perform what’s known as “inside-out” VR tracking for a self-contained headset with no external sensors.

The next-gen Spectra will offer support for two primary methods of depth sensing. The passive method only requires that a device have a pair of side-by-side primary cameras on board. This method relies on the inherent parallax in dual-camera setups to infer depth information from the difference between two images. Qualcomm suggests this method will begin to filter into lower-cost devices using its SoCs.
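To illustrate the parallax principle behind passive depth sensing, here's a minimal sketch (not Qualcomm's implementation) of the standard stereo relationship: depth is inversely proportional to the disparity, or the horizontal shift of a feature between the two cameras' images. The focal length and baseline values below are made up for the example.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Recover depth in meters from stereo disparity.

    Z = f * B / d, where f is the focal length in pixels, B is the
    baseline (distance between the two cameras) in meters, and d is
    the disparity in pixels. Nearby objects shift more between the
    two views, so larger disparity means smaller depth.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A feature that shifts 20 px between two cameras 1 cm apart, imaged
# with an 800 px focal length, sits about 0.4 m from the device.
print(depth_from_disparity(800, 0.01, 20))  # 0.4
```

The hard part in practice is finding which pixel in one image corresponds to which pixel in the other; that matching step is what the ISP hardware accelerates.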

For higher-end depth-sensing experiences in XR, Spectra ISPs will be able to gather information from active depth-sensing modules, as well. These modules use what’s known as “structured light,” or a patterned emission of infrared light, to build high-resolution clouds of depth points using an infrared camera module. The company says its sensor can conservatively collect over 10,000 points of data at reasonably high frame rates—about 40 FPS in the demo we were shown.
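Once the infrared camera has recovered a depth value per pixel, turning that depth image into a point cloud is a matter of back-projecting through a camera model. The sketch below assumes a simple pinhole model with hypothetical intrinsics; it's an illustration of the geometry, not Qualcomm's pipeline.

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3D camera coordinates.

    depth: 2D list of per-pixel depths in meters (0 = no return).
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns a list of (X, Y, Z) points.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth return
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points

# Toy 2x2 depth map with one missing return; principal point at the
# image center of the 2x2 grid.
cloud = depth_to_point_cloud([[1.0, 0.0], [2.0, 1.0]],
                             fx=500, fy=500, cx=0.5, cy=0.5)
print(len(cloud))  # 3 valid points
```

At the roughly 10,000 points and 40 FPS Qualcomm quotes, this back-projection runs around 400,000 times per second, which is one reason it lives in dedicated hardware rather than application code.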

These point clouds could be used for hand tracking in VR, biometric identification using facial detection, and what’s called SLAM, or simultaneous localization and mapping, for standalone VR headsets. SLAM is the key approach to performing inside-out tracking for those headsets. Of course, creative developers will likely find many more uses for depth-mapping tech, as well.

Other capabilities of Spectra ISPs will include hardware-accelerated multi-frame noise reduction, in which multiple images are computationally combined to improve image quality; accelerated electronic image stabilization; motion-compensated temporal filtering for higher-quality video; and more features that the company plans to announce later this year.
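The idea behind multi-frame noise reduction can be sketched simply: averaging several aligned captures of the same scene suppresses zero-mean sensor noise (its standard deviation falls roughly as one over the square root of the frame count). This toy version skips the alignment and motion handling that make the real feature hard:

```python
def average_frames(frames):
    """Pixel-wise mean of equally sized frames (flat lists of values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy 4-pixel "captures" of a scene whose noise-free signal
# is [10, 20, 30, 40]; the merged frame lands back on that signal.
frames = [[12, 18, 31, 39],
          [ 9, 22, 28, 42],
          [ 9, 20, 31, 39]]
print(average_frames(frames))  # [10.0, 20.0, 30.0, 40.0]
```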

To help device manufacturers get good pixels into Spectra ISPs to begin with, Qualcomm is working with its partners on an effort called the Spectra Module Program. Instead of handing companies image-processing hardware and a driver and leaving them to figure out how best to combine those building blocks with an image sensor, Qualcomm says it’s been working for some time to offer hardware developers complete packages of image sensors, image signal processors, and pre-optimized software. The company says this approach helps its partners bring devices to market more quickly and lowers the research-and-development costs of tuning and integrating the sensor with a device.

The Spectra Module Program has offered device makers three sensor modules so far: a version of the Sony IMX298 with phase-detection autofocus, a monochrome-and-Bayer sensor pair, and an optical zoom module. As just one example of a company that has taken advantage of this program, the OnePlus 5's dual-lens camera system uses the optical zoom module Qualcomm described.

Qualcomm is adding modules with a wider range of capabilities to its portfolio, as well. First among these is an iris authentication camera module that uses an IR emitter and sensor to provide what the company claims is a secure and reliable method of authentication.

Once users profile their eyes and enroll them on the device, Qualcomm says the module can’t be fooled by 3D models, 2D images, or other spoofing techniques. To keep this biometric info safe, the module is meant to run in tandem with Qualcomm’s ARM TrustZone protected memory. The company will also offer its device partners a dual-sensor module for passive depth sensing and a three-sensor module for active depth sensing in tandem with the Spectra ISP.

As VR and AR experiences become more and more woven into the fabric of the capabilities owners expect from their mobile devices, upcoming generations of Android devices will likely be taking advantage of some or all of the depth-sensing technologies Qualcomm showed off to us at SIGGRAPH. Expect to hear more about these technologies later this year as Qualcomm readies its next-gen Snapdragon platforms.

Comments closed
    • ronch
    • 5 years ago

    Sorry, Qually, still Ryzen for me.

    Um, yeah..

    • meerkt
    • 5 years ago

    More interesting tech than what you usually find in the endless stream of yearly model updates.

    • UberGerbil
    • 5 years ago

    Well, if they were flaunting it at SIGGRAPH they’ve probably been working with ODMs and IHVs for a while. Some of this might show up in some devices before the end of the year. That said, there’s a longer lead time with phones because of the regulatory approval, so it’s not a given. And it’ll probably take a couple of iterations (of software at least) before it really delivers.

    • Stochastic
    • 5 years ago

    Are any of these features going to make it to Android devices this year (e.g., Pixel 2)? Or are they planned for 2018 phones?

    • Bumper
    • 5 years ago

    Lol. It crossed my mind that you might be using that today. My comment will get buried in the trenches though. :^]

    • drfish
    • 5 years ago

    Hey, stop spoiling my Shortbread! 😛

    • Bumper
    • 5 years ago

    And today we celebrate the Pythagorean theorem, without which none of this would be possible. 8^2+15^2=17^2
    Pythagorean theorem day!

    • cygnus1
    • 5 years ago

    Is structured light what Microsoft uses for depth sensing in the Kinect?

    • chuckula
    • 5 years ago

    Some background on those topics:

    Distance measurement based on phase-shift measurement, as opposed to old-school direct time of flight (like sonar).

    Structured light.

    I've always found structured light to be a somewhat non-intuitive but cool process.

    • psuedonymous
    • 5 years ago

    Assuming those ‘raw data’ views are actual views rather than mere marketing bullshots, the periodic striations (cyclic along the Z axis) indicate the depth-sensing camera is a phase-based time-of-flight system like the Kinect 2, rather than a pattern-projecting Structured Light system like the Kinect 1.
