Google wants smartphones to be able to see and understand the world around them. That's the impetus behind Project Tango, which has produced a prototype handset capable of mapping its surroundings in real time. Johnny Lee, who works in Google's Advanced Technology and Projects Group and leads the Tango team, explains the project in the following video:
The Tango team has been working with numerous partners to bring a decade's worth of robotics and computer vision research to bear on the project. The initial prototype is a 5" handset with a 4MP camera, a depth sensor, a motion tracking camera, and two "computer vision processors." These components combine with other onboard sensors to sample the environment over 250,000 times per second, allowing the device to produce a 3D map of its surroundings on the fly.
As one might expect, the prototype runs Android. Spatial awareness hasn't been built into the OS, but Google has created APIs that stream sensor data to Java and C/C++ applications in addition to the Unity game engine. These APIs are still in development, and Google is encouraging others to contribute to the project and write their own applications for it. Interested programmers can apply to receive one of 200 prototype dev kits. Those who are willing to release their code under the Apache 2.0 open source license will apparently be given priority for the kits, which are scheduled to ship out March 14.
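Since the APIs are unreleased, the details of how apps consume that sensor stream are unknown, but a callback-driven design would be typical for this kind of data. The sketch below is purely hypothetical — every class and method name here (`PoseListener`, `SensorStream`, `register`, `push`) is invented for illustration and is not part of any published Tango API:

```java
// Hypothetical sketch of a callback-based sensor stream; none of these
// names come from Google's actual (unreleased) Tango APIs.
interface PoseListener {
    // Receives one position sample from the device's motion tracking.
    void onPoseUpdate(double x, double y, double z);
}

class SensorStream {
    private PoseListener listener;

    // An app registers a listener to be notified of new samples.
    void register(PoseListener l) {
        listener = l;
    }

    // Simulates the device pushing a new position sample to the app.
    void push(double x, double y, double z) {
        if (listener != null) {
            listener.onPoseUpdate(x, y, z);
        }
    }
}
```

A push model like this would suit the hardware's sampling rate: the app reacts only when fresh data arrives rather than polling a quarter-million times per second.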
The technology has intriguing potential for the visually impaired, and it should aid Google's efforts to map indoor environments. Better spatial awareness could also make augmented reality applications a lot more compelling. I have a feeling that smartphones aren't the real target for computer vision, though. Google has acquired several robotics companies in the past few months, including industry giant Boston Dynamics. Surely, our new robot overlords will need to be able to see.