Imagine this: in just a few years, we'll have phones that can digitally map our surroundings with pinpoint 3D accuracy. Our homes, with every nook and cranny; our workplaces; the shopping centres and favourite cafes we frequent; all the places we visit, mapped out, and all that information on our phones. In effect, this is a worldwide grid, and we the people are doing the footwork to build it. Think of that!
We imagine drones flying through trees in a forest (instead of bumping into them), or telepresence robots picking their way through a crowded convention hall. Some robots can already do this, but perhaps Tango would sharpen their abilities.
Truth is, no one knows what applications might grow out of Project Tango. Google is offering developers 200 Project Tango prototypes along with an API for writing apps.
While the tech would be cool in smartphones, and perhaps cooler still linked to a virtual or augmented reality interface like Glass or the Oculus Rift, it's already being used in robotics, and that may be where it proves most powerful.
Computer vision combined with rudimentary machine learning algorithms is already giving robots more autonomy.
If Project Tango further miniaturizes computer vision sensors and makes computer vision processors more efficient, we could see improvements crossing over into the firm’s new robotics division. Indeed, one of Google’s recent robotics acquisitions, Industrial Perception, already integrates similar tech into robotic arms.
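To make the idea concrete, here is a minimal sketch of the kind of decision a robot could make from the depth data a Tango-style sensor provides: look at a grid of distance readings and steer away from the nearest obstacle. This is purely illustrative; the function name, the grid layout, and the clearance threshold are all assumptions, not anything from Google's actual API.

```python
# Hypothetical sketch: obstacle avoidance from a depth map, the kind of
# signal a Tango-style depth sensor could feed a drone or telepresence
# robot. All names and thresholds here are illustrative assumptions.

def steer_from_depth(depth_map, min_clearance=1.0):
    """Pick a steering direction from a grid of depth readings (metres).

    depth_map: a list of rows, each a list of distances to the nearest
    surface; columns run left to right across the camera's view.
    Returns 'forward', 'left', or 'right'.
    """
    cols = len(depth_map[0])
    mid = cols // 2
    # Nearest obstacle on each half of the field of view.
    left_min = min(row[c] for row in depth_map for c in range(mid))
    right_min = min(row[c] for row in depth_map for c in range(mid, cols))
    if min(left_min, right_min) >= min_clearance:
        return "forward"  # the path ahead is clear
    # Otherwise, turn toward whichever side has more clearance.
    return "right" if right_min > left_min else "left"

# A drone seeing a tree trunk close on the left half of its view:
frame = [
    [0.4, 0.5, 3.0, 3.2],
    [0.3, 0.6, 2.9, 3.1],
]
print(steer_from_depth(frame))  # prints "right"
```

Real systems fuse depth with odometry and build persistent 3D maps (which is Tango's whole point), but even this toy rule shows why cheap, miniaturized depth sensing matters so much for robots.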