This is how the 3D tech in Google’s Project Tango works

Jun 5, 2014

Google’s Project Tango is one of those ideas that’s cool enough to want, but a bit mystifying to grasp. We’ve seen the 3D capability in action before, but how is it done? Google has partnered with Mantis Vision to bring its MV4D technology to both Project Tango devices. Making use of the devices’ various cameras, MV4D technology could someday make its way to your smartphone or tablet, too.


MV4D works much as you might imagine, and if you think back to some of those videos we saw from early adopters of Project Tango, you’ll get it. A sensor first casts a light pattern onto objects in the area, kind of like a fishing net that’s invisible to the naked eye. The aim is to lay a grid over the scene so the other cameras and sensors can do their job.
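To picture that "fishing net," here is a minimal sketch of what a projected structured-light grid might look like as an image buffer. The actual MV4D pattern is proprietary (and cast in light we can't see), so the dimensions, pitch, and simple line grid below are purely illustrative assumptions:

```python
import numpy as np

def make_grid_pattern(width=640, height=480, pitch=16):
    """Build a simple binary grid pattern, a stand-in for the
    'fishing net' a structured-light projector casts onto the scene.
    (The real MV4D pattern is proprietary; values here are made up.)"""
    pattern = np.zeros((height, width), dtype=np.uint8)
    pattern[::pitch, :] = 255   # horizontal lines of the net
    pattern[:, ::pitch] = 255   # vertical lines of the net
    return pattern

pattern = make_grid_pattern()
print(pattern.shape)  # (480, 640)
```

In a real system this grid deforms as it falls across surfaces at different depths, and those deformations are what the cameras measure.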

A secondary camera (or more than one), synced to the light-casting sensor, captures the grid as it’s laid out. The camera feeds that info to an algorithm that produces an accurate map of the scene on the device. The light grid and its capture are what give the images that RoboCop-esque look.
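The algorithm that turns the captured grid into a depth map typically relies on triangulation: a grid point appears shifted in the camera image relative to where the projector cast it, and that shift (disparity) encodes distance. This is the standard structured-light relationship Z = f * B / d, sketched below; the focal length and projector-camera baseline are invented example values, not Tango specs:

```python
def depth_from_disparity(disparity_px, focal_px=525.0, baseline_m=0.075):
    """Classic structured-light triangulation: Z = f * B / d.
    focal_px (focal length in pixels) and baseline_m (projector-to-
    camera distance in metres) are illustrative assumptions only."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A grid point shifted by 26.25 px maps to a surface ~1.5 m away:
print(round(depth_from_disparity(26.25), 3))  # 1.5
```

Running this per grid point over the whole captured image is what yields the dense 3D map of the room.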

Mantis Vision also has a patent-pending driver for its light sensor, which can be customized per device. The Project Tango tablet is the first device to use MV4D in its entirety, and it will let developers quickly scale the data into applications we can use. MV4D serves as the core engine for Project Tango, with the aim of bringing professional-grade 3D capability to everyone.
