Computational photography trailblazer Light has quietly given up its smartphone camera ambitions, despite inking deals with Nokia, Sony, and others to use its complex multi-sensor tech for better images. The company’s approach eschewed the “one big sensor” design then prevalent in smartphones, replacing it with a cluster of multiple sensors whose data would be combined to produce a final frame.
In the case of the Nokia 9 PureView, for example, Nokia and Light worked on a cluster of five separate camera sensors. The Android smartphone would capture simultaneously with its two 12-megapixel color cameras and three 12-megapixel monochrome cameras, each tuned differently and using different lenses.
That raw data would then be combined, with software massaging it into an image that promised better dynamic range, more accurate colors, and greater detail. On the one hand, that could indeed be the case: some of the images the Nokia 9 PureView took were notably better than what we’ve come to expect from most smartphone cameras. However, the results weren’t consistently better, and the post-processing involved turned out to be time-consuming and occasionally glitchy.
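To see why fusing several captures can beat a single one, here is a toy sketch of the general principle, not Light's actual pipeline: averaging multiple noisy captures of the same scene cuts random sensor noise, which is one of the ways multi-frame fusion recovers detail and dynamic range. All the numbers below (pixel count, noise level, frame count) are hypothetical.

```python
import random

random.seed(0)
N_PIXELS, N_FRAMES = 1000, 5
TRUE_BRIGHTNESS, NOISE_SIGMA = 100.0, 10.0

# Simulate five captures of the same flat scene, each with Gaussian sensor noise.
frames = [[TRUE_BRIGHTNESS + random.gauss(0, NOISE_SIGMA) for _ in range(N_PIXELS)]
          for _ in range(N_FRAMES)]

# Naive fusion: per-pixel average across all frames.
fused = [sum(frame[i] for frame in frames) / N_FRAMES for i in range(N_PIXELS)]

# Mean absolute error versus the true scene, for one frame and for the fusion.
single_err = sum(abs(v - TRUE_BRIGHTNESS) for v in frames[0]) / N_PIXELS
fused_err = sum(abs(v - TRUE_BRIGHTNESS) for v in fused) / N_PIXELS
```

Averaging N frames shrinks random noise by roughly the square root of N; real pipelines like Light's are far more elaborate (alignment, demosaicing, per-sensor weighting), but this is the statistical intuition they build on.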
Despite that, Light cut deals with both Sony and Xiaomi to build on its homegrown technology. Sony and Light planned to develop camera sensors that would combine image data from four or more sensors, the two announced in early 2019. Only days later, Light and Xiaomi said they’d be making multi-camera devices together.
Now, though, that looks much less likely. With all quiet from the Sony and Xiaomi partnerships, and Nokia’s successor to its Android phone seemingly looking to Zeiss alone for its camera talents, Light’s absence from the segment had become conspicuous. Sure enough, Light is “no longer operating in the smartphone industry,” the company confirmed to Android Authority.
Instead, it will focus on how its camera tech could be used in autonomous driving applications. “Light’s 3D depth perception platform will redefine how automobiles perceive the driving environment,” the company’s updated website suggests. “Using our breakthrough innovations in multi-view stereo and hybrid signal processing, Light’s sensing technology will provide incredibly rich, accurate, and reliable depth at a higher resolution, and, in real-time.”
Certainly, it’s an area seeing significant growth right now, and that’s only likely to intensify over the next few years as self-driving vehicles move from prototype stage into actual production and deployment. Even in partially autonomous applications, meanwhile, 3D depth perception could be useful for driver-assistance and active-safety features such as lane keeping, obstacle avoidance, and emergency braking.
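The multi-view stereo approach Light cites rests on simple triangulation geometry: an object seen by two horizontally offset cameras shifts between the two images, and that shift (disparity) determines its distance. A minimal sketch of the standard relation, with purely hypothetical camera numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth in meters for a rectified stereo pair.

    focal_px:     camera focal length, in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the object between the two images, in pixels
    """
    return focal_px * baseline_m / disparity_px

# Nearer objects shift more between views, so larger disparity means smaller depth.
near = depth_from_disparity(focal_px=800.0, baseline_m=0.1, disparity_px=40.0)  # 2.0 m
far = depth_from_disparity(focal_px=800.0, baseline_m=0.1, disparity_px=8.0)    # 10.0 m
```

With more than two views, as in Light's multi-camera arrays, the same triangulation can be repeated across many camera pairs and fused, which is what makes denser and more reliable depth maps possible.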
Light first made headlines back in 2015 with the L16 camera. Taking on point-and-shoot and Micro Four Thirds cameras, it combined sixteen different sensors that all worked in tandem. Reviews were mixed, however, crediting the L16 as innovative but criticizing its speed and consistency. Support for the L16 ended at the close of 2019.
Though the shift in attention is unexpected, it does make some sense. Light counts Alphabet’s GV investment fund among its backers – along with SoftBank, Leica, and others – and GV is regularly on the hunt for new technologies that Alphabet’s Waymo self-driving car division could implement.