This camera tech could make Android the photographer's choice

Never mind swiping through endless filters trying to find the perfect Instagram effect on your latest photo: Google and MIT want to build that right into the viewfinder. Teams from the search giant and the Massachusetts Institute of Technology have been collaborating on a system that promises automatic retouching of images in real time, delivering pro-photographer-style results before you've even hit the capture button. It's the latest example of how machine learning can not only improve everyday tasks but also cut our reliance on the cloud.

The collaborative goal was to cut post-processing out of the photo-taking sequence. There are already plenty of effects and filters that can improve the raw output of a smartphone's camera, but they're almost always applied after the shot is taken. Automatic filters do exist – and, in the case of hit apps like Prisma, can be hugely popular – but they demand either cloud processing or system-intensive on-device processing.

This new system, though, addresses those shortcomings. It starts with a machine learning model trained on five thousand pairs of images: each pair comprises a raw photo and a retouched version edited by one of five professional photographers. "It's so energy-efficient, however, that it can run on a cellphone," MIT says, "and it's so fast that it can display retouched images in real-time, so that the photographer can see the final version of the image while still framing the shot."
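
For the curious, here's roughly what that paired-supervision setup looks like in code. It's a minimal sketch in PyTorch, where `model`, `loader`, and the mean-squared-error loss are stand-ins for illustration rather than the researchers' actual choices:

```python
import torch.nn.functional as F

def train_epoch(model, loader, optimizer):
    """One pass over a paired dataset of (raw, retouched) photos."""
    for raw, retouched in loader:           # e.g. the 5,000 professional pairs
        pred = model(raw)                   # the network's proposed retouch
        loss = F.mse_loss(pred, retouched)  # penalize deviation from the pro edit
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The point is simply that no editing rules are ever spelled out: the network only ever sees raw inputs alongside a professional's finished edits, and learns to close the gap.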

It's actually the second generation of work that MIT graduate student Michaël Gharbi and his colleagues started earlier. Back then, the system relied on cloud processing: the phone sent a low-resolution version of the image to a remote server, which processed it and returned a "transform recipe". The mobile device would in turn apply that recipe to the original, high-resolution image.
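
To illustrate the idea – and only the idea – here's a deliberately simplified sketch of a "transform recipe" in Python, where a single global affine color transform stands in for the far richer per-region recipe the researchers actually compute, and the function names are hypothetical:

```python
import numpy as np

def fit_recipe(low_res_src, low_res_dst):
    """Server side: least-squares fit of an affine map from the original
    low-res colors to the retouched low-res colors."""
    src = low_res_src.reshape(-1, 3).astype(float)
    dst = low_res_dst.reshape(-1, 3).astype(float)
    src_aug = np.hstack([src, np.ones((len(src), 1))])      # add a bias column
    recipe, *_ = np.linalg.lstsq(src_aug, dst, rcond=None)  # shape (4, 3)
    return recipe

def apply_recipe(high_res, recipe):
    """Phone side: apply the compact recipe to every full-resolution pixel."""
    h, w, _ = high_res.shape
    px = high_res.reshape(-1, 3).astype(float)
    px_aug = np.hstack([px, np.ones((len(px), 1))])
    return np.clip(px_aug @ recipe, 0.0, 1.0).reshape(h, w, 3)
```

The appeal of the design is bandwidth: only a low-resolution image goes up, and only a compact recipe comes back down, yet the edit lands on every full-resolution pixel.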

According to Gharbi, it was that system which caught Google's attention, but despite the company's business being rooted in the cloud, it wanted to figure out a way to do everything locally. The two teams eventually collaborated on this new system, which cuts out the remote processing by streamlining the heavy lifting on-device. Thanks to machine learning, and clever paring back of the original image, it's a system that promises not to chew through your battery life in short order.

The most efficient aspects of the original approach are retained. The machine learning works on a low-resolution version of the photo – or of the camera feed that would usually be shown in the in-app viewfinder – which has been split into a 16 x 16 x 8 three-dimensional grid. Each 16 x 16 layer corresponds to pixel locations, while each of the eight stacked layers relates to a different band of pixel intensities. A formula in each cell of the grid determines how the colors of the raw image are modified.
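
As a hypothetical sketch of that structure – where the names, and the choice of a 3 x 4 affine matrix as the "formula", are assumptions for illustration rather than the teams' actual code – the grid and its per-cell formulae might look like this:

```python
import numpy as np

# Illustrative only: a 16 x 16 x 8 grid in which every cell holds a 3 x 4
# affine color "formula", mirroring the description above.
grid = np.zeros((16, 16, 8, 3, 4))

def fractional_coords(x, y, luma, width, height):
    """Map a pixel to fractional (gx, gy, gz) grid coordinates: position
    selects the 16 x 16 cell, luminance selects the intensity layer."""
    gx = x / width * 15    # horizontal coordinate in [0, 15]
    gy = y / height * 15   # vertical coordinate in [0, 15]
    gz = luma * 7          # intensity coordinate in [0, 7]
    return gx, gy, gz

def apply_formula(formula, rgb):
    """Apply one cell's 3 x 4 affine formula to an RGB value in [0, 1]."""
    return formula @ np.append(rgb, 1.0)  # matrix times [r, g, b, 1]
```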

"Roughly speaking, the modification of that pixel's color value is a combination of the formulae at the square's corners, weighted according to distance," MIT explains. "A similar weighting occurs in the third dimension of the grid, the one corresponding to pixel intensity."

The end result is a recipe for how the original image should be modified. By training on different photographers' source images, the system can mimic how each would approach photo editing. Google also used its own new high-dynamic-range (HDR) system to train the editor, and found that "the new system produced results that were visually indistinguishable from those of the algorithm in about one-tenth the time."

That's fast enough to show in real-time in the camera app without degrading into unusable lag, and without putting such a strain on the processor that battery life suffers. It's unclear when – or if – such a system will make it to the Android camera app, though it's clear that computational photography is shaping up to be the next big battleground in mobile devices. Earlier this week, it was leaked that Apple is working on a new smart camera system in iOS 11 that could better identify different scenes and instantly adjust its settings to suit.