Adobe built an AI to keep Photoshopped faces honest

Photoshop has undoubtedly changed expectations of modern beauty, but a new research project could help push back and make faked images easier to spot. The handiwork of two Adobe researchers and a team from UC Berkeley, the new image forensics tool can spot where Photoshop's Face Aware Liquify feature has been applied.

If you're not familiar, that's the tool that lets image editors make quick tweaks to facial features. It automatically detects the eyes, mouth, and other parts of the face, and then offers sliders to control things like eye size and position, the shape of the nose and mouth, the width of a smile, and even the overall proportions of the face.

While it's a useful tool for fixing inconvenient squints or grimaces in family photos, it also has clear potential for misuse. Making almost imperceptible adjustments to people in an image can have a huge impact on the overall tone of the shot, not to mention on how the story behind it is interpreted. And humans, it turns out, are not especially adept at spotting those changes.

The researchers built their test set from thousands of photos scraped from the internet, to which Face Aware Liquify had been applied. A subset of those photos was then selected, along with a set of images that a human artist had modified. "This element of human creativity broadened the range of alterations and techniques used for the test set beyond those synthetically generated images," Adobe says.
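To get a sense of how a training set like that could be assembled, here's a minimal Python sketch that pairs a scraped photo with a subtly warped copy. The smooth random displacement field and the "face.jpg" file name are stand-ins chosen for illustration; the actual research scripted Photoshop's own Face Aware Liquify rather than a generic warp like this one.

```python
# Hypothetical sketch: pairing an original photo with a subtly warped copy.
# The smooth random displacement below stands in for Face Aware Liquify;
# it illustrates the idea of generating "edited" training examples, not
# Adobe's actual pipeline.
import cv2
import numpy as np

def random_smooth_warp(image: np.ndarray, max_shift: float = 5.0) -> np.ndarray:
    """Apply a small, smooth random displacement to mimic subtle retouching."""
    h, w = image.shape[:2]
    # Low-resolution random offsets, upsampled so the warp stays smooth.
    coarse = np.random.uniform(-max_shift, max_shift, (8, 8, 2)).astype(np.float32)
    flow = cv2.resize(coarse, (w, h), interpolation=cv2.INTER_CUBIC)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)

original = cv2.imread("face.jpg")          # one scraped photo
warped = random_smooth_warp(original)      # its "edited" counterpart
```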

Human participants asked to identify which image in each pair had been edited managed only a 53-percent success rate, barely better than chance. A convolutional neural network (CNN) trained on the image set, however, achieved as much as a 99-percent success rate.
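For illustration, fine-tuning that kind of binary classifier might look like the following PyTorch sketch. The ResNet-50 backbone, learning rate, and train_step helper are assumptions made for the example; the researchers' exact architecture and training recipe may well differ.

```python
# A minimal sketch of training a CNN to flag warped faces, assuming
# pre-built batches of original and edited face crops. The ResNet-50
# backbone here is an assumption, not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: original, warped

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of face crops (0=original, 1=warped)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```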

More than that, the network was also able to identify where in the image the warping tool had been applied, and even roll back those changes to approximate the original photo.
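Conceptually, that rollback works by having a network predict the per-pixel displacement (the "flow") the warp introduced, then resampling the image along the reversed flow. The sketch below uses a toy placeholder network and a simple first-order inversion, both assumptions for illustration rather than the researchers' actual model.

```python
# A rough sketch of the "undo" idea: a network predicts a 2-channel
# displacement (flow) field, and the image is resampled along the
# negated flow. The tiny network is a placeholder for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowPredictor(nn.Module):
    """Maps an RGB image to a per-pixel (dx, dy) displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, x):
        return self.net(x)

def unwarp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Resample `image` along the negated flow to approximately invert the warp."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel displacements to grid_sample's [-1, 1] coordinate range.
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)])
    grid = base - flow.permute(0, 2, 3, 1) * scale
    return F.grid_sample(image, grid, align_corners=True)
```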

Obviously, it's in Adobe's best interest to make photo editing tools as capable – and convincing – as possible. After all, that's what the company's customers are paying for. Nonetheless, the company says it's also looking at the next stages of this research, improving how well neural networks can keep digital images honest – "and to identify and discourage misuse."