It’s easier than ever to modify an image, and that’s a big problem. In the age of social media and instant sharing, any given image can quickly go viral, reaching millions of viewers and potentially shaping their perceptions, beliefs, or attitudes. That makes it more important than ever to identify and denounce fake photos, ones altered to show something that didn’t happen, and artificial intelligence may be the solution.
Though edited images aren’t inherently bad, they present a huge problem when passed off as genuine to newspapers or news broadcasters. Increasingly sophisticated image-editing software like Photoshop enables just about anyone to tamper with a photo, and in the hands of a user with even an average level of skill, the end result may be an image that can’t be identified as “fake” by the human eye.
This doesn’t mean the edit is a perfect one, though. Traces of edits remain, and with tools that know how to identify them, it’s possible to isolate those imperfections and reveal the truth. Such is the nature of Adobe’s recently discussed neural network, which was trained on manipulated images to learn how to spot other tampered photos.
How does it figure out which images are authentic and which are modified? By analyzing the image’s noise profile — the grain or colored specks that result from the image sensor — and determining whether any given portion of the image has a different pattern. A section whose noise profile doesn’t match the rest of the image can be deduced to have been spliced in.
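To make the idea concrete, here is a minimal sketch of noise-profile analysis. It is not Adobe’s method: a crude high-pass residual (each pixel minus a 3×3 box blur) stands in for a real sensor-noise model, and the image is divided into patches whose residual variance is compared against the rest of the frame. The function names and threshold are assumptions chosen for illustration.

```python
import numpy as np

def noise_map(image, patch=8):
    """Estimate local noise variance in non-overlapping patches.

    A pixel-minus-blur residual roughly isolates high-frequency noise;
    production forensic tools use far richer noise models.
    """
    img = image.astype(float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur built from nine shifted views of the padded image.
    blur = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    residual = img - blur
    rows, cols = h // patch, w // patch
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = residual[i * patch:(i + 1) * patch,
                             j * patch:(j + 1) * patch]
            out[i, j] = block.var()  # noise energy in this patch
    return out

def flag_spliced(image, patch=8, z_thresh=3.0):
    """Return a boolean grid marking patches whose noise variance is a
    statistical outlier relative to the rest of the image."""
    nm = noise_map(image, patch)
    z = (nm - nm.mean()) / (nm.std() + 1e-9)
    return np.abs(z) > z_thresh
```

Run on a photo where one region was pasted in from a noisier (or cleaner) source, the outlier patches light up; a learned detector like Adobe’s infers these statistics from training data rather than hand-coding them.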
The system doesn’t stop there: it also looks for other signs of tampering the researchers refer to as artifacts. These can include imperfect edges where a visual element was spliced into the original photo, as well as contrast levels that don’t match up. Though the company hasn’t revealed plans to release software that utilizes this technology, it wouldn’t be surprising to see it offered at some point as a forensic tool.
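The contrast-mismatch artifact can be illustrated with a similarly simple check: compare the RMS contrast (intensity standard deviation) inside a suspected region against the rest of the frame. The helper below and its tolerance are hypothetical, a toy version of the idea rather than anything from Adobe’s system.

```python
import numpy as np

def contrast_mismatch(image, region_mask, tol=0.5):
    """Compare RMS contrast inside a suspected region against the rest.

    region_mask is a boolean array marking the suspect area. Returns
    (inside, outside, mismatch), where mismatch is True when the two
    contrasts differ by more than `tol` as a ratio.
    """
    img = image.astype(float)
    inside = img[region_mask].std()    # contrast of the suspect region
    outside = img[~region_mask].std()  # contrast of everything else
    ratio = inside / (outside + 1e-9)
    return inside, outside, abs(ratio - 1.0) > tol
```

A pasted-in element whose contrast wasn’t matched to the scene would trip this check; a real detector would combine many such cues, along with edge analysis, rather than rely on any single statistic.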