Fawkes photo tool lets anyone secretly 'poison' facial recognition systems

The Internet is packed with billions of photos, many of them lost in the depths of abandoned accounts and defunct hosting services. A huge amount of this media remains readily discoverable and easily downloaded, however, and thanks to the rise of social media platforms, it is easier than ever to link those photos to the people featured in them. Unbeknownst to the public, some companies are quietly downloading this content en masse and using it to build secret facial recognition models of the subjects...but there's a solution, and it is called Fawkes.

Facial recognition models are what facial recognition systems use to identify a particular person. The more images a model is trained on, the more accurately it can pick that person out of arbitrary videos and images, including real-time surveillance footage. Many people are unaware that these models of them exist at all, and the privacy implications are truly terrifying.

Companies and governments can train their models by simply harvesting the images that people freely post of themselves online, often alongside mentions or tags that directly link the person in the image to a specific identity. This mass of information is fed into artificial intelligence systems, creating a surveillance capability that won't be easy to stop.

Researchers at the University of Chicago's SAND Lab have taken a step toward addressing this issue by giving the average person a measure of control over their own images. The team has publicly released Fawkes, a simple, free tool that uses AI to imperceptibly tweak an image so that, while it appears unchanged to a human, any facial recognition model trained on it will be 'highly distorted.'
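For the technically curious, the core idea can be sketched in a few lines of Python. This is only a conceptual illustration, not Fawkes' actual algorithm: the real tool computes its perturbation by optimizing against a deep feature extractor under a perceptual budget, whereas the `cloak` function and the random perturbation below are hypothetical stand-ins.

```python
import numpy as np

def cloak(image: np.ndarray, perturbation: np.ndarray,
          epsilon: float = 0.03) -> np.ndarray:
    """Apply a perturbation to an image, clipped so that no pixel
    moves by more than `epsilon` on a 0-1 scale, which is small
    enough to be effectively invisible to a human viewer."""
    bounded = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + bounded, 0.0, 1.0)

# Toy demonstration with a random "photo" and a random perturbation.
rng = np.random.default_rng(0)
photo = rng.random((224, 224, 3))      # placeholder for a real photo
delta = rng.normal(0, 1, photo.shape)  # Fawkes derives this by optimization,
                                       # not at random, so that the image's
                                       # deep features shift toward another
                                       # identity while pixels barely change

cloaked = cloak(photo, delta)
print(f"max pixel change: {np.abs(cloaked - photo).max():.3f}")  # <= epsilon
```

The key property is that every pixel moves by at most a tiny amount, so the cloaked photo looks identical to the original even though, in the real tool, its representation inside a recognition model moves dramatically.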

The researchers explain that by feeding the system these images, someone is essentially mounting a type of 'attack' that poisons the system by corrupting its understanding of what that person looks like. Each 'poisoned' image fed to the AI reduces its ability to accurately recognize that person in images and videos, making the resulting model all but useless to any company or government that may want to use it.
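The effect of that poisoning can be shown with a toy experiment, again heavily hedged: real face recognizers use learned embeddings and far more sophisticated matching, and the 128-dimensional vectors and centroid matcher below are hypothetical. The point is simply that a model trained on shifted features learns the wrong region of feature space, so a genuine photo of the person no longer matches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 128-d face embeddings: the victim's true feature
# cluster, and the decoy region that cloaked images are shifted toward.
true_center = rng.normal(0, 1, 128)
decoy_center = rng.normal(0, 1, 128)

# A tracker scrapes 50 cloaked photos, whose features sit near the
# decoy rather than near the victim's real appearance.
training_feats = decoy_center + rng.normal(0, 0.1, (50, 128))

# The model "learns" the victim as the centroid of what it scraped.
learned_identity = training_feats.mean(axis=0)

# A fresh, uncloaked photo of the victim, e.g. from a surveillance camera.
probe = true_center + rng.normal(0, 0.1, 128)

def dist(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

# The probe sits far from the poisoned model's idea of the victim,
# so the identification fails.
print(f"probe vs learned identity: {dist(probe, learned_identity):.2f}")
print(f"probe vs true features:    {dist(probe, true_center):.2f}")
```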

Fawkes isn't a simple image manipulator, but a more sophisticated AI-driven application whose changes go undetected by the facial recognition systems being trained, meaning their operators won't be able to identify and remove the offending 'poisoned' images. Unable to figure out which images are 'cloaked,' as the researchers call them, these companies would have to put considerable effort into sorting through all of their harvested content to find the images causing the distortion.

Testing revealed a very high success rate in cloaking images, with the researchers reporting better than 95 percent protection against state-of-the-art facial recognition models, according to the study, which can be found here. The Fawkes tool was recently updated and is available for anyone to download on Windows, Linux, and macOS here.
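Once installed, the tool runs from the command line. The invocation below is a sketch based on the project's documentation around the time of release; the exact flag names and available modes may differ between versions, so treat the project README as the authority.

```
# Hedged sketch of command-line usage; verify flags against the
# current Fawkes README before relying on them.
pip install fawkes
fawkes -d ./my_photos --mode low
```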