This Terrifying AI-Powered Tech Can Practically Read Minds

It's an inescapable fact that AI is going to keep becoming more sophisticated. When Dr. Chris Arnold asked ChatGPT on LinkedIn in January 2023 whether AI will "take over the world," the chatbot reassuringly responded that "While AI has the potential to greatly impact many industries and aspects of society, it is ultimately a tool created and controlled by humans." While this is true, it's also inevitable that AI-human interactions will grow more sophisticated along with it.

In May 2023, AI pioneer Geoffrey Hinton warned of the dangers AI poses, and with the implications of certain projects in mind, it's clear that Hinton has a point. Two months before his warning, news emerged of a Stable Diffusion-based technique that scientists may eventually be able to use to "read" what people are thinking.

Yu Takagi and Shinji Nishimoto, of the Center for Information and Neural Networks in Osaka and Osaka University, used Stable Diffusion to interpret signals from a functional MRI scanner. The resulting images were uncannily similar to objects and pictures the owners of the scanned brains had previously seen. From SlackGPT making our workdays more convenient to "seeing" the same images human volunteers saw, generative AI seems to be able to do just about anything.

How does Stable Diffusion read MRI-scanned minds?

Takagi and Nishimoto's December 2022 paper "High-resolution image reconstruction with latent diffusion models from human brain activity" notes that generative AI has previously been used to investigate the connection between brain signals and the images that prompted them. The issue has been that "reconstructing realistic images with high semantic fidelity is still a challenging problem."

To begin tackling this issue, the scientists were fortunate to have access to the Natural Scenes Dataset, a University of Minnesota project in which participants studied around 10,000 different images of objects and landscapes and answered questions about them. This study provided extensive data on the ways our brains react to certain sights and thought patterns, and it also gave Nishimoto and Takagi the information they needed to create a pair of AI programs that worked in tandem. Between them, the software analyzed participants' brain scans, the images themselves, and text descriptions of those images, and could then generate a new image similar to the one the participant had seen.
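At its core, the paper's approach fits simple linear models that translate brain activity into the internal representations Stable Diffusion already understands (an image latent and a text embedding), which the diffusion model then renders as a picture. The toy sketch below illustrates just that decoding step on synthetic data; all names, array sizes, and the ridge-regression choice are illustrative assumptions, not the authors' exact code.

```python
import numpy as np

# Toy illustration of the decoding idea: learn linear maps from fMRI
# voxel activity to the representations a diffusion model consumes.
# Everything here is synthetic and the dimensions are made up.

rng = np.random.default_rng(0)

n_samples, n_voxels = 800, 500   # training scans, voxels per scan
latent_dim, text_dim = 64, 32    # stand-ins for image-latent / text sizes

# Synthetic "ground truth": voxel activity is a noisy linear function
# of the representations of the image the participant saw.
z_image = rng.normal(size=(n_samples, latent_dim))   # image latents
c_text = rng.normal(size=(n_samples, text_dim))      # caption embeddings
mixing = rng.normal(size=(latent_dim + text_dim, n_voxels))
fmri = np.hstack([z_image, c_text]) @ mixing \
       + 0.1 * rng.normal(size=(n_samples, n_voxels))

def fit_ridge(X, Y, alpha=1.0):
    """Return W minimizing ||XW - Y||^2 + alpha * ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# One linear decoder per representation, trained per participant.
W_z = fit_ridge(fmri, z_image)
W_c = fit_ridge(fmri, c_text)

# Decoding a scan yields (z_hat, c_hat); a real pipeline would hand
# these to Stable Diffusion to generate the reconstructed image.
z_hat = fmri @ W_z
corr = float(np.corrcoef(z_hat.ravel(), z_image.ravel())[0, 1])
print("latent reconstruction correlation:", round(corr, 3))
```

The design point this captures is why the approach works at all: the heavy lifting of turning representations into pictures is done by a pretrained diffusion model, so only lightweight, per-subject linear maps need to be learned from the brain data.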

Using data from 1,000 of the original images, the tech replicated a majority of them remarkably well. The training process was extensive, though, and the models had to be fitted to each specific participant's brain; one person's data couldn't be used to decode another's. This sort of technology is still in its infancy, then, but with future refinement, its scope could broaden, and its potential to tell us all manner of frightening things about our brains could grow dramatically too.