AI Develops A 'Secret' Language That Researchers Don't Fully Understand: Here's What It Means For The Future

Artificial intelligence is already capable of doing things humans don't really understand. For instance, a team of Google researchers recently found itself in hot water over the emergence of a supposedly "sentient" artificial intelligence called LaMDA — the very same AI that compelled a Google engineer to risk his job to argue for its autonomy just a few weeks ago. If that sounds like something straight out of science fiction, you're certainly not alone in thinking so. It seems the future is already here to stay, regardless of how some might feel about the proliferation of artificial intelligence across the modern world.

AI is now improving at incredible speed. Take, for example, the several AI products that can convert practically any text into an array of images — created from scratch — by way of complex processes that tie words and images together as data points that exist in relation to one another. Given that internal machinery, it's perhaps unsurprising that new quirks of language are springing up inside these systems, according to The Conversation's Aaron J. Snoswell, who claims that the DALL-E 2 AI is already using a secret lexicon with its own words for nouns like "bird" and "vegetable." That's already a long way forward from another recent story of an AI that blew everybody's minds by writing its own beer and wine reviews.
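The "data points that exist in relation to one another" idea can be sketched in a few lines: systems in the CLIP/DALL-E family place both captions and images in a shared vector space and score matches by how closely the vectors point in the same direction. The vectors below are invented for illustration; a real model would produce them with learned text and image encoders.

```python
import math

def cosine_similarity(a, b):
    """Score how closely two embedding vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: related concepts get nearby vectors.
text_embeddings = {
    "a photo of a bird": [0.9, 0.1, 0.2],
    "a photo of a vegetable": [0.1, 0.9, 0.3],
}
# Pretend this vector came from running an image of a bird through an image encoder.
image_embedding_bird = [0.85, 0.15, 0.25]

for caption, vec in text_embeddings.items():
    score = cosine_similarity(vec, image_embedding_bird)
    print(f"{caption}: {score:.3f}")
```

In a real model the vectors have hundreds of dimensions, but the principle is the same: the bird caption lands closer to the bird image than the vegetable caption does, which is what lets text steer image generation.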

DALL-E 2's secret language is more like a window into AI's limitations

If an AI could create its own language entirely, that could surely spell uncertainty for the future. After all, nobody wants to let loose a self-replicating, language-encrypting AI that could go rogue and begin shutting down critical parts of our infrastructure (such as the internet). The good news is that researchers don't seem to believe that's the real threat with the experimental and largely inaccessible DALL-E 2 (which already has a counterpart available to the general public called DALL-E Mini).

Snoswell noted in his report that prompting the AI to generate images with captions attached produced strange, gibberish phrases — phrases that could then be fed back in to reliably produce predictable images of very specific things. There are a number of reasons why this could be happening. Snoswell suggested it could be a mixture of data from several languages informing the relationship between characters and images inside the model, or it could come down to how the model breaks words into tokens, with individual word fragments carrying meaning of their own.
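The tokenization hypothesis can be made concrete with a toy sketch. Models like DALL-E 2 don't see whole words; they split text into sub-word tokens, so even a nonsense word decomposes into familiar fragments that may each carry associations. The greedy longest-match tokenizer and tiny vocabulary below are invented for illustration — real systems use learned byte-pair-encoding vocabularies, and the nonsense word here stands in for the gibberish phrases Snoswell describes.

```python
# Toy vocabulary of sub-word pieces; a real BPE vocabulary has tens of thousands.
VOCAB = {"vi", "coo", "tes", "v", "i", "c", "o", "t", "e", "s"}

def tokenize(word, vocab):
    """Greedily split `word` into the longest matching vocabulary pieces."""
    tokens = []
    i = 0
    while i < len(word):
        for end in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:end]
            if piece in vocab:
                tokens.append(piece)
                i = end
                break
        else:
            raise ValueError(f"cannot tokenize {word!r} at position {i}")
    return tokens

# A made-up gibberish word still splits cleanly into sub-word tokens.
print(tokenize("vicootes", VOCAB))
```

The point of the sketch: if each fragment has picked up associations during training, a "word" the model has never seen can still map to a consistent concept — which would make the secret language less an invented code and more an artifact of how the model reads text.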

Snoswell went on to say that the concern isn't whether DALL-E 2 itself is dangerous, but that researchers are limited in their ability to block certain types of content. In principle, anyone could sidestep banned words and generate offensive content using the model's secret language, which isn't well understood yet — and that could be the problem that keeps DALL-E 2 out of the public's hands in its unrestricted form, at least for now.