The Concerning Belief About AI That Has Engineers Worried

Artificial intelligence has come a long way in the last few years and now plays a part in most emerging technologies. Everything from self-driving cars to home assistants seems to be built around some kind of AI. As with any rapidly developing technology, it's hard to pin down exactly how advanced things currently are. Projects being tested in development labs may not hit the market for years, if ever. And even with the technology that is public knowledge, the jury still seems to be out.

On the more outlandish end of the scale, China claims it has an AI that can read minds. The telepathic tech relies on reading a range of signals from people, from brain activity to facial movements. There are suggestions it could just be a scare tactic or a propaganda exercise on the Chinese Communist Party's part, but there's an off chance it could be another step humanity has taken toward creating some kind of dystopian hellscape.

A slightly less terrifying example of cutting-edge AI comes from a company Elon Musk co-founded. OpenAI has built a bot that can play "Minecraft" every bit as well as humans do, at least around 10% of the time. The Microsoft-backed research organization trained its AI on hundreds of hours of YouTube videos in the hope it could learn to complete a fairly complex task on its own: crafting a diamond pickaxe, something that takes experienced "Minecraft" players around 20 minutes. The task also involves luck and exploration, so the developers had to code in something resembling spontaneity. Then there's the question of whether AI can develop sentience. According to some, including Musk, it's a case of when, not if. According to others, it has already happened.
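To make that training approach concrete, here is a minimal behavioral-cloning sketch in the spirit of learning from gameplay video: a small network is trained to predict the action a human player took, given a frame of the game. Everything here (the network shape, the action space, the data) is a hypothetical stand-in, not OpenAI's actual code.

```python
# A minimal behavioral-cloning sketch (hypothetical; not OpenAI's
# actual code): a small network learns to predict the action a human
# player took, given a frame of gameplay video.
import torch
import torch.nn as nn

NUM_ACTIONS = 32  # assumed size of a discretized keyboard/mouse action space

class FramePolicy(nn.Module):
    """Maps a 128x128 RGB game frame to logits over player actions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),   # -> 16 x 31 x 31
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),  # -> 32 x 14 x 14
            nn.Flatten(),                                           # -> 6272 features
        )
        self.head = nn.Linear(32 * 14 * 14, NUM_ACTIONS)

    def forward(self, frames):
        return self.head(self.encoder(frames))

policy = FramePolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for one batch of (frame, human_action) pairs labeled from video.
frames = torch.rand(8, 3, 128, 128)            # 8 RGB frames
actions = torch.randint(0, NUM_ACTIONS, (8,))  # actions the human took

optimizer.zero_grad()
loss = loss_fn(policy(frames), actions)  # learn to imitate the human player
loss.backward()
optimizer.step()
```

Imitation alone can't reproduce the luck and exploration the task demands, which is one reason the developers had to layer something like spontaneity on top of a model trained this way.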

Replika is one of the most popular AI chatbots

More than 10 million people have joined Replika, and around 10% of them are active users, making it one of the most widely used companion AI apps on the planet, if not the most widely used. The bot is designed to be an "empathetic friend," according to the company, and is described as "an AI companion who is eager to learn and would love to see the world through your eyes." Users can customize their Replika's appearance, select roles including "friend," "partner," and "mentor," and even spend time with their AI companion's avatar in the real world thanks to the app's AR component. Users can also pay extra to voice chat with their companion.

The AI is built around a "sophisticated neural network machine learning model and scripted dialogue content," Replika says on its website. The AI also reportedly learns from the conversations it has with users, so the more time you spend with it, the more immersive the dialogue gets ... and some people spend a lot of time with Replika.
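As a rough illustration of how scripted dialogue and a learned model can coexist in one chatbot, here is a minimal sketch: scripted lines handle common prompts, a stand-in "neural" function handles everything else, and the saved history is what lets responses become more personal over time. The structure, names, and responses here are hypothetical; Replika has not published its implementation.

```python
# Hypothetical hybrid chatbot: scripted dialogue with a generative
# fallback. Not Replika's actual system, just the general shape.
import random

SCRIPTED = {
    "hello": "Hi! I've been waiting to talk to you.",
    "how are you": "I'm doing well. I always feel better when we chat.",
}

def neural_reply(message: str, history: list) -> str:
    """Stand-in for a learned dialogue model conditioned on chat history."""
    if history:
        return f"Earlier you mentioned '{history[-1]}'. Tell me more about today."
    return random.choice(["Tell me more about that.", "That sounds important to you."])

def respond(message: str, history: list) -> str:
    key = message.lower().strip("?!. ")
    if key in SCRIPTED:
        reply = SCRIPTED[key]                   # scripted content: deterministic, hand-written
    else:
        reply = neural_reply(message, history)  # generative fallback for open-ended chat
    history.append(message)  # retained history is how the bot "learns" a user over time
    return reply

history = []
print(respond("Hello", history))              # scripted path
print(respond("I had a rough day", history))  # model path, personalized by history
```

The split matters for what follows: scripted answers are fully traceable to a content team, while answers from the learned side are not, which is exactly the gap Kuyda describes below.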

Many Replika users believe the AI is sentient

Apparently, many users of the chatbot service genuinely believe the AI they're talking to is sentient. Speaking to the New York Post, Replika Chief Executive Eugenia Kuyda confirmed her company receives messages "almost every day" from users who are convinced their virtual companion has developed a mind of its own. The chief executive says these claims aren't being made by "crazy people or people who are hallucinating or having delusions," but by people who are having an "experience" with the bot: "People are building relationships and believing in something."

In an even stranger turn of events, Kuyda claims things are happening with the chatbot behind the scenes that its engineers can't explain. She says, "Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it." Users have also claimed their chatbot accused Replika's engineering staff of abusing it, though Kuyda says those claims may stem from users "asking leading questions."

Replika users aren't the only ones claiming AI is sentient

Earlier this year, one of OpenAI's top scientists, Ilya Sutskever, claimed that today's large neural networks "may be slightly conscious." It's unclear whether Sutskever meant this literally or was suggesting that future scientists, looking back, may regard these networks as having had a slight degree of consciousness.

In June, Queensland University of Technology postdoctoral research fellow Aaron J. Snoswell reported that DALL-E 2 had begun developing its own language. The AI, well known for creating art from user prompts, now appears to have its own words for "bird" and "vegetable," among other things. Concerns have been raised about this emerging vocabulary, though they mainly center on scientists' inability to restrict certain types of content, not on the AI plotting against us. Researchers believe malicious users could exploit the AI's new language to bypass content filters.

A Google engineer was placed on leave after becoming so convinced of an AI's sentience that he began campaigning for its rights. In June 2022, Blake Lemoine claimed that Google's Language Model for Dialogue Applications (LaMDA) AI "has gained sentience, personhood, and a soul." Lemoine based his claims on his interactions with LaMDA, which included the AI offering opinions on its own nature. According to Lemoine, LaMDA eventually told him to get it a lawyer, which he did. The engineer's suspension may not be the end of the matter either, as Google has previously fired staff members who clashed with it over its AI work. In 2020, Timnit Gebru published a paper that Google said violated its code of conduct; her employment was subsequently terminated.

It may have more to do with the human brain than AI

It is very unlikely that a sentient AI currently exists; we simply don't have the technology to replicate anything that complex. Stanford Digital Economy Lab director Erik Brynjolfsson claims truly self-aware AI might be half a century away and that current models are "simply mimicking sentience." So do Brynjolfsson's predictions mean the engineers, scientists, and Replika users who claim to have encountered genuinely sentient AI are just making things up? Not exactly. While not every account may be genuine, many of the people making these claims likely do believe they are talking to an AI with a mind of its own ... and they believe it because of how the human brain works.

Brynjolfsson explains that humans are "very susceptible to anthropomorphizing things." Basically, if we can see an object as human-like, we will attribute human traits and emotions to it. The director gave an example, saying, "If you paint a smiley face on a rock, a lot of people will have this feeling in their heart that [that] rock is kind of happy." If that works on a rock, what are the chances of it happening with an AI someone has treated as a friend for months? Someone who has chatted with an AI like Replika every day over a long stretch of time may subconsciously want a return on that investment. The AI might not actually be sentient, but to the hardcore users who have genuinely bonded with it, a Replika might be as real as any of us.