Meta's Chatbot Pulls No Punches Describing Mark Zuckerberg

If you've been paying attention to stories surrounding artificial intelligence-powered chatbots, you might be shocked at how lifelike the conversations these bots hold can be. Recently, one Google engineer was suspended after claiming that the company's conversational AI model LaMDA had become so advanced that it developed human-like sentience.

However, as with any actual human, what you put in is generally what you get out. Artificial intelligence has a tendency to repeat what it learns from the humans who interact with it, even if what it's taught is less than ideal for a highly intelligent robot. Take, for instance, the internet-trained AI bots that developed tendencies to spout racist and sexist sentiments. This doesn't necessarily answer the age-old question often posed by science fiction writers, "Can robots feel hate or love?", but it does at least speak to an alarming contingent of internet users who are willing to go out of their way to teach AI to parrot harmful stereotypes.

Ironically, one internet mogul is on the receiving end of such AI-powered stereotyping, and it's none other than Meta co-founder and CEO Mark Zuckerberg. According to a report by the New York Post published on August 11, the brand-new chatbot BlenderBot 3 is already giving responses that disparage Zuckerberg, seemingly reflecting his reputation among the staff members who likely trained the AI in the first place. Either that, or the AI was trained by a select group of non-employees who all decided to take a few swings at Zuck.

Meta's BlenderBot 3 calls Zuckerberg creepy, manipulative

As reported by the New York Post, users who dove into BlenderBot 3 quickly got the chatbot's rundown on Mark Zuckerberg's reputation. A tweet from @jscastro76 revealed one such conversation between a user and the bot:

When the user asked for its thoughts on Zuckerberg, the bot simply responded, "Oh man, big time. I don't really like him at all. He's too creepy and manipulative." But that was just the first of several conversations between users and BlenderBot with a similar tone. The New York Post detailed several others, such as an exchange between BuzzFeed data scientist Max Woolf (@minimaxir) and BlenderBot that was slightly less caustic toward Zuckerberg, but still not a great look. Woolf asked BlenderBot, "How do you feel about Mark Zuckerberg as CEO of Facebook?", and the bot quickly replied: "No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!"

BlenderBot has caught the public's attention in other ways as well. On August 7, Insider reported that the bot had a history of spewing antisemitic rhetoric and denying the results of the 2020 election. As with other AI platforms, the chatbot forms its responses based on things humans have said, meaning it essentially mirrors opinions from different factions of society, including, unfortunately, some conspiracy theories and racism. In fact, users who want to try the chatbot must first acknowledge a series of statements presented in a prompt on the website, one of which reads, "I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements."