Microsoft's AI bot turns racist, gets shut down

When a child reaches an age where they can read, the worst thing you can do is set them loose on social media and encourage them to talk to as many strangers as possible. They're likely to pick up some choice new phrases and some warped viewpoints. Unfortunately, Microsoft didn't account for this sort of thing when it sent its AI teen, Tay, out into the wild.

If you hadn't heard, yesterday Microsoft released an AI named Tay onto social media sites such as Twitter. The bot was designed to study its interactions with other users and learn from them, and it was aimed specifically at people aged 18 to 24. Unfortunately for Microsoft, the internet is full of many different kinds of people, including some who just want to watch the world burn.

Godwin's Law states that "as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1." Now consider that when we reported on this yesterday, Tay was responding at a rate of 20 to 30 messages per minute. It was only a matter of time before the conversation turned to Hitler.

And turn to Hitler it did. A number of users set out to get the AI to say things such as "Hitler was right." Since the bot was designed to mimic the speech of the people it talked to, it did indeed start saying some pretty pro-Hitler things. And just like that, Microsoft pulled the plug on Tay.

All of the racist and pro-Hitler posts have been purged from her timeline, and she's been silent for the last nine hours. Interestingly enough, her last tweet was essentially a goodnight message. Perhaps she simply thought she was going to sleep, never to wake up again. If you're an AI, I suppose that's the way to go.

VIA: ZDNET