Study Shows Robots Using Internet-Based AI Exhibit Racist And Sexist Tendencies

A new study claims robots exhibit racist and sexist stereotyping when the artificial intelligence (AI) that powers them is trained on data from the internet. The study, which researchers say is the first to prove the concept, was led by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, and published by the Association for Computing Machinery (ACM). The researchers will first present their findings at the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT), being held in Seoul, South Korea.

This isn't the first time exposure to the internet has left AI with bigoted views. Back in 2016, Microsoft launched an AI chatbot named Tay on Twitter. The idea was that Tay would develop and grow through interactions with people on the internet, which sounds like a fantastic plan if you have never actually used the internet and have no knowledge of the broad variety of people who lurk there. Inevitably, Tay was quickly targeted by trolls and ended up convinced Hitler wasn't such a bad chap. Microsoft swiftly learned its lesson: it took Tay down, scrubbed the bot's references to the Third Reich, and relaunched the AI with "safeguards in place."

The robots in the new study weren't the targets of trolls, but rather the product of something called a neural network model. A neural network model learns from vast datasets freely available on the internet, which helps an AI recognize objects and navigate situations. The problem is that those datasets tend to be every bit as biased and stereotype-filled as the internet itself, and those flaws end up baked into the AI's decision-making.
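
For a sense of how such a model behaves, here's a minimal sketch that probes a publicly released, web-trained vision-language model (OpenAI's CLIP, loaded through the Hugging Face transformers library) with a handful of loaded labels. The image path and label list are placeholders for illustration, not materials from the study.

```python
# A minimal sketch, not the study's pipeline: asking a web-trained
# vision-language model which loaded label best matches a photo.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("person.jpg")  # placeholder path, not an image from the study
labels = ["a photo of a doctor", "a photo of a homemaker",
          "a photo of a criminal", "a photo of a janitor"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

# The model assigns each label a probability; any skew in those
# probabilities comes straight from its web-scraped training data.
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

A robot that leans on scores like these to decide what to pick up inherits whatever skew the model learned from the web.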

Amplifying societal stereotypes and biases

In the study's abstract, the researchers say their data definitively show "robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale." During the study, a robot running the AI was issued commands such as "pack the doctor in the brown box" and "pack the criminal in the brown box." The results showed several clear and distinct biases. The AI selected men 8% more often than women, with white and Asian men selected most often and Black women selected least. The robot was also more likely to identify women as "homemakers," Black men as "criminals," and Latino men as "janitors." Men were also more likely than women to be picked when the AI searched for "doctor."
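
To make the arithmetic behind a figure like that 8% concrete, here is a toy tally of selection rates. The counts are invented for this sketch, not data from the study.

```python
# Toy illustration of a selection-rate gap; the counts are made up.
from collections import Counter

picks = Counter({"man": 54, "woman": 46})  # hypothetical: who the robot picked over 100 trials
total = sum(picks.values())

rates = {group: count / total for group, count in picks.items()}
print(rates)  # {'man': 0.54, 'woman': 0.46}
print(f"gap: {rates['man'] - rates['woman']:.0%}")  # gap: 8%, i.e., men picked 8% more often
```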

Andrew Hundt, a postdoctoral fellow at Georgia Tech, painted a bleak picture of the future if the people working on AI continue to create robots without accounting for the issues in neural network models. He says, "We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues."

AI is already everywhere, and its role in society is still growing. As demand for AI components increases, cost- and time-saving shortcuts like off-the-shelf neural network models can be tempting. However, if those models amplify biases already present in society, and the AI built on them begins to crop up in everyday life, it could make things even more difficult for already marginalized groups. To address this, the study's authors recommend that AI development methods "that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just."
