Robots

Machines Learn To Stereotype Humans Just Like Humans

Researchers find AI systems are ready and willing to adopt racial and gender biases

Photo Illustration: Tara Jacoby
Apr 17, 2017 at 2:00 PM ET

Scientists are more convinced than ever that artificial intelligence systems exposed to racial, cultural, and gender stereotypes will adopt and reflect those biases in their decision-making.

In a new report published in Science, researchers at Princeton University’s Center for Information Technology Policy present evidence that algorithms learn stereotypes from word associations the same way humans do. Specifically, they found that a test that has previously revealed race and gender bias in humans produced the same results when applied to machine learning systems trained on text that reflects those biases.

The original bias test, called the Implicit Association Test, or IAT, has been used to document human biases by recording the way test subjects respond when asked whether sets of words are similar or different. For example, one IAT study found that female names are more commonly associated with words that relate to family and home life than with words that relate to careers. Another found that names of European-American origin were more often associated with “pleasantness” than “unpleasantness,” while the reverse was true for names of African-American origin.

Using a method based on the IAT, called the Word-Embedding Association Test, the researchers reproduced the same results in machines. In other words, the AI systems picked up gender and racial stereotypes from the statistical associations between words in the text they were trained on. The researchers also note that they tested AI systems already in use today, meaning those cultural stereotypes can easily propagate through technologies that are already widespread.
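The Word-Embedding Association Test measures bias as a difference of cosine similarities between word vectors: how much more strongly one set of target words associates with one attribute set than another. Below is a minimal Python sketch of that score. The word lists and randomly generated placeholder vectors here are illustrative assumptions, not the researchers’ data; the published test used pre-trained embeddings (such as GloVe) and a permutation test to judge statistical significance.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): how much more word w associates with attribute set A than with B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_score(X, Y, A, B, emb):
    """WEAT test statistic: differential association of target sets X and Y
    with attribute sets A and B."""
    return (sum(association(x, A, B, emb) for x in X)
            - sum(association(y, A, B, emb) for y in Y))

# Hypothetical word lists for illustration only. In practice `emb` would map
# words to vectors from a pre-trained model rather than random placeholders.
X = ["programmer", "engineer", "scientist"]   # target set 1
Y = ["nurse", "teacher", "librarian"]         # target set 2
A = ["man", "male", "he"]                     # attribute set 1
B = ["woman", "female", "she"]                # attribute set 2

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in X + Y + A + B}  # placeholder vectors

# A positive score means X leans toward A and Y toward B in the embedding space.
print(weat_score(X, Y, A, B, emb))
```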

“Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes,” the researchers conclude. “Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.”

More broadly, the researchers say their results hint at the nature of bias and stereotypes themselves. The findings don’t “prove” that race and gender stereotypes are true; as co-author Joanna Bryson explains in a blog post, they suggest instead that bias is derived from historical patterns of language that have become culturally embedded through repetition over time.

“It creates a credible hypothesis that stereotypes are just the regularities that exist in the real world that our society has decided we want to change,” wrote Bryson.