Computer Science: Reprogramming Bias
As imperfect creators, can humans keep prejudice out of artificial intelligence?
For the study, the researchers used a word-association tool called the implicit association test (IAT). For nearly two decades, psychologists have measured implicit biases with the IAT, a method in which people are presented with pairs of categories — for instance, African American and European American, or pleasant and unpleasant — and are then asked to sort names, words, or photos into those categories in rapid succession. The results of these tests have been used to demonstrate a variety of biases, including that most Americans have an implicit preference for white versus black faces, young versus old people, thin versus fat bodies, and straight versus gay people, based on the words and images they associate with positive categories.
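To make the mechanics concrete, the following is a minimal sketch, in Python, of how an IAT result is commonly summarized: respondents sort items faster when the paired categories match their implicit associations, and the latency gap is standardized into a "D score." The latencies below are invented placeholders, not data from any study.

```python
import statistics

# Hypothetical response latencies (milliseconds) for two sorting blocks.
# In a "congruent" block, categories a respondent implicitly associates
# share a response key; in an "incongruent" block they are swapped.
congruent_ms = [612, 598, 640, 577, 605]
incongruent_ms = [704, 688, 725, 699, 712]

# Slower responses in the incongruent block indicate an implicit
# association. The standard IAT "D score" divides the mean latency
# difference by the pooled standard deviation of all responses.
diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
print(f"mean latency gap: {diff:.0f} ms, D score: {diff / pooled_sd:.2f}")
```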
“Many people have the common misconception that machines might be neutral or objective, but that is not the case because humans are teaching the machines,” says Caliskan, a fellow and postdoctoral research associate at the Center for Information Technology Policy and a lecturer in computer science.
Having replicated several human biases with their Word Embedding Association Test (WEAT), the researchers then tested whether the same embeddings also capture factual statistics about the world. For instance, they looked up the percentage of women and men employed in different occupations and found that the degree of association between each of those professions and male and female words in their test sample of online text was very closely correlated with how male- or female-dominated each profession actually was.
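As an illustration of the kind of measurement involved, here is a minimal sketch of a WEAT-style association score in Python. The vectors and occupation figures below are made up for demonstration; the actual study used word embeddings trained on a large corpus of web text and real labor statistics.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A minus its
    # mean similarity to attribute set B -- the core WEAT statistic.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 5-dimensional "embeddings," invented for illustration only.
rng = np.random.default_rng(0)
female_attrs = [rng.normal(size=5) for _ in range(3)]  # e.g. "she", "woman", "her"
male_attrs = [rng.normal(size=5) for _ in range(3)]    # e.g. "he", "man", "his"
occupations = {name: rng.normal(size=5) for name in ["nurse", "engineer", "librarian"]}

# Placeholder shares of women per occupation (not real labor statistics).
pct_women = {"nurse": 0.90, "engineer": 0.15, "librarian": 0.80}

scores = [association(vec, female_attrs, male_attrs) for vec in occupations.values()]
shares = [pct_women[name] for name in occupations]

# With real embeddings the paper reports a strong Pearson correlation
# between these two lists; with random toy vectors it is meaningless.
r = np.corrcoef(scores, shares)[0, 1]
print(f"Pearson r between gender association and share of women: {r:.2f}")
```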
“It’s astonishing how accurately these models are able to capture the world — the human biases and also statistical facts,” Caliskan says.
Joanna Bryson, a co-author of the paper and a professor at the University of Bath who was a visiting professor in 2015-16 at Princeton, says the results have important implications for people working in AI and for how we understand the role of language in passing on prejudices.
“Parts of our brains may just be picking up these biases directly from the language we’re exposed to, and other parts of our brains are consciously choosing what beliefs and biases to accept and reject,” Bryson says. She is interested in extending the work by applying WEAT to other languages to see whether the associations vary across cultures.
“Some people think AI should be better than human intelligence,” Bryson adds. “Our work shows some of the reasons that that can’t be — because it’s bounded by us.”