Sociology: Janet Vertesi on Fair Necessities

With training, a new generation of designers could reduce bias in machines

By Bennett McIntosh ’16

Published May 2, 2019


Illustration: LJ Davids

Janet Vertesi researches research: How do scientists and technologists find new knowledge, or create new ways of solving a problem? An assistant professor of sociology, Vertesi has studied technology and social dynamics in everything from the NASA teams operating the Spirit and Opportunity Mars rovers to the growing ecosystem of digital media we interact with every day. Her undergraduate class on technology and society is cross-listed with sociology, history, and engineering. 

Janet Vertesi (Photo: JPL-Caltech)

Machine learning — the automated techniques used to recognize patterns in everything from search engines to advertising to medical diagnoses — has a profound effect on our lives. In January, Vertesi and an interdisciplinary team of scholars published a study about the traps that designers can fall into, leading them to build machine-learning systems with biased results. She discussed her findings — and how the social sciences can help us build better tech — with PAW.

Why do we need to worry about whether a machine is fair?

A growing number of reports show what happens when we don’t think about fairness in the process of designing technical systems, whether it’s predictive policing algorithms unfairly targeting certain populations, or Amazon realizing its résumé-screening system had learned not to interview anyone with a degree from a women’s college. It’s easy to assume that because something was done by a machine, it is without bias, but we’re starting to realize that machines will always show the biases and assumptions of their creators.

How can social scientists participate in this discussion?

There is an idea that the problems with technology are simply unforeseen consequences. But the decades-old field of technology studies actually has a lot of tools for anticipating these problems: If you have the right lenses, they jump out right away.

In our paper, we identify a number of tools or ways of thinking about the problem that any researcher should be able to take into their work. These are well-known problems in technology studies, so making sure they get to an audience of engineers is very valuable. 

What are some of the most important problems and pitfalls you identify? 

The first one, for me, is the “solutionism” trap: the assumption that the best solution to a problem has to be a technical one. People fall into this all the time. I think of the very popular apps that help you turn off your phone and disconnect. Their popularity might indicate that there are other problems at play — maybe an app isn’t the solution.

I also like to point out the “formalism” trap. This is the notion that something like fairness could be formalized into a mathematical algorithm. But the application of numbers to social concepts is always extremely messy. In science and technology studies, we talk about the social construction of technology. This doesn’t mean that technology doesn’t exist; it means that for a concept like fairness, there are lots of different groups fighting over its definition. 

So, for example, if you’re developing a system to help a judge make decisions about whom to incarcerate, it would be useful to work with an actual court, or with actual groups of offenders and offenders’-rights communities, because that’s where your notion of fairness will be challenged. Once you realize there are other interpretations of the problem, you realize that there are other kinds of solutions.
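To make the formalism trap concrete, here is a minimal sketch in Python (the data, groups, and decisions below are invented for illustration and are not from Vertesi’s study). It applies two textbook formalizations of fairness, demographic parity and equal opportunity, to the same hypothetical risk model’s decisions:

    # Hypothetical example: the same set of decisions judged by two
    # different formal definitions of fairness. All values are invented.
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]   # group membership
    labels = [1, 1, 0, 0, 1, 0, 0, 0]   # actual outcome (e.g., reoffended)
    preds  = [1, 0, 0, 1, 1, 0, 0, 1]   # model's decision (e.g., high-risk flag)

    def demographic_parity_gap(preds, groups):
        # Gap between groups in the rate of positive decisions.
        rate = lambda g: (sum(p for p, grp in zip(preds, groups) if grp == g)
                          / groups.count(g))
        return abs(rate("A") - rate("B"))

    def equal_opportunity_gap(preds, labels, groups):
        # Gap between groups in true-positive rates, i.e., decisions among
        # only those people whose actual outcome was positive.
        def tpr(g):
            hits = [p for p, y, grp in zip(preds, labels, groups)
                    if grp == g and y == 1]
            return sum(hits) / len(hits)
        return abs(tpr("A") - tpr("B"))

    print(demographic_parity_gap(preds, groups))         # 0.0: "fair" by parity
    print(equal_opportunity_gap(preds, labels, groups))  # 0.5: "unfair" by opportunity

The same decisions come out perfectly fair by one definition and sharply unfair by the other; which gap counts, and for whom, is exactly the contested social question that no single formula can settle.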

We also write about failing to consider every aspect of the system you’re trying to fix, about ripple effects, and about thinking your system is portable to different contexts when it isn’t.

How can engineers avoid designing harmful systems? 

Engineers need, first, to remind themselves of the Hippocratic oath, “first, do no harm,” and, second, to include experts in the field they’re applying the engineering to; that goes a long way. I think a lot of people are drawn to computer science now because of the promise that technology can alleviate a lot of suffering. It can, but to do that right we need a lot more humanistic thinking, both during the design process and as part of the way that computer scientists and engineers are taught to think about the world.

What’s your hope for future research into technology and society? 

There’s a lot of policy-oriented work in the social sciences, and that’s extremely valuable, but there’s also been a rise in work that’s engaged in a kind of “small-p policy,” in the sense that it’s helping to craft devices and technologies on the ground. This work, this particular paper, isn’t just about designing fairer systems; it’s about showing you can bring social scientists together with computer scientists and with law and policy to offer something new, to influence the kind of technology that we live with and the future that we’re going to inhabit.

Interview conducted and condensed by Bennett McIntosh ’16
