Coming off a year and a half as White House deputy chief technology officer, computer science and public affairs professor Edward Felten is deeply involved in both the technical and policy communities. He serves as director of Princeton’s Center for Information Technology Policy, which recently announced a new initiative focused on artificial intelligence and related policy. He talked with PAW about future directions for AI research and technology policy.

Professor Edward Felten (Photo: Ricardo Barros)

What kinds of projects will the new AI initiative be looking at?

In many areas we have specific work already going on. I have a project that is trying to improve forecasting of which types of jobs will be affected by automation, and on what timescale, by looking in more detail at what AI systems are actually capable of. [Assistant professor] Arvind Narayanan and some of his collaborators have done work on understanding how AI language models absorb bias from human use of language.

You’re serving on the new Rework America Task Force, funded by foundations, that aims to modernize the labor market. How can policy initiatives help meet this goal?

The worry to me is not so much that there won’t be jobs, but that we’ll have a workforce whose skills are mismatched to the jobs that will exist. That leads to a couple of important questions: How do we make sure workers have the skills the future job market will demand, and how do we make sure workers are in a stronger position to bargain for better pay and better working conditions?

There is concern about the potential for AI algorithms to be biased. How should policymakers address this issue?

Historically, we’ve gotten a lot of experience with understanding how to make human processes more accountable: transparency, anti-corruption laws, due-process requirements, and a whole body of administrative law designed to hold human or bureaucratic processes accountable. What we don’t yet have is the corresponding theory or set of mechanisms to hold algorithms accountable. It’s important that government play that role in a technically sophisticated way. But it’s also important to recognize that the alternative to these AI systems is to have decisions made by people, and people are notoriously prone to bias and non-transparency as well.

Technology research and development has an important role to play here, because it’s possible to build algorithms that resist bias, and we want to create a norm that people building systems at risk of bias feel an obligation to make their algorithms bias-resistant.

Recently, several of the biggest tech companies have been criticized for how much power they wield. Are more regulations on them likely?

We’ve passed the time when large tech companies can stand apart from the policy process and say, “Don’t bother us; we’re just over here innovating.” They’ve now taken on an amount of power and influence that requires them to be part of the conversation and to engage in debate about what they can and should do. That may lead to a mild increase in regulation, but I think it has to lead to greater conversation and interchange between policymakers and the companies. These companies provide a lot of value as engines of economic growth, and it would be a shame if we ended up hobbling them, rather than having the companies figure out how to become the best citizens they can be.

You’ve been optimistic about the ways AI is changing society. What makes you confident that the potential benefits will outweigh the costs? 

AI has huge potential to improve the way that we as a society address some of the biggest challenges we face. For example, we’re moving toward a world in which your health care is individualized to your particular situation, to your particular genetic makeup, and a lot of that trend is being driven by advances in AI that make it possible to analyze large amounts of data and figure out how to customize treatments. One of the most important things we can do is be alert to these opportunities and take advantage of them, rather than thinking of AI as this scary thing that’s happening to us.

Interview conducted and condensed by Josephine Wolff ’10