Six Princetonians were on Time magazine’s list of the 100 most influential people in AI

A decade ago, Cynthia Rudin *04 was working with New York City power company Con Edison to help predict manhole events, such as fires or explosions, based on more than 100 years of data the company had collected. Rudin trained machine learning models on that historical data set but struggled to figure out why the models made the predictions they did. She switched to simpler models and found that not only were they every bit as accurate, but she could also begin to understand the predictions they made and what those predictions were based on.

“At that point I realized there was a real value in interpretability,” says Rudin, now a professor of computer science at Duke University, speaking of a characteristic of artificial intelligence (AI) systems that enables users to understand — or interpret — how and why the systems reach their decisions. “That’s why I started working in interpretable models, but it was really not very popular when I started working on it.”

But with a growing number of policymakers turning their attention to artificial intelligence and proposing new laws and regulations about when high-risk AI decision-making systems have to be explainable and transparent, Rudin’s work is now very relevant and in demand.
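The kind of comparison Rudin describes can be illustrated on toy data. The Python sketch below is purely hypothetical and uses synthetic data with scikit-learn defaults; it has nothing to do with Con Edison's records or Rudin's actual models. It fits a black-box ensemble and a shallow decision tree to the same task, then prints the tree's rules so a person can read exactly how it decides.

```python
# Toy illustration of interpretability (not Rudin's models or Con Edison data):
# compare a black-box ensemble with a shallow decision tree, then print the
# tree's rules, which, unlike the ensemble's internals, can be read directly.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a tabular prediction problem.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("black-box accuracy:", round(black_box.score(X_test, y_test), 3))
print("simple-tree accuracy:", round(simple.score(X_test, y_test), 3))

# The whole model fits on a screen: every prediction traces back to a rule.
print(export_text(simple, feature_names=[f"feature_{i}" for i in range(8)]))
```

Whether the simpler model matches the ensemble on accuracy depends on the data; Rudin's finding was that, for her problems, it did. The difference is that the shallow tree's rules can be printed and audited, which is what makes a model interpretable in the sense she describes.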


The release of ChatGPT in November 2022 spurred a wave of interest in generative AI, as many people who had never had much direct interaction with natural language algorithms were impressed by how sophisticated the text ChatGPT generated could be. The success of ChatGPT motivated new policy discussions about how best to govern these technologies, but it also revived fears that these tools would replace people's jobs, render education and writing assignments obsolete, and make our most crucial decisions for us, on everything from who should go to jail to who has a serious medical diagnosis, in ways we couldn't even begin to understand or unpack.

At the forefront of this field, working to create increasingly impressive, sophisticated, understandable, and ethical AI systems and tools, are many Princetonians who, like Rudin, have carved out niches on which to focus their efforts. Last year, when Time magazine released its list of the 100 most influential people in AI, six Princetonians were included: alumni Dario Amodei *11, Fei-Fei Li ’99, Eric Schmidt ’76, and Richard Socher *09, along with Princeton professor Arvind Narayanan (see On the Campus, page 11) and grad student Sayash Kapoor (see Research, December issue).

For Schmidt, the former CEO and chairman of Google, AI has been at the heart of his post-Google career. He now focuses largely on philanthropic ventures through the Schmidt Futures organization and helps educate policymakers in Washington, D.C., about the promise and perils of AI through initiatives he has chaired, including the Special Competitive Studies Project and the National Security Commission on Artificial Intelligence.

Schmidt says he believes that AI will transform everything, mostly for the better. “Imagine that each and every one of us is twice as productive in what we do as adults. Better teachers, doctors, philosophers, entertainers, inventors, and even CEOs,” he tells PAW by email. “The advent of an intelligence that sees patterns we don’t see, and analyzes choices we can’t do in our lifetimes, and generates new content and systems, is a profound shift in human history. The ability to rapidly advance in science, especially climate change, is a huge boon coming in the next few years.”

One of the Princetonians looking to AI for sustainability purposes is Ha-Kyung Kwon ’13, a senior research scientist at the Toyota Research Institute, who is using AI to design new polymers that can help build better batteries to fuel green tech. “The reason AI is particularly attractive for this is that when you’re designing new polymers, the number of things you can vary is really vast,” Kwon explains. “It’s a needle-in-a-haystack problem and usually we’re trying things based on what we know from what other people have already tried. Sometimes that’s a good approach, but a lot of times those approaches don’t necessarily lead to breakthroughs, and breakthroughs are really what we need.”
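The needle-in-a-haystack framing Kwon describes corresponds to a common pattern in materials informatics: fit a surrogate model to the few candidates that have already been measured, then use it to rank the enormous pool of untested ones. The Python sketch below is a generic, hypothetical illustration of that loop; the descriptors and the target property are invented, and it is not Kwon's or the Toyota Research Institute's actual polymer workflow.

```python
# Hypothetical sketch of model-guided screening: rank a large pool of
# untested candidates using a surrogate model trained on a few measurements.
# (Not Kwon's or Toyota Research Institute's actual polymer workflow.)
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Made-up "descriptors" for 10,000 candidate polymers (the haystack).
candidates = rng.random((10_000, 6))

# Pretend only 50 candidates have been synthesized and measured so far.
measured_idx = rng.choice(len(candidates), size=50, replace=False)
measured_X = candidates[measured_idx]
# Invented "property" (say, a battery-relevant figure of merit) plus noise.
measured_y = measured_X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5]) \
             + rng.normal(scale=0.1, size=50)

# Surrogate model learns descriptor -> property from the measured examples.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(measured_X, measured_y)

# Rank every untested candidate; send the most promising ones to the lab.
predictions = surrogate.predict(candidates)
top_picks = np.argsort(predictions)[::-1][:10]
print("candidates to test next:", top_picks)
```

A real screening campaign would repeat this loop, adding each round of new measurements to the training data before re-ranking, so the haystack shrinks with every pass.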


Other alums have chosen to apply AI to different problems, including James Evans ’16, the co-founder of CommandBar. Together with co-founders Richard Freling ’16 and Vinay Ayyala ’16, Evans raised nearly $24 million in 2023 for CommandBar, a platform that uses AI to help people navigate apps and software more easily.

“Instead of requiring users to figure out this maze of menus and buttons and toolkits when they use a new piece of software, we wanted to allow them to just describe what they’re trying to do in words,” Evans says. “We’re all so good at using Google to find things, we thought, let’s make a tool that when a user plops into a product for the first time you can just use whatever the words are you’re used to for what you’re trying to do. Like Clippy [Microsoft Word’s virtual assistant, which was an animated paperclip], but less annoying and more accurate.”
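CommandBar's internals aren't described here, but the interaction Evans sketches, letting users state an intent in words and having the software find the matching command, can be illustrated in a few lines. The Python toy below scores actions by simple word overlap; the action names and queries are hypothetical, and a production system would use language-model embeddings or an LLM rather than word counting.

```python
# Toy sketch of the idea Evans describes: map a user's plain-language request
# to an app action. Uses simple word overlap; CommandBar's real system is not
# described in this article, and these action names are hypothetical.
def score(query: str, action_description: str) -> int:
    """Count how many words the query shares with an action's description."""
    query_words = set(query.lower().split())
    action_words = set(action_description.lower().split())
    return len(query_words & action_words)

ACTIONS = {
    "invite_teammate": "invite a teammate to your workspace",
    "export_csv": "export the current report as a csv file",
    "change_theme": "switch between light and dark theme",
}

def find_action(query: str) -> str:
    """Return the action whose description best matches the user's words."""
    return max(ACTIONS, key=lambda name: score(query, ACTIONS[name]))

print(find_action("how do I add a coworker to this workspace"))   # invite_teammate
print(find_action("download this report as a spreadsheet file"))  # export_csv
```

The point is the interaction pattern, not the matching method: the user never has to learn where a command lives in the menus.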

Some alums are also working directly to develop new AI systems. For instance, Amodei, the CEO and co-founder of Anthropic, is at the helm of one of the main competitors to OpenAI — for which he previously worked. Anthropic distinguishes itself in the AI ecosystem by touting a safer, more ethical, and more responsible approach to AI technologies, for instance, by offering developers a way to specify values for their AI systems.

Socher also founded and runs You.com, an AI company whose chat-search assistant combines elements of search engines with personal assistants to help people find information and answer questions. Socher is committed to making sure You.com provides its users with more privacy than other search engines do, and to that end the service does not show personalized ads. Rather than selling ads, Socher plans to eventually monetize the service through subscription fees. Like Amodei, he links his vision for AI to specific values such as privacy, not just to making the most advanced technology possible.

Li, a computer science professor at Stanford, has taken a similar approach to her work with AI. In addition to her pioneering research on image recognition, she co-founded AI4ALL, a nonprofit that strives to increase diversity in AI by launching outreach programs for students interested in the field. And like Schmidt, Li has spent time in Washington, D.C., lobbying policymakers to provide more computing resources for AI research and more support for work in areas like AI safety.

By Schmidt’s estimation, the U.S. government has so far “done a pretty good job” of regulating AI “by not prematurely freaking out and regulating this new powerful technology.” Policymakers in the United States are “working with the industry to understand the most important issues while not slowing it down,” Schmidt says. But, he cautions, “this is all going to happen very fast compared to governments, cultures, and normal industries.”