Professor Arvind Narayanan and Sayash Kapoor Explain AI

Collage of Princeton Professor Arvind Narayanan and Sayash Kapoor

David Kelly Crow

By Carlett Spike

Published Sept. 23, 2024


The book: Since AI entered the scene, there has been a lot of curiosity and confusion, and society’s understanding of what the technology can and cannot do remains fuzzy. In this new book, Princeton professor and computer scientist Arvind Narayanan, along with Princeton graduate student Sayash Kapoor, unpacks misleading claims about AI and the harms they could ultimately lead to. AI Snake Oil (Princeton University Press) warns readers of AI’s potential risks, including why we should be concerned about what people and big tech companies will do with the technology.

The authors: Arvind Narayanan is a professor of computer science at Princeton and director of the Center for Information Technology Policy. He is the author of a number of books, including Bitcoin and Cryptocurrency Technologies.

Sayash Kapoor is a Ph.D. candidate in computer science at Princeton. Before studying at the University, he worked as a software engineer at Facebook. 

Excerpt:


Imagine an alternate universe in which people don’t have words for different forms of transportation — only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster — so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.


Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.


Artificial intelligence, AI for short, is an umbrella term for a set of loosely related technologies. ChatGPT has little in common with, say, software that banks use to evaluate loan applicants. Both are referred to as AI, but in all the ways that matter — how they work, what they’re used for and by whom, and how they fail — they couldn’t be more different.


Chatbots, as well as image generators like DALL-E, Stable Diffusion, and Midjourney, fall under the banner of what’s called generative AI. Generative AI can generate many types of content in seconds: chatbots generate often-realistic answers to human prompts, and image generators produce photorealistic images matching almost any description, say “a cow in a kitchen wearing a pink sweater.” Other apps can generate speech or even music. Generative AI technology has been rapidly advancing, its progress genuine and remarkable. But as a product, it is still immature, unreliable, and prone to misuse. At the same time, its popularization has been accompanied by hype, fear, and misinformation.


In contrast to generative AI is predictive AI, which makes predictions about the future in order to guide decision-making in the present. In policing, AI might predict “How many crimes will occur tomorrow in this area?” In inventory management, “How likely is this piece of machinery to fail in the next month?” In hiring, “How well will this candidate perform if hired for this job?”


Predictive AI is currently used by both companies and governments, but that doesn’t mean it works. It’s hard to predict the future, and AI doesn’t change this fact. Sure, AI can be used to pore over data to identify broad statistical patterns — for instance, people who have jobs are more likely to pay back loans — and that can be useful. The problem is that predictive AI is often sold as far more than that, and it is used to make decisions about people’s lives and careers. It is in this arena that most AI snake oil is concentrated.


AI snake oil is AI that does not and cannot work as advertised. Since AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil. This is a major societal problem: we need to be able to separate the wheat from the chaff if we are to make full use of what AI has to offer while protecting ourselves from its possible harms, harms which in many cases are already occurring.


This book is a guide to identifying AI snake oil and AI hype. In it, we’ll give you essential vocabulary to tease apart generative AI, predictive AI, and other types of AI. We’ll share common-sense ways of assessing whether or not a purported advance is plausible. This will make you read news about AI much more skeptically and with an eye toward details that often get buried. A deeper understanding of AI will both satisfy your scientific curiosity and translate into practical ideas on how to use — and when not to use — AI in your life and career. And we will make the argument that predictive AI not only does not work today but will likely never work, because of the inherent difficulties in predicting human behavior. Finally, we hope that this book will get you thinking about your own responsibilities — and opportunities for change — with respect to the harmful implications of these tools.


Excerpted from AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Copyright 2024 by Arvind Narayanan and Sayash Kapoor. Reprinted by permission of Princeton University Press.


Reviews:


"A worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives — and a convincing plea to take care in how we interact with it."— Elizabeth Quill, Science News


"[A] solid overview of AI’s defects."— Publishers Weekly
