Experimenting in the Classroom, Professors Weigh Whether AI Is Friend or Foe
Researchers in the new group Princeton Societal AI are pondering the relationship between humans and computers
Eighty-six percent of college students have used artificial intelligence academically, according to a 2024 global survey. But students aren’t the only ones grappling with when and how to use the technology. Faculty across Princeton have questioned whether AI is a powerful collaborator that can enhance student learning or a shortcut that dulls critical thinking, and have debated whether to use it in their courses at all. Fears of cheating are top of mind, and the University is currently weighing a proposal to require proctoring for all in-person examinations, a shift from a 133-year-old tradition of unproctored exams under the Honor Code.
Princeton Societal AI, a new transdisciplinary group of researchers from across campus, spanning the humanities, social sciences, natural sciences, and engineering, has been thinking about the relationship between humans and computers.
One of them, Janet Vertesi, an associate professor of sociology and a specialist in science, technology, and society, says she tries to create a classroom environment where students feel they are the creative engine behind their work and take ownership of their ideas.
In one course she taught in the fall, she and her colleague, whom she described as “very pro-AI,” created an AI policy document making clear that students are responsible for their own work and must disclose any use of AI. In addition, students must explain how they protect their work against well-known problems with AI, such as hallucinations and factual inaccuracies.
In a design course Vertesi taught, she says, students conducted a “cultural probe” to explore how their peers felt about AI. Rather than taking a quantitative approach, they asked questions about what pressures push students to use it. The responses revealed that students are navigating heavy course loads, distribution requirements outside their areas of expertise, and intense career pressures, conditions that can make the “choice” to use AI feel less voluntary than it appears.
Andrés Monroy-Hernández, an associate professor of computer science and co-leader of the Princeton Human-Computer Interaction Lab, says that he is “on team humans rather than team machines.”
Monroy-Hernández designs his courses so that AI cannot replace student thinking. He creates semester-long projects for students connected to real-world partners, including nonprofits and international organizations. The assignments are complex, requiring interviews, quantitative data analysis, and prototype building, so large language models can be used for only part of the process.
He tries to “create opportunities for the kinds of problems that people are trying to solve in the classroom to be so big and complex that the AI can only be a partner.”
More broadly, Monroy-Hernández argues that AI is revealing deeper problems that already existed. In education, he says, learning has become “industrialized,” focused on memorization and completing tasks rather than meaningful intellectual engagement. When AI can automatically complete assignments, the issue is not just misuse of AI but how courses are structured in the first place. He worries that automated systems may worsen existing inequalities and wonders how AI should work in entry-level courses. If students never learn foundational skills, he says, it may be hard for them to use sophisticated tools later.
A similar tension is shaping conversations in the humanities, where professors are experimenting with AI’s potential while teaching students to approach it critically. Meredith Martin, a professor of English and director of the Center for Digital Humanities, says her courses incorporate AI directly into the classroom, but as one tool among many.
In her Data and Culture course, students spend part of the semester learning the history of natural language processing and the rest using tools to break down a humanistic and computational question and to write about it. “They really become fluent in the language that underpins AI development, but also fluent in the humanistic and historical skills that are really necessary to contextualize and critique it,” she says.
Her goal is to prevent students from using AI as a form of “cognitive offloading,” and to help them build the judgment to analyze it effectively and to recognize when models might be wrong, incomplete, or misleading.
Whether professors should incorporate AI into their courses largely depends on their field and whether it meaningfully supports a class’s pedagogy, Martin says. But she believes that universities should treat this moment as more than a policy debate.