As Generative AI Advances, Princeton Tries to Keep Policies Updated

PAW talked to students and faculty and heard all kinds of ways the tools are being used

Graphic: Robot hand in a crosswalk (Moor Studio / iStock)
By Julie Bonette with reporting by Anika Asthana ’25

Published Jan. 24, 2024

Since ChatGPT launched in November 2022, the world, and Princeton, have been trying to understand and stay on top of ever-evolving generative artificial intelligence (AI) software. University administrators are working on a second update to Princeton’s guidelines, while some students remain unclear on the policies.

In January 2023, Dean of the College Jill Dolan and Dean of the Graduate School Rodney Priestley encouraged faculty to embrace generative AI, even as public schools in New York City and top universities in France and India were banning the software. Their email provided guidelines, such as designing assignments with care to minimize the risk of academic dishonesty, and encouraged faculty to avoid misunderstandings by stating their AI/ChatGPT policies explicitly in course syllabi.

“We can’t ignore these tools, and I think that’s become that much more evident even in the last six months,” said Kate Stanton, director of the McGraw Center for Teaching and Learning and senior associate dean. “That might mean asking students to engage critically with them, that might mean setting a policy that actually prohibits their use in the classroom, but we can’t not respond to them.”

A recent survey of more than 2,600 college students and faculty nationwide, conducted by Tyton Partners, found that 49% of college students had used AI writing tools, compared to 22% of faculty.

Over the summer, Dolan convened a working group co-chaired by Stanton and Cecily Swanson, associate dean for academic advising, that sought to “clarify the University’s current approach,” according to an August memo sent to all faculty and undergraduates. Fundamentally, though, Princeton’s policies have remained the same as before the introduction of generative AI; for example, students cannot use outside tutors, whether that tutor is an AI bot or not.

Generative AI “doesn’t change anything” about Princeton’s academic integrity expectations, said Swanson. “It’s just helped us realize that this is a moment, an opportunity, to further elaborate our commitment to, for instance, extraordinary liberal arts education, and think about how this will inform and enrich teaching and learning, in ways that feel challenging but mostly exciting … .”

Some students feel Princeton’s policies, which give ample flexibility to faculty, are “not very clear,” according to Kellia Gatete ’26. “We still don’t really understand the limits to which [we] are and are not allowed to use it.”

“I feel like a lot of my professors don’t even address it,” said Genevieve Shutt ’26, and most faculty “don’t have a clue” if students are using the software improperly.

This academic year, a case study on generative AI was added to freshman orientation programming and an academic integrity unit for juniors, and the McGraw Center offered a series of informational sessions and workshops on generative text, image, and coding software, to give Princeton faculty “a chance to play around with and test out these tools,” according to Stanton.

It was “a safe space in which to explore the technologies and to understand their power, their limitations,” according to Jessica Del Vecchio, McGraw’s senior associate director of teaching initiatives and programs for faculty, who organized the workshops.

According to Del Vecchio, the fall 2023 semester was a time when “people were really trying things out, for the first time maybe,” and faculty showed a strong interest in the topic at McGraw’s sessions.

In interviews with PAW, students and faculty described all kinds of ways the tools are being used, from generating different styles of writing to analyze genre, to drafting introductions to lab reports, to handling mundane tasks.

“I use it for everything, for emails, for general knowledge — instead of Googling stuff, I’ll just ask ChatGPT,” said Jeremiah Giordani ’25, a computer science major, who believes he is one of the biggest ChatGPT users at Princeton. “I really try to use it to its fullest extent, and when allowed.”

To help understand difficult material, Venezia Garza ’25 uses Quizlet to generate custom practice quizzes and asks ChatGPT to provide practice problems.

Swanson cautioned, though, that not all students are experts. “It’s not necessarily like [students are] arriving at Princeton uniformly understanding what this tool can and cannot do and how you should and should not use it.”

But it’s clear AI is here to stay, as the University continues to make investments in AI and machine learning research (see sidebar). And a “pretty significant” update to Princeton’s guidelines may come as soon as the end of January, according to Stanton, in order to be “more robust and cover more examples from across the disciplines.”

Thinking long-term, Stanton said generative AI “require[s] us to ask hard questions — like, what is it we want students to learn? … It asks us to be more explicit and more persuasive about demonstrating the value and purpose of our classrooms.”
