Professor Brenden Lake Zeroes In Where Machine and Human Intelligence Meet

‘One of the biggest scientific mysteries is how our own minds work,’ Lake says

Brenden Lake’s daughter, Luna, participated in a study that recorded her experiences for one hour each week via a camera attached to a helmet she wore. Lake hopes to use the data to train an AI model.

Courtesy of Brenden Lake

By Carlett Spike

Published Dec. 19, 2025


Brenden Lake first encountered the use of technology to understand human intelligence during a scientific research opportunity in high school. It opened his world to new possibilities.

“I learned about neural networks, this computational approach for trying to understand the mind and the brain, and I was just completely captivated,” Lake says. “It just seemed like there was this field of cognitive science where you could ask big questions.”

Brenden Lake

Wai Keen Vong

Lake went on to earn degrees in cognitive science and symbolic systems from MIT and Stanford University, respectively, before working at NYU and Meta AI. In the fall, he joined Princeton’s faculty as an associate professor of computer science and psychology and taught the class Computational Models of Cognition.

His lab focuses on the intersection of machine and human intelligence in the hopes of advancing society’s understanding of both. “One of the biggest scientific mysteries is how our own minds work,” Lake says. Through this understanding, his goal is to create AI systems that process data in a way that resembles human thinking.

In 2024, Lake created the first AI model that could learn words from the experience of a single child. To understand how children learn in the earliest stages of life, Lake’s team trained the model on video captured by a camera attached to a helmet the child wore. The team used about 60 hours of footage collected over a two-year period, roughly 1% of the child’s waking hours, Lake says.

“We showed, for the first time, that a neural network trained on just a subset of what a child could have experienced could link words to their visual counterparts,” says Lake, who began this research at NYU. For example, when researchers typed the word “ball” into the system, it often correctly picked the image of a ball.
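
That word-to-image test is, at heart, a retrieval problem: embed the word and each candidate image in a shared space, then pick the image closest to the word. The sketch below is purely illustrative and assumes a contrastive-style model with separate text and image encoders; the encoders, dimensions, and the vocabulary index for “ball” are placeholders, not details from Lake’s study.

```python
# Illustrative sketch (not Lake's code): given a word and several candidate
# images, embed both into a shared space and return the best-matching image.
import torch
import torch.nn.functional as F

# Placeholder encoders standing in for the trained text and image networks.
text_encoder = torch.nn.Embedding(num_embeddings=100, embedding_dim=64)
image_encoder = torch.nn.Linear(2048, 64)  # e.g., on top of pooled CNN features

def pick_image(word_id: int, image_features: torch.Tensor) -> int:
    """Return the index of the candidate image most similar to the word."""
    with torch.no_grad():
        text_vec = F.normalize(text_encoder(torch.tensor([word_id])), dim=-1)
        image_vecs = F.normalize(image_encoder(image_features), dim=-1)
        similarities = image_vecs @ text_vec.T  # cosine similarity per image
    return int(similarities.argmax())

# Toy usage: four candidate images represented by random feature vectors.
candidates = torch.randn(4, 2048)
ball_word_id = 7  # hypothetical vocabulary index for "ball"
print(pick_image(ball_word_id, candidates))
```

In a real evaluation, the feature vectors would come from actual frames of the head-camera footage, and the model would be judged on how often the highest-scoring image matches the queried word.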

These findings could help resolve a long-standing question among researchers who study childhood development: how children learn to associate words with objects and ultimately understand what words mean. “Today’s AI systems can start to piece that together,” Lake says.

In 2023, his daughter, Luna, then 6 months old, began participating in a similar study that records her experiences for one hour each week. Lake hopes to use this footage to train another AI model on the kind of sensory input a toddler is exposed to. He’s tentatively calling it a LunaBot.

Many concerns have been raised about AI’s capabilities and consequences over the past several years, from hallucinations in which systems generate false information to instances of troubling advice, such as the case of a teen who died by suicide in April after interactions with a chatbot. When asked about the potential negative implications of training models to respond more like humans, Lake says he’s hopeful this type of research could help address these problems.

“My personal bet is that making progress on this joint enterprise of not just focusing on better machine intelligence sort of at the neglect of understanding our own intelligence could help to address some of those risks,” he says. “I think some of those risks come from those systems not really understanding how to help somebody, or what somebody is looking for.”

More broadly, he says he believes building more human-centered AI could have positive impacts in areas such as education. For example, if AI models could predict how children learn, that could lead to new teaching strategies.

“There’s this huge push to try to get systems to do math and step-by-step reasoning more accurately,” Lake says, “but if we can model the types of struggles and successes, say of a middle school student learning algebra, we could potentially simulate different innovations for teaching.”
