One day, scientists will be able to “see” the images and memories in our minds

Professor Ken Norman leads Princeton’s Computational Memory Lab, which tracks memories in people’s brains.
Peter Murphy

REMEMBER YOUR LAST DATE at the movies? Your memory of that night is a web of associations: The popcorn smell of the theater. Your giddiness. The blue sweater worn by your date. The aftertaste of the work you were doing earlier — a client meeting, an email from your boss. Memory is that web: the moods, sensations, and thoughts of a moment in time. All these coincident details are bound together by the hippocampus, a seahorse-shaped brain region deep in your temporal lobe. Later, the hippocampus plays back the memories to the cortex, stitching place, mood, images, and goals together into a pattern. Professor Ken Norman, head of Princeton’s Computational Memory Lab, explains that this is how free association happens: Remembering one strand of that web (say, the blue sweater) leads to another (the movie, or the taste of the popcorn). When you remember a moment, your brain replays a pattern from the past. Norman’s lab, using algorithms that recognize the pattern of your brain activity at a moment in time, is fishing for those neural echoes of experience.

This is one snapshot of the growing field of computational neuroscience. Matrices of neural activity flickering across your cortex encode your every thought, mood, and memory. Scientists are aiming to crack the brain’s code. What hacking the mind may one day mean is a world in which your car would know when you’re sleepy, your dreams could appear on a screen, and should you find yourself in a coma, your brain could make your thoughts known. Today, this is science fiction — but in principle, when the brain’s code is decrypted, all would be possible. 

Some researchers at the Princeton Neuroscience Institute are using mathematical models to simulate neural processes like learning and attention, allowing them to test hypotheses that can’t be studied in a live human brain. Others apply the algorithms of machine learning to brain patterns, to predict mental states like memories and mental images. Understanding the brain’s code could help scientists develop therapies for conditions like autism, post-traumatic stress disorder (PTSD), and depression; means for amputees to operate robotic prostheses with their brains; and ways for all of us to remember, focus, and communicate better.

“The idea is to make something invisible visible by attaching something to it,” Norman says, explaining how his lab tracks memories in the brain. Snapshots of memories appear on brain scans, encoded in patterns of neural activity and recognized by algorithms tracking mental replay as subjects lie in a functional magnetic resonance imaging machine, or fMRI. 

The trick is to track an image that is clear on brain scans. Faces and scenes, for example, are represented in specialized brain regions packed with cells sensitive to them, helping with social interaction and navigation. So thoughts of faces and scenes can be identified in fMRI scans by their characteristic brain activity. By showing you face photos, scientists tag your memory with a tracer they can see on brain scans. Just as movie scenes may link to the memory of your date, these tracers can be tracked, like the GPS on a car. 
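The logic of that decoding can be pictured with a toy sketch in code (purely illustrative: the voxel counts, templates, and noise levels below are invented, and the lab's real classifiers are trained on actual fMRI data). Each scan is treated as a vector of voxel activations; labeled scans are averaged into a "face" template and a "scene" template, and a new scan is classified by which template it correlates with more strongly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "scan" is a vector of voxel activations.
# Faces and scenes each evoke a characteristic pattern, observed
# through noise, a stand-in for category-selective cortex.
n_voxels = 50
face_template = rng.normal(size=n_voxels)
scene_template = rng.normal(size=n_voxels)

def simulate_scan(template, noise=1.0):
    """One noisy observation of a category's activity pattern."""
    return template + rng.normal(scale=noise, size=n_voxels)

# "Train" the decoder: average labeled scans into category centroids.
train_faces = np.mean([simulate_scan(face_template) for _ in range(20)], axis=0)
train_scenes = np.mean([simulate_scan(scene_template) for _ in range(20)], axis=0)

def decode(scan):
    """Classify a new scan by correlation with each centroid."""
    r_face = np.corrcoef(scan, train_faces)[0, 1]
    r_scene = np.corrcoef(scan, train_scenes)[0, 1]
    return "face" if r_face > r_scene else "scene"

print(decode(simulate_scan(face_template)))
print(decode(simulate_scan(scene_template)))
```

With this much signal, the toy decoder labels fresh scans correctly almost every time; real fMRI decoding fights far lower signal-to-noise, which is why it took machine learning to make it work.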

Why do people misremember? You think you heard about 9/11 on TV, when you really heard it on the radio; you think your love was at first sight, though the feelings really came later — why? In one recent experiment, Princeton researchers in Norman’s lab investigated why an event gets misattributed to a time or place. First, they showed subjects undergoing fMRI a series of images of objects, interspersed with pictures of scenes that served as the thought tags, or tracers. Later, the researchers showed the same people a second series of pictures, this time objects only, with no scenes. Back in the fMRI machine, the subjects were asked to recall which series each image came from. People were more likely to misremember images from the later, objects-only series as coming from the first series, in which the scenes — serving as tracers — were in their brains. The scientists found that brain-activity patterns could predict which items would be misremembered, based on when the tracer was in the brain. We experience that kind of misremembering in everyday life. For example, if you have been talking about a friend and then hear about a movie you’ve seen, you may recall — incorrectly — that you saw it with that friend.

The media sometimes call this kind of research mind-reading: reading out memories, mental images, even dream content from brain patterns, as a lab in Kyoto, Japan, recently did. Norman balks at the term. “The appeal of fMRI is its non-invasiveness,” he says, and the neural decoding he works on requires the subjects’ cooperation. Here, fMRI decoding is used not for eavesdropping, but to understand how the mind is encoded in the brain. Such knowledge has clear medical potential, to help people with troubled mood, concentration, or communication — to build tools for the psyche.

Yael Niv uses computer simulations and behavioral tests to study memory and learning.
Sameer A. Khan

In another lab, down the hall in the PNI’s new home on the southern edge of the campus, assistant professor Yael Niv applies similar technology to explore a different aspect of brain function: learning. “Learning is overwriting an old thing,” Niv explains. “Memorizing is protecting the old thing from being overwritten. There’s a stream of experience coming by all the time, and you have to decide for each new event: Do I learn or do I memorize?”

The battle between learning and memory is the balance between expansion and consolidation, exploration and safety. Memory is a trace of the past maintained in the brain. Learning is change: updating those traces.

Our brains infer patterns easily, Niv explains. If you’re waiting to cross a street, you watch the color of the stoplight but ignore the color of cars. You learn what to filter. If you’re trying to hail a cab, you see the scene differently: Yellow cars pop out. In class, you speak differently to your professor than you do when you meet him at a party. Computers have trouble seeing these patterns; robots are awful at learning this kind of flexibility.

Imagine you’re meeting with your former college adviser for the first time in years. You have an image of what he is like: bearded, smiling, with a distinctive voice. Now you show up, and he’s clean-shaven. What happens to the professor in your head?

Abrupt change tends to create a new memory, while gradual, subtle change modifies the memory already in your head. Niv’s lab has come to this insight in a series of studies using computer simulations and human behavioral tests. The computer models learn by reinforcement, updating when their predictions about outcomes are wrong. In her experiments, Niv found, sudden change prompts people to cluster their memories into two separate blocks. But when the change is gradual, memories blur into one block.
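That logic can be sketched in a few lines (an illustrative assumption on my part: a bare delta-rule learner, far simpler than the models Niv's lab actually uses). The learner nudges its estimate toward each outcome by a fraction of the prediction error; gradual drift keeps those errors small, while an abrupt change produces one large error, the kind of surprise that, in Niv's account, splits experience into a new memory cluster.

```python
# A bare delta-rule learner (an illustrative assumption, much simpler
# than the lab's real models). The estimate v starts at 1.0, as if an
# association has already been learned, and moves toward each new
# outcome by a fraction alpha of the prediction error.
def delta_rule(outcomes, alpha=0.3, v=1.0):
    errors = []
    for outcome in outcomes:
        err = outcome - v        # prediction error: surprise on this trial
        errors.append(err)
        v += alpha * err         # nudge the estimate toward the outcome
    return errors

# Gradual change: the outcome drifts from 0.9 down to 0.0 in small steps.
gradual = delta_rule([0.9 - 0.1 * t for t in range(10)])
# Abrupt change: the outcome jumps straight from 1.0 to 0.0.
abrupt = delta_rule([0.0] * 10)

print(max(abs(e) for e in gradual))  # errors stay modest, around 0.3
print(max(abs(e) for e in abrupt))   # one full-sized error of 1.0
```

The same total change arrives either way; what differs is how surprising any single moment is, and in this picture surprise is what decides between overwriting and starting fresh.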

In the case of the beard, you’ll overwrite: The professor updated to have no beard is not fundamentally changed, so it’s fine to overwrite, as if saving a file on a computer. But there are examples where the thing to do is to make a separate memory. 

Remember Pavlov’s dogs? The bell, the food. In 1901, Russian physiologist Ivan Pavlov showed that if you pair something of value, like a meal, with an initially meaningless stimulus, like a bell or light, an animal comes to associate that stimulus with pleasure or pain: Dogs salivate to bells; mice, as later studies showed, freeze to a light once it has been paired with a shock. Humans form such associations, too. Niv’s computational work on reinforcement learning, as this process is known, is aimed at unlearning such traumatic associations.

Sam Gershman *13 studies memory and fear.
Yael Niv

Sam Gershman *13, a soon-to-be Harvard assistant professor of neuroscience, recently started translating Niv’s basic findings into practical areas, focusing his work on modeling memory and fear: How do some people overwrite a toxic mental trace and learn to move on without fear, while others get stuck in a negative loop? As many as 75 percent of American adults, by one estimate, are exposed to severe trauma in their lifetimes, yet only 7 percent show symptoms of PTSD. We all go through tragedy, but each year, only 6.7 percent of adults experience a depressive episode, according to the National Institute of Mental Health. What kinds of therapy might help people recover from traumatic memories?

People with phobias and PTSD often are treated with “exposure therapy.” If a person is afraid of spiders, for example, the therapist might coax him out of his fear by showing him pictures of spiders in a safe environment. After repeated exposures, the patient stops responding with fear.

But the effect of the therapy often doesn’t last — the fear memory remains intact, buried but raw, and often returns over time or in a new place. The reason, Niv believes, is the mismatch of context: Since the atmosphere of the psychologist’s office is so remote from the trauma itself, the brain forms a new memory, and leaves the toxic one intact.

Using computer models, Gershman is out to test this prediction. Collaborating with a rodent lab at the University of Texas, Austin, Gershman came to a conclusion that challenged the way we commonly think about learning and unlearning fear. First, the animals were taught that a tone was accompanied by a shock. When scientists suddenly stopped pairing the tone with the shock, the animals’ fearful reaction to the sound eased, but only temporarily. After a delay, in a new context, or when given a “reminder shock,” the fear of the tone returned. But when the shock was phased out gradually, the animals formed a sturdy safety memory: They no longer froze when they heard the tone. Gershman’s interpretation is that the gradual approach changes the old fear memory directly — updating it to include new “safety” information, rather than forming a distinct memory. A similar approach might be used in human therapy: a gradual withdrawal of fear. 
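That interpretation can be caricatured with a thresholded version of a delta rule (my sketch, not Gershman's actual latent-cause model): a small prediction error updates the existing fear memory in place, while a large one spawns a separate memory and leaves the old fear untouched.

```python
# A thresholded delta rule (a caricature for illustration, not the
# actual latent-cause model): small surprises update the current fear
# memory in place; a big surprise spawns a separate memory instead.
def extinguish(shock_levels, alpha=0.5, threshold=0.6):
    memories = [1.0]                     # one fully learned fear memory
    for shock in shock_levels:
        err = shock - memories[-1]       # prediction error on this trial
        if abs(err) > threshold:
            memories.append(shock)       # large surprise: new memory
        else:
            memories[-1] += alpha * err  # small surprise: overwrite
    return memories

abrupt = extinguish([0.0] * 8)                      # shocks stop at once
gradual = extinguish([0.8, 0.6, 0.4, 0.2, 0.0, 0.0, 0.0, 0.0])

print(abrupt)   # the old fear memory (1.0) survives beside a new safe one
print(gradual)  # a single memory, updated nearly to zero
```

In the abrupt run the dormant fear memory is still there, waiting for a reminder; in the gradual run there is nothing left to reawaken, which is the pattern the rodent experiments showed.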

Addiction is another space where this model is expected to apply. Abstaining from drugs, for an addict, is abrupt. “Until now you’ve had all these associations between the drugs, your mates, the cues, and the high state,” Niv says. “And now you abstain completely.” Since the context is new, the memory that’s formed when a person quits drugs cold turkey also is new — and “indeed, the great problem with addiction is the relapse rate” of 60 percent or more. 

“We’re thinking that maybe gradual withdrawal from the drug is going to be more effective, because you’re going to take whatever you’ve learned before and modify it, rather than protect the old memory by changing things quickly,” she says.

The difference between madness and imagination may lie in how well we keep track of where we are: how much our attention is driven by the outer world versus our inner moods and memories — those equations written on the private window of our minds. “We live together,” Aldous Huxley once wrote, “but always, and in all circumstances, we are by ourselves.” His view captures the isolation of mental disorder: trapped inside a broken brain, untranslatable. The Princeton Neuroscience Institute’s goal is to crack the codes that hold those moods and memories — to map this secret world, like outer space or the ocean floor. If its scientists succeed, those isolated minds won’t be so stranded anymore.

Taylor Beck ’07 is a writer in New York.