Will Your Thoughts Always Be Private?

Q&A: Joshua Greene *02

By Anna Azvolinsky *09

Published Jan. 21, 2016

Joshua Greene *02, the director of the Moral Cognition Lab at Harvard University, uses fMRI — functional magnetic resonance imaging — to study how our brains make decisions. The technology “detects the concentration of oxygenated versus de-oxygenated blood, so you can see where oxygen is being used in the brain,” a proxy for the activity of neurons, he explains. That gives scientists a way to see what is taking place in the brain when someone is making decisions — assuming that person happens to be inside an fMRI machine at the time. Greene, who received his Ph.D. in philosophy at Princeton, uses fMRI to explore the process of making moral judgments, and his new book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, was published in November. PAW spoke with him about the technology’s potential and shortcomings: Are we approaching a time when we cannot guard even our thoughts?

Using fMRI technology, can we read people’s thoughts?

The short answer at this point is: no. Finding a statistical difference does not mean that you can read someone’s brain. It just means that instead of being wrong 50 percent of the time by guessing, you would be wrong 40 percent of the time. Here’s an example: Am I thinking of a truck or a chicken? You can have a computer program register the brain patterns that occur when I am thinking truck, truck, truck and then chicken, chicken, chicken. This trains the computer to make a better-than-chance guess, on the next brain scan, about whether I am thinking of a truck or a chicken. And that is with someone who is being cooperative. If I don’t want to cooperate and think about what I am supposed to think about, then forget it!
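In computational terms, the training step Greene describes is ordinary supervised classification. The sketch below is a hypothetical illustration on simulated voxel patterns, not a description of any actual study; the data dimensions, the strength of the “truck” versus “chicken” signal, and the choice of a logistic-regression classifier are all assumptions added for the example.

```python
# Hypothetical sketch of fMRI "decoding" on simulated data, not real scans.
# Two thought conditions ("truck" = 0, "chicken" = 1) differ by a weak,
# noisy activation pattern; a classifier learns to guess better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 50                     # assumed sizes

labels = rng.integers(0, 2, n_scans)            # which thought each scan holds
signal = rng.normal(0, 0.15, n_voxels)          # weak condition-specific pattern
scans = rng.normal(0, 1, (n_scans, n_voxels)) + np.outer(labels, signal)

# Cross-validated guessing accuracy: reliably above the 50 percent of a
# coin flip, but nowhere near the certainty "mind reading" would suggest.
accuracy = cross_val_score(LogisticRegression(), scans, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```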

What about something more complicated than truck versus chicken?

For anything you can name, there is likely to be a statistical difference in brain activity. Think of it this way: Men and women are different heights on average. Suppose the average man is 5-foot-9, and the average woman is 5-foot-4. That is a statistically significant difference. If I tell you that a person is 6 feet tall, you would be wise to guess this is a man, but you wouldn’t be guaranteed to be right. It’s the same idea here. You can make a better-than-chance guess about what someone is thinking by looking at that person’s brain data, but you are still just trying to beat the odds — nothing like “reading” a mind as we ordinarily think of it.
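Greene’s height analogy can be made concrete with a quick worked example. The two averages come from his answer; the 3-inch standard deviations, the normal distributions, and the 50/50 base rate are assumptions added purely for illustration. Under those assumptions, Bayes’ rule makes the 6-foot person a man with roughly 95 percent probability: a wise guess, not a guarantee.

```python
# Worked version of the height analogy. The averages are from the interview;
# the 3-inch spread, normality, and equal base rates are illustrative assumptions.
from scipy.stats import norm

mean_man, mean_woman, sd = 69.0, 64.0, 3.0   # average heights in inches
height = 72.0                                # the observed 6-foot person

# How likely is this height under each group?
p_man = norm.pdf(height, mean_man, sd)
p_woman = norm.pdf(height, mean_woman, sd)

# With equal base rates, Bayes' rule gives roughly 95 percent: confident,
# but still a statistical guess rather than a guarantee.
print(f"P(man | 6 ft) = {p_man / (p_man + p_woman):.0%}")
```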

Are we close to a time when we can read minds with fMRI?

If close is five years, then I would say we are not close. I don’t think we speak the brain’s language yet.

Can fMRI be used to detect emotions?

It turns out that tracking different emotional categories — like happiness, disgust, or pride — has been surprisingly hard. These emotions don’t have very specific neural signatures. Fear is more reliably associated with a part of the brain called the amygdala. But there are a lot of things besides fear that will increase activity in the amygdala. So there are statistical correlations between certain kinds of emotional experiences and increased activity in parts of the brain that you can detect with fMRI, but at least when it comes to emotions, things are still fuzzy.

What can scientists detect well using brain-activity patterns?

There is a part of the brain — the fusiform face area, called the FFA — that seems to respond rather specifically to faces. So if you’re looking at any face, even a smiley face or a dog’s face, you see increased activity in this region. From studies with monkeys, we know there are cells that respond only to faces. A colleague of mine has shown that specific patterns of activity in the brain happen when we see people of different races. But even the specificity of the FFA is hotly disputed.

Here’s another example, involving memory: Is a person remembering a two-digit number or an eight-digit number? Within a part of the brain called the dorsolateral prefrontal cortex, we can see a pretty clear difference between the easier memory problem [two digits] and the harder memory problem [eight digits] — at least if we’re comparing within a given person. More generally, for any mental difference you can specify in common-sense terms, you can find a corresponding difference in the brain. But that doesn’t mean you can “read” the mental state off of the brain. Many different mental processes result in similar patterns.

How do you use fMRI as a tool for your research?

I primarily study the neuroscience of moral decision-making. So I try to use brain imaging and other tools to figure out how people make moral judgments and decisions. I am interested in understanding how the brain works at a psychological level as well as at a neural level.

Some of your work relies on fMRI to monitor the brain activity associated with honesty and dishonesty.

What is it that makes someone lie or not? It might be an emotional response. There might be a big fear of getting caught that keeps you in line, or maybe you get a warm glow of goodness when you decide to do the honest thing. Or it could be that the automatic tendency is to lie, and it takes a major brain effort to be honest. We didn’t know. So we did a study in which we gave people repeated opportunities to gain money dishonestly, by lying. What we found is that when people are honest, they don’t show any signs of doing anything special. They just seem to walk right by the temptation. The people who behave dishonestly seem to exert a lot of extra effort both when they are being dishonest and when they are being honest. Thinking “I could cheat now, but I won’t” involves a lot of brain activity — at least for people who end up cheating sometimes.

So honesty is a default state for some people, but it takes effort to lie?

For people who are consistently honest under a time-pressure test, yes. That’s what these results suggest. But other studies have shown that you can put people in distracting situations in which their ability to control themselves is disrupted, and this makes them, on average, less honest. So this is evidence that honesty can also be a result of active self-control, depending on the situation.

fMRI brain scans have been used in a few criminal court cases in lieu of a lie detector. What do you think about that?

There have been a couple of cases, but the courts have generally rejected these as unreliable evidence, which is a good decision in my opinion. I don’t think the technology is there, and it certainly does not have the diagnostic value you would need in a criminal case. Having a scientifically meaningful, statistically significant result doesn’t mean you can just look at someone’s brain activity and read it with the kind of accuracy you would need to rely on the information in a legal case — at least one in which we want to be sure beyond a reasonable doubt.

Let’s say someone doesn’t want to cooperate in an fMRI test — is that similar to trying to cheat on a lie-detector test?

Right. In the couple of studies that have looked at people trying to fool the fMRI test, subjects use what are called countermeasures, and the countermeasures work very well. My advice: If you want to know someone’s secrets, don’t scan their brains — go through their garbage! You can learn so much more about people by doing boring, low-tech things like following them around or reading their Facebook page. There is just so much more information about people’s inner lives out there in the world compared to what you get by brain scanning.

Interview conducted and condensed by Anna Azvolinsky *09
