You, Uploaded: Michael Graziano ’89 *96 on Rethinking Consciousness

By Charles Wohlforth ’86

Published Nov. 27, 2019

4 min read


Michael Graziano ’89 *96

Courtesy Michael Graziano ’89 *96

The science-fiction fantasy of a computer with conscious self-awareness could become reality within a generation, according to a new book by psychology and neuroscience professor Michael Graziano ’89 *96. Rethinking Consciousness (W.W. Norton) explores Graziano’s theory of consciousness, which demystifies self-awareness into a function that a software engineer could implement in a computer. Its implication is that the minds of biological humans could someday migrate into computers, their thoughts continuing after death as data and code, with memories, aspirations, and neuroses permanently intact. Graziano spoke with PAW about how he imagines a world transformed by such immortal electronic minds.

How could a human mind be put on a computer?

There are really two technological questions. First, can we get computers to be conscious in the same sense that we are? And the answer to that, I think, is yes, and much sooner than people think. You can build a computer that thinks it is conscious in the same way that we think we are and can attribute consciousness to the people around it in the same way that we do. 

When do you expect this to happen? 

Oh, I think within a couple of decades. This field of artificial intelligence is moving at blinding speed. With theories like the one I’m working on and some others, we’re looking at algorithms and code and engineering principles that programmers can really sink their teeth into. We’re going to get computers that act in ways that we look at and say, “They must be conscious.”

So that’s one aspect of it. There is a different technological question: Can you read a person’s brain in enough detail to migrate their specific mind to machine form? And that’s much further in the future. Understanding the neurology, the brain, and what to scan, and in what detail, and so on — that’s very difficult, but it’s not physically impossible. That, I figure, could be a couple of centuries from now. And that, I also think, is inevitable. 

You discuss a future in which life is primarily preparation to join people living in computers. How would that society develop? 

Paradoxically, this revolutionary new society would be an inherently conservative society. It’d be very hard to change and move forward. I don’t know if all change would stop, but we know that culture moves forward through new, fresh blood that comes in generationally. But the old generation would never leave, and never lose their intellectual capability or their stamina. They would accumulate political power. 

Would you want yourself to be put into a computer? 

Good Lord, no. I think I would get bored. It’d be like a video game with infinite lives.

This idea of programming consciousness into a computer relies on your attention schema theory of consciousness. In the theory, you propose that we each hold a mental model of our own attention, and that this is what we interpret as consciousness, correct?

Yes. The brain builds simple cartoonish models, or bundles of information, to describe things in the world, and to describe itself. This thing we call consciousness — our claim to a non-physical essence inside of us — this belief derives from a kind of simplified, cartoonish description of what’s really going on in there, which is much more detailed and physical.
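The idea of a simplified self-model can be caricatured in a few lines of code. This is purely an illustrative toy, not anything from Graziano’s book or the attention schema theory itself; every name in it is invented. The point it gestures at: the system’s detailed internal state is graded and mechanistic, while the summary it reports about itself is compressed and claims only a simple, unanalyzed relation to one thing.

```python
# Toy sketch (invented for illustration): an agent whose detailed internal
# state is a set of graded attention weights, but whose self-model reports
# only a simplified, "cartoonish" summary of that state.

class ToyAgent:
    def __init__(self):
        # The detailed, physical reality: many graded signals at once.
        self.attention = {"red_apple": 0.91, "ticking_clock": 0.07, "breeze": 0.02}

    def self_model(self):
        # The schema discards the mechanism. It keeps only "what I am
        # attending to," stated as a simple relation to a single item,
        # with none of the underlying numbers or machinery.
        focus = max(self.attention, key=self.attention.get)
        return f"I am aware of the {focus.replace('_', ' ')}"

agent = ToyAgent()
print(agent.self_model())  # -> I am aware of the red apple
```

The gap between the dictionary of weights and the one-line report is the analogy: the report is useful and mostly accurate, but it describes something far simpler than what is actually going on inside.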

You have said consciousness is probably present in a lot of other animals.

Anyone who owns a pet is certain that mammals have consciousness. And I think that’s true, but for these very specific reasons related to the value and practicality of this self-model. Birds, I think, probably have similar mechanisms.

Do they think about things the way we do? “Am I a good bird or am I a bad bird?”

When you say a bird — like a crow — is conscious, immediately some people think that’s nonsense, because the crow doesn’t know about mortality, and it doesn’t think about its place in the world and it doesn’t have ambitions. But that isn’t really the question of consciousness that psychologists and neuroscientists have tried to tackle. The question is not what are you conscious of, but how do you have a subjective experience of anything at all?

Many people are not comfortable thinking about consciousness as something that could run on a computer. Why do we want it to be something that really is not physical?

There’s probably lots of reasons, but the attention-schema theory itself explains why we have such a strong intuition that consciousness is non-physical. That’s the whole point of this self-model. It’s simplified for easy computation. When the brain models itself, necessarily it models itself as a kind of magic essence inside of us. That’s why we have this very powerful intuition that way. That’s why it’s very hard for people to believe that it’s like a computer subroutine.

For many people, humanity’s identity and uniqueness come from this “magic essence” of consciousness. Will we have to re-evaluate what and who we are?

Yes, I think it forces that reappraisal. I think people will look at these conscious machines in their lives everywhere and they will have to deal with the fact that whatever theory produced those machines must have been right. And so consciousness is something that’s scientifically understandable.

Interview conducted and condensed by Charles Wohlforth ’86

This is an expanded version of a story from the Dec. 4, 2019, issue.
