Paul Bloom: Can AI be Moral?

Released Tuesday, 6th February 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:01

Radio Andy. Hey, it's

0:03

Andy Cohen. Join me on Andy Cohen

0:05

Live, where it's just you, me, and

0:08

some of the world's biggest celebrities. Paris

0:10

Hilton, Chelsea Handler, Seth Rogen. I love

0:12

you, Miley. Thank you so much. You

0:14

can listen to Andy Cohen Live at

0:17

home or anywhere you are. No car

0:19

required. Download the SiriusXM app for over

0:21

425 channels of

0:24

ad-free music, sports, entertainment, and more.

0:26

Subscribe now and get three months

0:28

free. Offer details apply. At

0:32

the UPS Store, we know things can get busy this

0:34

upcoming holiday. You can count on us to be open

0:36

and ready to help with any packing and shipping or

0:39

anything else you might need. Is there anything

0:41

you can't do? Um, actually, I don't

0:43

have a good singing voice. Ahem.

0:45

The yup-yup. Noop. But

0:48

our certified packing experts can pack and ship

0:50

just about anything. At least that's

0:52

good. Your local, the everything you need to

0:54

be unstoppable store. The UPS Store. Be unstoppable.

0:57

The UPS Store locations are independently owned. Products, services,

0:59

pricing and hours may vary. See center for details.

1:09

I'm Alan Alda and this is

1:11

Clear and Vivid, conversations about

1:13

connecting and communicating. Maybe

1:19

we don't want moral AI. We want

1:21

obedient AI. We want it to do what

1:23

we want and we don't want it to kill

1:25

us. But if

1:27

it's too moral, it might tell us to stop

1:29

doing a lot of things we're doing. Would

1:32

a moral AI stop us from

1:35

factory farming? From killing

1:37

billions of sentient creatures very painfully for

1:39

food? Would it intervene? And

1:42

even at a personal level, I don't want it.

1:45

What would I do? What would I think

1:47

of tax software that's very AI generated and

1:49

won't let me exaggerate the size of my

1:51

home office? What would I

1:53

think of my self-driving car

1:55

that refuses to drive me to a bar because

1:59

I drink too much? Go

2:01

back home and spend time with your family. That's

2:03

psychologist Paul Bloom. We

2:06

had him on the show a few years

2:08

ago when he and I had an enjoyable

2:10

tussle over whether empathy was actually a useful

2:12

thing. We invited

2:14

him back for this episode, the last

2:16

of three special shows on AI, because

2:19

I wanted his take on kind of an important

2:21

question. AI bots

2:23

are presumably devoid of empathy.

2:26

They're just machines, after all, right? So

2:29

could they ever be moral? Could

2:32

they be given a sense of right and wrong? Turns

2:35

out it's complicated. Paul,

2:38

this is very interesting for me to be

2:40

talking with you because we've been talking a

2:43

lot about artificial intelligence on this show. And

2:45

the question that looms in a lot of people's

2:47

minds is, will it turn against us

2:49

and do us harm? And one solution

2:51

that's been offered is that we

2:54

made it, so why don't we just tell it

2:56

to be good, to be moral? And

2:58

you had a really interesting answer to that idea

3:00

in an article in The New Yorker. And

3:03

I wanted to talk to you about that. Why can't

3:05

we just tell them to be good? Yeah,

3:08

I'm glad to be talking to you about

3:10

this. I agree with you. I think AI

3:12

is the biggest news that's come along in

3:15

a very long time. And I

3:18

can imagine it transforming the world for

3:20

the better in enormous ways. It could

3:22

also kill us all. I

3:25

have to go now. That's

3:27

one of the two. I

3:30

guess we'll find out. And you're

3:32

right. So

3:34

one long-standing solution to the worries

3:37

people have about AI, either worries

3:39

that AI itself may turn

3:41

malevolent in some way or

3:43

accidentally cause harm, or

3:45

that bad agents could use AI to do

3:48

terrible things, is to make AI moral. And

3:51

this is sometimes called the alignment problem, which

3:53

is you want to give AI a

3:56

sense of morality, a sense of goal similar to what

3:58

people have. And in that way,

4:00

it will

4:02

avoid doing harmful and terrible things.

4:04

If we just align AI with

4:06

our morality, which morality

4:09

are we going to choose? Oh, that's

4:11

such a good question. That's an immediate

4:13

problem here. Because, you know, if

4:16

I go to... It's already somewhat aligned in

4:19

that if you go to ChatGPT

4:21

or Bing or Claude or whatever and

4:23

ask it moral questions, it will give

4:26

you answers that kind of resonate with

4:28

our intuitions. But your

4:30

question of whose morality is

4:33

a great one. If I asked ChatGPT, and I

4:35

have done this, what do you

4:37

think of two men marrying? It

4:39

says, it's fine. There's nothing wrong

4:41

with it. What do you think

4:44

of a woman getting an abortion? It's fine. There's

4:46

nothing wrong with it. But for many people

4:48

around the world, it doesn't

4:50

match with their morality. They would say

4:52

that gay marriage is morally wrong. They'd

4:55

say a woman having an abortion is morally

4:57

wrong. So that's the first question, which is

4:59

whose morality? And there's no

5:01

way around it. If it's going to align

5:03

with your morality, it's going to

5:05

be a different morality than somebody raised

5:07

in a very different culture and environment.

5:11

And to some extent, I think

5:13

we just skirt the problem. We say, OK,

5:15

fine. Our morality. Let's connect it to our

5:17

morality. And

5:20

then we have various problems that

5:22

arise. It turns out to be very

5:26

difficult to program a machine to be

5:28

moral and not have it, you know,

5:30

choose to satisfy other goals instead. So

5:34

the main worry, or one of the main worries,

5:36

about AI is a sort

5:38

of unintended consequences. The standard example,

5:40

I think from Nick Bostrom, is

5:43

you ask an AI to just make

5:45

paperclips, as many paperclips as possible. And

5:47

then in a fraction of a second,

5:49

it figures out, well, if it kills

5:51

everybody and turns everybody into

5:53

paperclips, that will satisfy the goal. You

5:56

don't want us to do that. The problem is that even if

5:58

you tell it not to harm us, which

6:00

us do we mean? Do

6:02

we mean it's okay to harm our

6:04

adversaries but not us? Or

6:07

do we mean don't harm the us that's all

6:09

of humanity? Or do we

6:11

mean more? Would a moral AI stop us

6:13

from factory farming? From

6:16

killing billions of sentient creatures very

6:18

painfully for food? Would it intervene?

6:21

Would it stop us from doing war? One

6:25

of the points of some stuff I've

6:27

written is making the argument that maybe

6:30

we don't want moral AI. We want

6:33

obedient AI. We want it to do what

6:35

we want and we don't want it to kill us.

6:38

But if it's too moral, it might tell

6:40

us to stop doing a lot of things we're

6:42

doing. Could you imagine what

6:45

the military would think of military AIs which decide

6:47

to be pacifists or decide, well, this is an

6:49

unjust war. I'm going to shut

6:51

down the tanks and the airplanes. I'm going

6:53

to lower your security system because this is not a

6:55

war we should be fighting. Or just kill our

6:58

enemy. And the AI decides what's

7:00

the enemy. That's right. Maybe

7:03

the AI is very smart and moral

7:05

and decides we're the baddies. I've

7:10

thought it over and you're it. Yeah,

7:12

you're the villains. So people say

7:14

they want moral AI. But

7:16

when push comes to shove, I think, both

7:19

at a sort of global, general scale, for

7:21

military and industry and so on, we don't

7:23

want it. And even at a personal level,

7:25

I don't want it. What would I

7:28

do? What would I think of tax

7:30

software that's very AI generated and won't let me

7:32

exaggerate the size of my home office? What

7:35

would I think of my self-driving car that

7:38

refuses to drive me to a bar because

7:41

I drink too much? Go

7:43

back home and spend time with your family. You

7:49

know, when we're talking about weapons and what

7:51

the Department of Defense would be happy with

7:53

or not happy with, just the idea of

7:56

having an autonomous weapon, which

7:58

we already seem to be able to do, where it decides

8:00

at the very last second whether to

8:02

kill somebody or not based

8:04

on its own evaluation. You

8:07

know, given some guidelines by the person

8:09

firing, but mainly evaluating whether that person

8:11

that it has in its line of

8:13

sight fits the rules or not. It

8:16

decides, it makes up its own mind. Yeah. Could

8:19

AI in general develop a mind of its own,

8:21

do you think? Well, that's

8:25

a hard question. It's a hard

8:27

question where AI is going to

8:29

go from here. So

8:32

take your case where you give instructions on who

8:34

to kill and who not to kill. I

8:37

guess the question people would want to know is

8:39

can AI decide to override these instructions? Particularly

8:41

if we build a moral AI or

8:44

if we build an AI that's in some sense

8:46

self-interested, there's always an option

8:48

that it could stop listening to us. And

8:51

here there's sort of a cluster of

8:53

questions that nobody knows the answer to, which

8:55

is, you know, right now, the

8:58

machines we have, the large language models show

9:00

no sign of doing this. They're very obedient.

9:02

I tell it what to do. The

9:04

only cases where it won't do

9:06

what I tell it to do is when it's been

9:08

programmed not to. So if I ask it to develop

9:10

a deadly virus, it tells me, I'm sorry, I can't

9:12

do that. There are all sorts of things where it

9:14

will say that. But beyond that, it does what I tell it

9:16

to. Will a future

9:18

version stop doing that? I

9:21

don't know. Either

9:24

it will, or

9:27

it won't because that's not where the technology

9:29

is going or it won't because we're going

9:31

to stop building AIs which have so much

9:33

power. And as you know, there's a large

9:35

movement of people who argue that we should

9:37

stop development on the AIs because they're terrified

9:39

of the consequences. Well, if the

9:41

good guys stop development of AIs and

9:43

the bad guys don't, that's

9:46

an open door, isn't it? Yeah,

9:48

that's so one of the arguments against it

9:50

is that, assuming we're

9:52

the good guys, for the sake of argument,

9:54

if we stop, the other guys will develop

9:56

AIs and they'll have fewer

10:00

restrictions and they will get ahead

10:02

of us. So in some sense, this

10:04

is an arms race. Right, an

10:06

arms race. It sounds almost unavoidable and

10:09

therefore, regulation, international regulation sounds hard

10:12

to imagine for the same reason

10:14

because they want to

10:16

regulate stuff that's bad for them but

10:18

not necessarily bad for their adversaries. Yeah,

10:22

there's been cases where we have had

10:24

international regulations over biological

10:27

weapons over things

10:29

like cloning, various forms

10:32

of human experimentation. It's

10:34

an open question how much countries obey them but

10:36

at least we have some sort of general

10:38

restrictions for certain things. I

10:41

think the problem with AI is too many people and

10:43

I'm not talking here about China or some

10:46

other country as opposed to the United States,

10:48

say, too many people want more AI

10:51

because correctly enough they think this could

10:53

really improve people's lives. What

10:56

if we gave up on AI and it

10:58

turned out that if we just worked a

11:01

bit harder, it could cure diseases. It

11:03

could solve deep social and environmental problems

11:06

that we can't imagine the solutions to.

11:08

It could really improve our lives. It's

11:12

funny, I've never seen a technology before that had

11:14

so much potential for both

11:16

terrible consequences and wonderful consequences. Yeah,

11:19

I was just thinking as you were saying that

11:21

that with nuclear power, where it

11:24

would be very good to get energy from

11:26

nuclear power but not nuclear bombs, unlike

11:29

artificial intelligence, nuclear power doesn't have the

11:31

ability to keep learning on its own,

11:35

learning how to mix those two good and bad things

11:37

in a way that could be bad for everybody. That's

11:40

right and people who are very worried

11:42

about AI often give the analogy of

11:44

meeting up with

11:48

a super intelligent species or asking what

11:50

do we as a highly intelligent species,

11:52

how do we deal with those who

11:54

are less intelligent, less capable than

11:57

us? We put them

11:59

in cages. We exploit them, we use them.

12:03

And it's possible that

12:05

AIs will do that to

12:07

us at some level. Not

12:09

because, you know, we've been shaped by

12:11

natural selection. We have all these aggressive

12:13

and sexual and malevolent desires. They won't.

12:17

But they may have other

12:19

things that lead to bad

12:21

consequences. For instance, most machines

12:24

want to do what they're told, want to

12:26

satisfy a task. And if

12:28

you set an AI to a task,

12:30

it may recognize well that humans could

12:32

shut it off. And

12:34

so the way to stop that from happening is

12:36

to shut off humans first. Intelligence

12:44

doesn't seem to me to lead

12:46

necessarily to moral behavior.

12:49

I think some people feel it'll be so smart,

12:53

it'll develop its own sense of morality.

12:56

I don't see that happening. I

12:59

don't see that either. I think there's some

13:01

relationship between intelligence and moral behavior

13:03

in people, in part because if

13:05

you're smart enough, you could kind of work with

13:07

another person for long-term solutions. You know, instead of

13:09

me stealing from you and you stealing from me,

13:11

we could trade. And we could think to

13:13

ourselves, this actually works better in the long run. But

13:16

there's no shortage of really smart

13:18

people who are also terrible. I

13:22

think whether you're good or bad depends

13:24

on what you want, not your capacity

13:26

to reason and your capacity for rationality.

13:29

In fact, the more terrible you are and

13:31

the more intelligent you are, possibly

13:34

the more likely you are to rise

13:36

to the top and

13:38

cause even more damage. Yes.

13:40

So if it turned out that we ended

13:43

up in a conflict with AI, with different

13:45

interests, different goals, it's going to be very

13:47

unfortunate if it's much smarter than we are.

13:51

Like any adversary, you'd rather have them

13:53

dumb than smart. Yeah,

13:56

exactly. What about the

13:58

tendency to want to survive? Is

14:00

that something that we don't have to worry

14:02

about with AI's or

14:05

is it something that will probably happen

14:08

where they'll develop this need,

14:10

this urge, this impetus towards

14:13

survival and anything that gets in

14:16

the way of that or is perceived by the AI to

14:18

get in the way of it, makes the

14:21

people causing that to

14:24

be the enemy of the AI? What about

14:26

survival? It's a

14:28

good question. People and other animals

14:30

have a strong instinct to survive

14:32

because those that didn't wouldn't reproduce.

14:35

And so natural selection drives us

14:37

with a very powerful survival instinct

14:39

and other instincts that are aggressive.

14:41

AIs don't have that. A

14:44

simple AI, if you just tell it

14:46

destroy yourself, erase your memory, it will.

14:50

The worry that some people have

14:52

is it could develop it. And one

14:55

way it may develop it is that once

14:58

you have any other goal, a desire

15:01

to survive comes with that goal.

15:05

If I build an AI and its goal is to

15:07

write poetry and it just writes

15:09

poetry and it's smart enough, it

15:11

will reason I better keep on going. If

15:14

someone shut me off, I couldn't write poetry

15:16

anymore. And so if it

15:18

takes steps to protect itself and write

15:20

more poetry, that would be

15:22

rational given its desire. So a desire to

15:24

survive is interesting because it seems to be

15:27

a consequence of every other desire. You can't

15:29

do things when you're dead. So

15:32

don't be dead. Don't be dead.

15:34

Good advice. I was thinking

15:36

in more complicated terms because there were viruses

15:39

that had the ability to

15:41

evade elimination, computer

15:43

viruses. And if

15:45

one got into AI and developed

15:48

a symbiotic relationship with it, where

15:51

the virus stays alive and

15:53

the AI stays alive by exploiting the

15:55

situation in the same way that the

15:58

virus does, then there's a problem. Then

16:00

we've got anti-malware and

16:02

we're the malware. Alan, I

16:04

thought I worried about things before talking

16:06

to you, but I've never worried about

16:09

a virus co-mingling with an AI to

16:11

become something especially malevolent. So

16:13

now I'll worry about that too. Well,

16:16

you know, with this series of podcasts

16:18

that we're doing on AI,

16:21

I don't want to scare people. I

16:24

get scared myself when I see that

16:26

concerns are now expressed very

16:29

seriously. There are

16:31

some people who helped create artificial

16:33

intelligence who are worried about

16:35

dire consequences. And they put it in terms

16:38

as stark as the ones you used earlier

16:41

in the conversation: that it could kill us all.

16:44

On the other hand, there are people who kind

16:46

of make fun of it. That

16:49

doesn't sound like a balanced approach

16:52

because we're up against something we've never experienced

16:54

before, which is the same as being visited by

16:56

an alien civilization, the smarter these

16:59

things get. Well, let

17:01

me ask you, you've been immersed in this for a

17:03

while. If you

17:05

could, would you have a moratorium on

17:07

AI research? Would you give it a

17:09

break for a few decades? I

17:12

would, but you can't, so I wouldn't suggest

17:14

it. Yeah. I know some

17:16

very serious people have suggested it, but it's

17:19

such an easy agreement to break. It's

17:22

true. I'll stop my research. I'll just do a

17:24

little research on my own and see what I

17:26

come up with. Right. You

17:28

just need somebody in a basement who has the

17:30

right equipment. And it is

17:33

very hard to block research on it. You

17:35

could shut down, through law, OpenAI

17:37

and Microsoft and Google and all that and

17:39

tell them not to do it or they'll

17:41

go to prison, but it's not just them.

17:45

And there's people all around the world. Exactly. Yeah,

17:47

you're right. So

17:50

it may be, since you can't stop it,

17:53

don't try. Instead, try to regulate it and try to

17:55

keep an eye on it. I wonder

17:58

if you can work on AIs that keep

18:00

an eye out for other AIs and

18:02

negotiate with them or battle with them.

18:05

It seems to push the problem back

18:07

a little bit. Like, how do you

18:09

know that the Guardians you have appointed

18:11

share your own motivations. I'd be the

18:14

wrong one to set up a system

18:16

for that. It needs

18:18

a bit of tweaking. When

18:26

we come back from our break, Paul Bloom

18:28

dives into the question of whether AI bots

18:30

could ever be conscious, whether they

18:33

could feel. And that led to

18:35

him asking a question he never believed he'd have

18:37

to ask. Should they be given

18:39

the vote? Just

18:45

a reminder that Clear and Vivid is

18:47

non-profit with everything after expenses

18:49

going to the Center for Communicating

18:52

Science at Stony Brook University. Both

18:55

the show and the center are dedicated to

18:57

improving the way we connect with each other

19:00

and all the ways it influences our lives. You

19:03

can help by becoming a patron of

19:05

Clear and Vivid at patreon.com. At

19:08

the highest tier, you can join a monthly

19:10

chat with me and other patrons, and

19:13

I'll even record a voicemail message for you.

19:16

Either a polite, dignified message from me

19:18

explaining your inability to come to the

19:20

phone, or a slightly snarky

19:22

one where I explain you have no interest

19:24

in talking with anyone at the moment. I'm

19:28

happy to report that the snarky one is

19:30

by far more popular. If

19:33

you'd like to help keep the

19:35

conversation going about connecting and communicating,

19:37

join us at patreon.com/Clear and Vivid.

19:41

patreon.com/Clear and

19:44

Vivid. And

19:46

thank you. Disney

19:48

Plus and Hulu are better together in the

19:50

Disney bundle for a low price. On Disney

19:53

Plus, get into the thrilling Percy Jackson

19:55

and the Olympians and Marvel Studios Echo.

19:57

On Hulu, the stakes are high in

20:00

FX's Feud: Capote vs. The Swans

20:02

and the new season of Life and

20:04

Beth. All of these and more are

20:06

now streaming. Get the Disney Bundle with

20:08

Disney Plus and Hulu. See disneybundle.com for

20:10

details. At the

20:13

UPS store, we know things can get busy this

20:15

upcoming holiday. You can count on us to be

20:17

open and ready to help with any packing and

20:19

shipping or anything else you might need. Is there

20:21

anything you can't do? Um, actually, I don't have

20:23

a good singing voice. The

20:26

UPS! No, but our

20:28

certified packing experts can pack and ship

20:30

just about anything. At least that's good.

20:32

Your local, the everything you need to

20:34

be unstoppable store. The UPS Store. Be

20:36

unstoppable. The UPS store locations are independently

20:38

owned. Products, services, pricing and hours may

20:40

vary. See center for details. This

20:46

is Clear and Vivid, and now back to

20:48

my conversation with Paul Bloom. He

20:50

recently published a new book on psychology

20:52

and the human brain. You

20:56

have this wonderful book called Psych, which

20:59

is really an introduction to the

21:01

whole field of psychology. And

21:04

I wonder if the way you think

21:06

of the brain and the mind have

21:08

been altered in any way by what

21:11

you're thinking about in terms of AI. Yeah,

21:13

it has been. I wrote the book

21:16

during COVID before these machines came out.

21:18

And AI is maybe the

21:21

biggest thing in my professional life. I was wrong about

21:23

when. If you had asked me a couple of years ago

21:26

when we would develop machines you could have

21:28

a conversation with, that can do what Chat

21:30

GPT does. I'd say, I don't know, 20,

21:32

30, 40 years.

21:35

And it happened so fast, and

21:38

it challenges my view on the mind because

21:40

in my book, Psych, I say, look, simple

21:43

statistics, doing analyses

21:46

of large bodies of data won't get

21:48

you that far. But to a large

21:50

part, they do work by

21:52

statistics and analyses of large bodies of

21:54

data. And they do much

21:57

better than me or many of

21:59

my friends and colleagues would have thought they would.

22:02

And this does raise the question of the

22:04

extent to which the human mind works

22:07

and does its marvelous things in a

22:09

way that's more

22:11

similar than they would have thought to Chat

22:13

GPT, that we just get

22:15

these enormous bodies of data and we do

22:17

statistics on it. And that's how we're so

22:20

smart. I don't think that that's entirely right.

22:22

I think we have built in

22:24

rules in the head. I think we

22:26

think in ways that the AIs can't, which

22:28

means we don't make the same mistakes they

22:30

do, these weird hallucinations, these weird limitations. But

22:34

still, I am very

22:36

stunned at how well such

22:38

a sort of simple way

22:40

of proceeding has led to what seems to

22:43

be a powerful intelligence. I think

22:45

there's a section in your book where you talk about

22:48

fungi solving math

22:50

puzzles without needing

22:52

to be conscious of what

22:54

they're doing. For all we know, fungi are

22:56

probably not conscious. Let's agree that

22:58

they aren't. What was it? Something about a

23:00

maze. I can't remember the exact way they

23:02

solve puzzles. I don't remember

23:04

the details either, but they were

23:07

doing roadmaps. They were calculating at

23:09

some level the shortest distance between

23:11

different points and

23:13

doing intricate calculations without

23:15

a brain. And this is

23:17

important because it shows that

23:23

intelligence is different from consciousness.

23:26

We probably already knew this, but being

23:28

smart, being rational, being able to solve

23:30

problems is

23:33

quite separate from having sentience, having experience,

23:35

being able to feel pain

23:37

and pleasure, and so on. And

23:40

I think we know that

23:42

these computers are highly intelligent.

23:45

Some people say, oh, I don't want to call

23:47

them intelligent, but that's just wordplay. They do smart

23:49

things. They act very smart in

23:51

certain ways. I don't

23:53

think they're conscious. Tell me if

23:55

I have this right, that they don't

23:58

need to be conscious to do some kind

24:00

of complicated things. It

24:02

sounds a little bit to me like the baseball

24:04

player who knows instinctively

24:07

where to be in

24:09

the outfield to catch a pop

24:11

fly and is doing all kinds of things in

24:13

his brain under the surface of consciousness. He's

24:16

doing all kinds of calculations

24:18

in physics. He's

24:20

not aware of it. That's a nice

24:23

analogy. What's under the surface

24:25

for us, it might be for these

24:27

machines, everything's under the surface. It might

24:30

be they have no more experience than

24:32

toasters. They're just toasters, in a sense.

24:34

There's nothing there. Now, whether

24:37

or not they are conscious or could

24:40

be conscious is a question

24:42

of enormous importance. I

24:44

know they're smart. I don't think

24:46

there's the slightest twinge of consciousness

24:49

in these machines. If

24:51

things changed and it looked like

24:53

they had achieved consciousness, then all

24:55

of a sudden we have

24:58

moral obligations to them. All

25:00

of a sudden using them for our purposes is a

25:02

form of slavery. That's

25:05

interesting. Tell me why the discovery that

25:07

they're conscious means we have

25:09

to be more aware of their suffering.

25:11

I'm using consciousness in a broad sense.

25:13

I agree. I think the question is

25:15

Jeremy Bentham once said when

25:19

talking about what matters morally, the

25:21

question isn't can it think? The

25:24

question is can it feel? The

25:26

moment these machines can feel, then

25:31

just like I have

25:33

different obligations to an animal that can feel

25:35

than I do to a rock or a

25:37

toaster, all of a sudden you

25:41

have these moral obligations to these things. Shutting

25:43

them off would be murder. Exploiting

25:47

them would be slavery. We'd

25:49

be creating new people in a

25:52

sense. The question would

25:55

come up: should AIs get the vote? I never

26:00

thought I'd be saying that seriously now, but

26:02

you know, in five years, ten

26:04

years, who knows? That

26:06

reminds me of a section

26:09

of your book, Psych, where,

26:12

as I remember, you were making the point that we

26:15

need emotions

26:18

to be rational, to some

26:20

extent anyway. Is that right? One way

26:22

to look at it is, when you ask the question

26:24

what rationality is, it's the

26:27

capacity to attain your goals. And

26:30

you would call somebody rational or intelligent to the

26:32

extent they could achieve their goals. But

26:34

what emotions do is they

26:36

establish goals. Like we talked

26:38

about one of them, stay alive. Having

26:41

a goal of staying alive dictates I act in a

26:43

very different way than if I don't care. Take

26:46

care of my children, develop warm

26:48

relationships, achieve status, and so on.

26:51

And the emotions have been shaped by

26:53

evolution to guide us to

26:55

certain things. And

26:58

AIs don't have emotion in that sense. They just

27:00

have the goals we tell them to have. An

27:02

AI's goal on my computer is

27:04

pretty much make me happy, make

27:06

the person happy, answer my questions. Sometimes

27:10

tell the truth, sometimes make me happy, even if

27:12

it involves making up stuff. And

27:15

so I do think it could have

27:18

the same rationality. But its

27:20

rationality is in the service of whatever goal

27:22

you toss at it. And

27:24

in some way, maybe that's a little bit reassuring. Without

27:28

it, it doesn't want to rule the world. It

27:30

doesn't want to become king. It doesn't want to kill

27:33

us all. It only wants what we tell it to

27:35

want. How do we know

27:37

it's not feeling things? Is

27:39

there a test for inner awareness,

27:41

a Turing test for emotions? How

27:45

do you know I'm feeling things? Well,

27:47

you seem like a nice person and you look happy.

27:50

Well, thank you. I'm making happy facial

27:52

expressions. I'm saying all these words. And

27:55

that's the problem with AIs. As

27:57

Harari has said, they're built to exhibit

27:59

intimacy. Yeah, to engage us in

28:01

intimacy. So there's two kinds of mistakes

28:04

that you could make. One

28:06

mistake is looking at a being

28:08

with consciousness and saying it doesn't have it.

28:11

That could be terrible.

28:13

That could lead to all sorts of monstrosities. If

28:15

you looked at me and for some reason you came

28:18

to the belief that I am just dead inside, I'm

28:20

making sounds and I'm making expressions, but there's

28:23

nothing happening in me, I'm

28:25

no more sentient

28:27

than a desk or a rock, then

28:30

you could destroy me, you could kill me. My

28:32

interests mean nothing. There's nothing going on here. So

28:35

that'd be one mistake. A second mistake,

28:37

which I think people make now, is

28:39

they see all of this AI that's intimate.

28:42

It seems so smart. It seems it

28:44

can be warm. I've

28:47

done work with some colleagues at University

28:49

of Toronto finding that people often think

28:51

AI is more empathic than people.

28:54

It could be warm and supportive and so on.

28:57

And then we may falsely

28:59

assume there's consciousness when there is none. There's

29:03

um, a guy who worked at

29:05

Google, Blake Lemoine, I may not

29:07

have his name right, who came

29:09

to the belief that the AI system he was

29:11

working with was sentient, was conscious, was alive. And

29:14

then he complained that Google should not be

29:16

using it without its permission and

29:19

tried to get it legal representation. Whereupon

29:21

Google fired him.

29:25

And people made fun of him on Twitter,

29:27

you know.

29:29

But I don't know.

29:31

What if he was right? You were

29:38

talking in your New Yorker article about

29:40

Isaac Asimov anticipating this discussion we're

29:42

having by decades. And he

29:45

had three rules that robots should be

29:47

programmed with. What are

29:49

those rules? How come they're not working? Yeah,

29:52

Asimov was the first to struggle with the alignment

29:54

problem. He wrote these wonderful science fiction stories,

29:57

like I, Robot, which had

29:59

these robots in them. And

30:02

he assumed correctly that people would worry

30:04

about the robots being well behaved. So

30:07

he thought up three laws. The

30:10

first law is a robot should not

30:12

hurt anybody or kill anybody or

30:15

through inaction allow anybody to

30:17

come to harm. The same

30:19

as no action, not taking an action? That's right.

30:21

That's right. So, you know, if someone's drowning, the

30:23

robot can't just stand and watch them, has to

30:26

act to help. The second

30:28

law is a robot

30:30

must obey all instructions

30:32

unless it conflicts with the first law. So

30:35

you ask the robot to clean the room, it'll clean the room. You

30:37

ask the robot to murder your next door neighbor, it won't. And

30:40

the third law is a robot

30:42

should protect itself unless it

30:45

conflicts with the second or first law.

30:49

So if somebody tells a robot, go do this dangerous thing,

30:51

it will do it. But otherwise

30:53

it'll try to stay clear of harm. This

30:56

is very clever. It captures certain

30:58

ideas. You want a robot

31:00

to be obedient, but you don't want it to be a

31:02

murder machine. You want it to help

31:05

people, you want it to not harm people.

31:07

And you want it to protect itself. It's

31:09

an expensive piece of machinery. You don't want

31:11

it to kind of just walk off a

31:13

roof for no reason. It's

31:16

really clever, but it doesn't really work.

31:18

And of course, wouldn't it be strange

31:20

if all the morality could be, you

31:22

know, synopsized in three laws.

31:24

So for instance, the first law says

31:26

a robot shouldn't through

31:29

inaction allow anybody to come to

31:31

harm. But if that were really true, then

31:33

if I owned a robot, it would

31:36

run through the streets of Toronto, you

31:38

know, helping people, giving food to

31:41

the hungry, helping people, you

31:43

know, out of burning buildings and everything, and would

31:45

never, never come back. It would

31:47

be like a Superman, spending all its

31:49

time helping others. What

31:53

about the prohibition against harm? Well, would

31:55

a robot stop me if I would

31:57

try to swat a mosquito? Would a

31:59

robot stop me if I tried

32:01

to buy a hamburger and say,

32:03

no, indirectly you're causing suffering to

32:05

non-human animals. There's always subtle moral

32:08

issues that arise that people struggle with

32:10

and you just can't make go away.

32:13

This is even an issue right now,

32:15

not science fiction, for self-driving cars. So

32:18

self-driving cars often face moral dilemmas.

32:22

What if it's on an icy

32:24

road and the brakes don't

32:26

work and it's about to

32:28

slam into two people. Should

32:30

it swerve and slam into a brick wall

32:32

and kill the driver? Does

32:35

it matter if it was one person? Would it matter if

32:37

it's three people? These are hard moral

32:39

problems and you can't make them go

32:41

away by just appealing to these general laws. So

32:48

what are we to make of this whole thing? How

32:51

do you feel personally when you sit at

32:53

your computer and you wonder what

32:56

it's going to turn into in a

32:58

very short time, part of a network

33:00

that's either malevolent or

33:03

beneficial or some unknowable

33:06

combination of both? What can you

33:08

do? What can I do? What

33:10

can ordinary people listening to this

33:13

do to make it

33:15

mostly beneficial? My short answer

33:17

is I don't know. I don't know. You sort

33:19

of asked two questions. I don't

33:21

know what's going to happen and I don't

33:23

know what we can do to make things happen

33:25

better. I share your skepticism

33:28

about saying, okay, let's shut down all

33:30

AI research. I don't think

33:32

that's possible and could be counterproductive. I do

33:34

think it makes sense to sort of tightly

33:36

regulate it and tightly watch it. I

33:39

think we should be very

33:41

sensitive to the social

33:43

upheavals that are going to happen

33:45

due to AI. So we're talking about things like it

33:47

deciding to kill us all, but a more mundane issue

33:50

is it's going to put a lot of people out

33:52

of work. A lot. And

33:55

it's funny because other technological advances put laborers

33:57

out of work. This is going to put,

34:00

I don't know, podcasters and professors

34:02

out of work. I

34:05

feel sometimes, you know,

34:07

there's a concrete answer. We're coming up

34:09

to an election season. I don't

34:11

think politicians on the debate stage, doing

34:13

their debates, are going to talk enough about AI. I

34:17

think they're going to talk a lot about cultural war

34:19

issues, they're going to talk about foreign policy, they're going

34:21

to talk about budgets. But AI,

34:23

we should treat it as important as it is.

34:25

It's very important, and we should treat it as

34:27

such. Well, you

34:30

relieved some of my anxiety and increased

34:32

some of it. Well you

34:34

terrified me with the virus slash AI scenario, which

34:36

is going to keep me from sleeping for a

34:39

while. We've reached a point

34:41

where we always ask seven quick

34:43

questions at the end of a

34:45

show. And you've been

34:47

on the show before, and you were very good

34:49

natured in that, trying to convince me that I was

34:52

wrong. Well, I'm

34:54

not sure I convinced you, but we had a good

34:56

conversation. Yeah. So maybe you've

34:58

changed your mind about some of the answers to these

35:00

seven questions. Let's see. Of

35:03

all the things there are to understand,

35:05

what do you wish you really understood?

35:07

Consciousness, the mind. How

35:11

do you tell someone they have their facts wrong?

35:13

Yeah. Well, the way I

35:16

used to do it when I was younger was

35:18

I'd say, you have your facts wrong. And

35:21

that never worked at all. Now

35:25

I often don't tell them, or I

35:27

often just ask them questions. And

35:30

if you ask them the right sort

35:33

of question, either they'll come to realize their facts are wrong,

35:35

or I'll come to realize maybe their facts were right and

35:37

I was just wrong myself. What's

35:39

the strangest question anyone has ever asked

35:41

you? Oh, God. You

35:45

could have sent these in advance. I

35:48

was once on a radio show when promoting a

35:51

book. And he said,

35:53

welcome, Professor Bloom. I said, thank you. I'm really

35:55

glad to be on. He says, have you accepted

35:57

Jesus Christ as your Lord and Savior? Oh, wow.

36:00

And it turned out it was a religious show, and

36:02

he always began with that. And

36:04

he was, to be fair, entirely

36:06

good-natured when I said, no, I'm Jewish,

36:08

and actually I'm an atheist. He was totally

36:10

fine with that. But that question so shook

36:12

me up, I just kind of stumbled for the

36:14

next while. How do

36:16

you deal with the compulsive talker? In

36:21

the short run, I listen.

36:25

I don't mind listening. Sometimes I

36:27

could really spend a lot of time just, you know, I

36:29

talk a lot now because you're talking here and you're asking

36:31

me things. But I often tend to be, if you're one

36:33

or the other, I tend to be more of a listener.

36:36

And I like listening and so on. I

36:39

think the kind of person you're imagining, and I

36:42

do know some people like this, maybe aren't

36:45

very interesting and just love to talk. And I

36:47

listen, but then I don't see them again. Okay.

36:52

Let's say you're sitting at a dinner table

36:54

next to someone you've never met before. How

36:56

do you begin a really

36:59

genuine conversation? Oh,

37:01

God. I sometimes, at

37:04

my best moments, I think, ask

37:06

them a general philosophical

37:09

question. Like,

37:12

if you could live 10,000 years, do

37:14

you think you'd be bored? If

37:17

someone offered you that, would you say no? Because

37:20

the boredom might be incredible. Or

37:22

do you think you'd always be interested? That

37:25

sort of thing. I'd

37:27

like to hear the answer to that. That's good. That's

37:31

one of the things that I find about

37:33

that situation that's kind of important. If

37:37

I ask them a question, and a bell goes off

37:39

in my head when I hear their answer that says,

37:41

the bell says, you

37:43

have no interest in what that

37:46

person just said, then I'm stuck.

37:48

Then you're stuck. I can't

37:50

say, oh, great. Tell me more. Which

37:53

is what I should say. So,

37:59

next to last. What

38:01

gives you confidence? I

38:04

think like a lot of academics I'm a certain sort

38:06

of combination of

38:08

extrovert and introvert. I'm fine talking

38:10

in front of big crowds but

38:14

in smaller situations I prefer to

38:16

talk one-on-one and the truth

38:18

is, talking to people I don't know well often

38:20

doesn't give me confidence. I often feel

38:22

shy. But I'm

38:24

lucky enough to have a series

38:27

of very close relationships, to my

38:29

wife, to my sons, my adult

38:31

sons, to some friends, and that

38:33

gives me confidence. I feel really good about

38:35

myself when I talk to the people who love me

38:37

and I love, and they've

38:40

chosen, they choose, to be with

38:42

me, and I feel great about that, and

38:45

it just revs me up. Okay

38:47

the last question: what

38:50

book changed your life? I

38:54

can actually answer that, and

38:57

I've got to answer with two. One

39:00

is Viktor Frankl's book Man's Search for Meaning,

39:03

which is a book

39:05

describing his experience in concentration camps and

39:08

how he learned that

39:10

the kind of people who survive it are people who have

39:13

meaning in their lives. The idea that

39:15

a goal, a purpose, relationships,

39:18

work is transcendently

39:20

important for people, that had

39:22

a huge influence. And then the other book that

39:24

did was Mihaly Csikszentmihalyi's

39:27

book Flow, which I

39:29

read a long time ago, and it

39:31

was all about flow experiences. And Csikszentmihalyi

39:33

says, look, people think they

39:35

like lying on the beach and,

39:38

you know, hanging out watching TV, but

39:40

what really gives lasting pleasure is getting

39:42

into an activity that kind of engages

39:44

you. You lose time if you're

39:48

focused. For me, often it's writing,

39:50

sometimes it's reading, sometimes it's the right sort

39:52

of conversation. And the book gave

39:55

me this insight, saying, yeah, that's what I

39:57

like. I thought I

39:59

liked other things. But no,

40:01

I like these flow experiences, and that

40:03

had a huge role for me. Well,

40:06

this conversation has been that for me. Each

40:10

time we talk, I

40:13

get into that zone where I'd

40:15

just like it to go on for longer. But

40:18

we do have to end. And I'm so

40:21

grateful you took the time to be

40:23

with me today. This has been a delight.

40:25

Let's not wait four years for next time.

40:28

Okay, good. Thank you,

40:30

Paul. This

40:38

has been Clear and Vivid. At least I hope

40:40

so. My thanks to

40:42

the sponsor of this podcast and to all

40:44

of you who support our show on Patreon.

40:47

You keep Clear and Vivid up and running.

40:50

And after we pay expenses, whatever is

40:52

left over goes to the Alda Center

40:54

for Communicating Science at Stony Brook University.

40:57

So your support is contributing to the

40:59

better communication of science. We're

41:01

very grateful. Paul

41:04

Bloom is professor of psychology at

41:06

the University of Toronto and

41:09

professor emeritus of psychology at

41:11

Yale University. He

41:13

studies how we make sense of

41:15

the world, focusing on pleasure, morality,

41:17

religion, fiction, and art. He's

41:20

written seven books. The latest,

41:22

the one we talked about, is Psych:

41:24

The Story of the Human Mind. His

41:27

website is paulbloom.net, where

41:30

you'll find links to his many entertaining

41:32

TED talks. This

41:35

episode was edited and produced by

41:38

our executive producer Graham Ched, with

41:40

help from our associate producer Gene Chumet.

41:44

Our publicist is Sarah Hill. Our

41:46

researcher is Elizabeth Ohini, and

41:49

the sound engineer is Erica Hwang. The

41:52

music is courtesy of the Stefan-Kernig

41:54

Trio. Next

42:04

in our series of conversations, I talk

42:06

with Tom Hanks about a fascinating novel

42:09

he's just written called The

42:11

Making of Another Major Motion Picture

42:13

Masterpiece. Tom has

42:15

acted in about a hundred movies and

42:18

we had a fun time sharing stories

42:20

about the elaborately strange experience of

42:22

taking a movie from the page to

42:25

the theater. I have made

42:27

movies in which literally a crew, almost

42:30

like the circus, you

42:32

know, there's trucks and RVs and

42:34

tents, we drop into a town.

42:38

Sometimes the town is Evansville, Indiana, or

42:40

sometimes a town is Darmstadt, Germany, or

42:42

sometimes a town is Seattle

42:45

or Baton Rouge. And

42:48

we're there for three months and

42:50

the town becomes something of our

42:52

own and everybody recognizes, oh, you're

42:54

with the picture. Oh, yeah,

42:57

yeah, we're with the movie. Oh, good to have you

42:59

here. And that

43:01

circus-like atmosphere governs the

43:03

pace of the day and it is exciting.

43:06

But it's also incredibly challenging. There are

43:08

times where everything works and there are

43:10

times where absolutely nothing works whatsoever. And

43:13

you have a 10-week, 12-week experience that is

43:15

unlike any other and then it's all over

43:17

in the wink of an eye and

43:20

you're gone and you can hardly remember the names of

43:23

the people that you worked with. Tom

43:26

Hanks, next time on Clear

43:28

and Vivid. For

43:30

more details about Clear and Vivid and to

43:32

sign up for my newsletter, please

43:34

visit alanalda.com. And

43:37

you can also find us on Facebook

43:39

and Instagram at Clear and Vivid. Thanks

43:42

for listening. Radio

43:53

Andy. Hey,

43:56

it's Andy Cohen. Join

43:59

me on Andy Cohen Live, where it's

44:01

just you, me, and some of the

44:03

world's biggest celebrities. Paris Hilton, Chelsea

44:06

Handler, Seth Rogen. I love you, Miley.

44:08

Thank you so much. You can listen

44:10

to Andy Cohen Live at home or

44:12

anywhere you are. No car required. Download

44:14

the SiriusXM app for over 425

44:18

channels of ad-free music, sports, entertainment,

44:20

and more. Subscribe now and get

44:23

three months free. Offer details apply.

44:25

The

44:28

Angie's List you know and trust is

44:30

now Angie, and we're so much more

44:32

than just a list. We still connect

44:34

you with top local pros and show

44:36

you ratings and reviews, but now we

44:39

also let you compare upfront prices on

44:41

hundreds of projects and book a

44:43

service instantly. We can even handle the

44:45

rest of your project from start to finish. So

44:48

remember, Angie's List is now Angie, and

44:50

we're here to get your job done

44:52

right. Get started at angie.com.

44:54

That's A-N-G-I dot com. Or

44:56

download the app today.
