From the archive and in the news: How to cut through the 'noise' that hinders human judgment

Released Saturday, 30th March 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

Hi everyone, it's Meghna here with

0:02

a special from the archives podcast

0:04

drop from On Point. Daniel Kahneman,

0:06

one of the world's most celebrated

0:09

economists, died this week at the

0:11

age of ninety. The Nobel prize

0:13

winner was one of the pioneers

0:16

in a field that later became

0:18

known as behavioral economics. His groundbreaking

0:20

work showed that human intuitive reasoning

0:23

is flawed in predictable ways,

0:25

the predictability being the breakthrough.

0:27

Kahneman was also author

0:29

of Thinking, Fast and Slow,

0:31

a highly influential book that

0:33

debunked a long-cherished belief

0:35

in economics that humans are

0:37

rational actors. His work, along

0:39

with others showed both qualitatively

0:41

and quantitatively that no matter

0:44

how much economists want to

0:46

believe it, human decision making

0:48

is not a rationally driven

0:50

process. Kahneman

0:52

was also a Holocaust survivor. His

0:54

family was forced to wear the

0:57

Yellow Star of David in Occupied

0:59

France before their escape. He later

1:01

frequently said that his experience of

1:03

the holocaust was one of the

1:05

things that drove his powerful interest

1:07

in understanding the human mind. In

1:10

June of Twenty Twenty One we

1:12

spoke with Kahneman and

1:14

his coauthor Olivier Sibony about

1:16

their latest book. It's called Noise:

1:18

A Flaw in Human Judgment, and

1:20

it's about how even though we're

1:22

told to trust our judgment, that

1:25

judgment is way more variable

1:27

than we think it is.

1:29

It's also about how that

1:31

variability or noise influences almost

1:33

every part of our lives.

1:35

So today from the archives

1:37

we offer you our conversation

1:39

with Daniel Kahneman. I hope

1:41

you enjoy. This

1:48

is On Point. I'm Meghna Chakrabarti.

1:50

Back when I was nineteen

1:52

years old, I suddenly suffered

1:54

from an autoimmune disease

1:57

known as idiopathic thrombocytopenia.

1:59

It's

2:01

a mysterious ailment where my immune

2:03

system was attacking my own blood

2:05

platelets, and it was pretty

2:07

serious. Well, the first doctor

2:10

I saw said, we don't really know

2:12

what causes this, so I recommend

2:14

waiting and watching, just don't do

2:16

any major physical activity and it

2:18

might resolve itself. That

2:21

seemed too passive. So

2:24

the second doctor I went to, he said, to

2:27

reduce the autoimmune response,

2:29

I recommend surgery to

2:31

remove your spleen. Well

2:34

that was very aggressive.

2:37

So I asked, what's the chance that that

2:39

procedure will work? And he said, 50-50. Okay,

2:44

so I went to a third doctor and that doctor

2:46

said, you can take a complex

2:48

series of steroids for several months

2:50

and see what happens. Will

2:53

it work? I asked. And he said, I

2:56

don't know. Now

2:58

I didn't mind the uncertainty because

3:00

that is a fact in complex

3:02

systems and the body is a

3:05

profoundly complex system. What

3:07

threw me was the wildly different

3:09

solutions proposed by the three different

3:12

physicians all for the same ailment

3:14

in the same person. And as

3:16

a patient, I did not

3:18

know how to cope with that variability.

3:22

Now you know, I'm not a

3:24

particular fan of n equals

3:26

one examples or using one

3:28

anecdote to describe an entire

3:30

system, but it turns

3:32

out that kind of

3:34

variability is rampant in

3:37

the very professions, systems and

3:39

organizations whose judgment we are

3:41

meant to trust the most.

3:45

It is a huge, costly

3:47

and often unnoticed problem.

3:50

And it's a problem that Nobel

3:53

Prize-winning psychologist Daniel Kahneman, Olivier

3:55

Sibony and Cass Sunstein write at

3:58

length about in their new book,

4:00

Noise: A Flaw in

4:02

Human Judgment. And today,

4:05

Daniel Kahneman joins us. Professor Kahneman,

4:07

welcome to On Point. Glad

4:10

to be here. And Professor

4:12

Sibony, welcome to you as well. Glad

4:15

to be here as well. Okay. So first, let

4:17

me ask you, I did open

4:19

up with that personal anecdote about

4:22

the medical system. But

4:25

Daniel Kahneman, how common or

4:27

how much noise is in

4:30

medicine, in decision-making amongst

4:32

doctors? Well, the

4:35

long and short of it is there is a

4:37

lot of noise. Doctors don't

4:40

agree with each other in many

4:42

cases, and they don't even agree

4:44

with themselves when shown the same

4:47

set of tests on

4:49

different occasions. So, yeah,

4:51

there's a lot of noise, and there's a

4:54

lot of noise in all professional judgment, not

4:56

only in medicine, but wherever

4:58

people make judgments, you

5:01

can expect to find noise, and you can expect

5:03

to find a surprising amount of noise.

5:06

A surprising amount. Well, the

5:08

thing about, I started with

5:11

medicine because it's one of the

5:13

systems that almost everyone has interactions

5:15

with at some point, if not

5:17

multiple points, in their lives.

5:20

Can you tell me, and Professor Sibony, I'll

5:22

turn to you on this, can you tell

5:24

me a little bit more about what Daniel

5:26

Kahneman was saying about how doctors even disagree

5:29

with themselves when looking at sort

5:31

of the same set of information about a

5:33

particular case? How

5:36

do we understand that? A typical example

5:38

would be radiologists, and I

5:40

suspect that radiologists are not better

5:42

or worse than other doctors.

5:46

It's just that it's easier to test radiologists

5:48

because you can show them an x-ray that

5:50

they've seen some weeks or some months ago

5:52

and ask them, what is this? And

5:54

they don't, of course; they cannot

5:56

recognize the x-ray because they see a lot

5:58

of x-rays. And if they tell you

6:00

something different from what they had told you

6:02

some weeks or some months ago, when looking

6:05

at the same x-ray, you know that

6:07

is noise. Now that is, by the way,

6:09

a different type of noise from the one

6:11

that you were dealing with in your example,

6:13

because this would be noise in

6:15

the diagnosis, which is a matter of facts.

6:17

You either have this bizarre disease that you

6:20

were talking about, the name of which I

6:22

could not remember, or you don't. At

6:24

least, your three doctors seemed to agree on

6:26

the diagnosis. They disagreed on the treatments, which

6:28

is already something you might find excuses

6:30

for. Maybe there isn't an obvious treatment,

6:32

maybe it's a very rare disease, we don't

6:34

know. But in the examples that we document

6:36

in the book, they actually disagree on the

6:39

reality of the diagnosis, on the disease that

6:41

is present before their eyes. Which

6:43

is the bigger issue, presumably. Okay,

6:45

so let's step

6:47

back here for a moment.

6:49

I suppose we should

6:51

actually begin with basic definitions here.

6:54

So Daniel Kahneman, when we're

6:56

talking about noise in a system, right,

6:58

we're not talking about individuals, we're

7:00

talking about the organizational level here.

7:02

How do we define what noise is?

7:05

Well, we would define noise as

7:07

unwanted variability in judgments

7:09

that should be identical,

7:11

and that's the broad definition.

7:14

So your three

7:16

physicians made judgments about the

7:19

same case, and we

7:21

would expect them to give

7:23

identical answers. The

7:26

fact that they're variable is

7:28

an indication that something is wrong

7:30

with the system. And

7:33

if I may, you're one

7:35

of, not one of, I'd

7:37

say you're probably the best-known

7:39

psychologist in the world right now,

7:42

okay, or at least one of them

7:44

are. And your previous work,

7:46

Thinking, Fast and Slow, is an incredibly

7:49

influential book. Does this interest

7:51

in how human judgments work

7:54

across a systemic or organizational scale,

7:56

it seems it must have

7:58

naturally flowed from your previous work? It

8:01

doesn't? No, actually it did not.

8:03

My previous work all my life

8:05

was studying individuals. I have studied biases

8:08

and not noise.

8:10

I knew that noise exists, and

8:12

everyone knows that, so that

8:14

whenever anything is a matter

8:17

of judgment, people are not supposed

8:19

to agree exactly, so there is

8:21

some noise. What turned out

8:24

to be surprising was that,

8:26

some seven years ago, while

8:28

on a consulting engagement

8:31

with an insurance company, I discovered

8:33

that there was much more disagreement

8:35

than anybody expected, more than the

8:38

executives expected, more than the underwriters

8:40

whom we looked at expected,

8:43

by about a factor of five, by the

8:45

way. So it's not a small

8:47

effect, and that set me

8:50

on this course. Then Olivier joined

8:52

me, then Cass joined us, and

8:55

the book came out

8:57

about seven years later. And

8:59

disagreement amongst, you were using

9:02

underwriters in particular in the insurance

9:04

industry? Yes. So the way

9:06

that we conducted the experiment, and

9:08

we call that a noise audit

9:10

because it's quite general,

9:12

you can conduct experiments like this

9:14

in many cases. We constructed cases

9:17

that were realistic but fictitious.

9:19

You don't need to know the

9:21

correct answer in order to measure

9:23

noise. And then we presented

9:25

the same cases to about sixty

9:27

underwriters, and each of them had

9:29

to give a dollar value. And

9:32

a question that we asked ourselves

9:34

and that we asked executives was

9:36

how much do they differ? And

9:38

to get a sense of the

9:40

magnitude of the difference, imagine that

9:42

you pick two underwriters at random

9:44

from those who looked at the

9:46

same case, and ask by how much

9:49

they differ in percentage terms. That

9:51

is, you take the average of

9:53

the two judgments, and you divide

9:55

the difference by the average, and

9:58

that's the relative difference.
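(To make the measure concrete: what Kahneman describes is the mean relative absolute difference between pairs of judgments. A minimal sketch, with invented premiums rather than the study's actual data, might look like this.)

```python
from itertools import combinations

# Invented example: dollar premiums set by five underwriters who all
# assessed the same fictitious case -- not data from the actual study.
premiums = [9_800, 16_000, 12_400, 21_500, 13_700]

# For each pair, divide the absolute difference by the pair's average,
# then average over all pairs: the relative difference described above.
ratios = [abs(a - b) / ((a + b) / 2) for a, b in combinations(premiums, 2)]
noise_index = sum(ratios) / len(ratios)

print(f"mean relative difference: {noise_index:.0%}")  # ~36% for these numbers
```

(As the conversation goes on to note, executives typically expect this figure to come out around 10 percent; for the underwriters in the study it was 55 percent.)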

10:00

And I asked the executives that question, not

10:03

all executives, but a few. And

10:06

since then we, especially Olivier,

10:08

have collected a lot of

10:11

information on what people expect.

10:13

People expect about 10% variation

10:15

on quantitative judgment. That looks

10:17

sort of tolerable and reasonable.

10:19

You don't expect perfect agreement.

10:23

10% is sort of tolerable. The

10:25

answer for those underwriters

10:27

was 55%. So

10:31

that is not

10:33

an order of magnitude, but that

10:35

is qualitatively different from what anybody

10:37

had expected. It raises questions about

10:39

whether those underwriters were doing anything

10:42

useful for the company. I was

10:44

going to ask that because if

10:46

there's that much variability, what exactly are

10:48

they doing, right? It

10:51

is quite unclear. And I

10:53

think there is a movement

10:56

in insurance companies actually to take away

10:58

that role of judging, of evaluating risk

11:00

to take it away from underwriters and

11:02

to have them mainly as negotiators and

11:05

to have the judgments sort

11:08

of automated or made centrally. But

11:11

at the time, that was the

11:13

practice in that insurance company, underwriters

11:16

were actually setting dollar premiums. And

11:20

the striking thing that really set

11:22

this book in motion was not

11:25

only that there was a huge amount

11:27

of variability, but that the

11:29

executives in the company had not been aware

11:31

of it and that in fact the organization

11:33

did not know that it had the noise

11:35

problem. So when you have

11:37

a problem of that magnitude that people are not aware

11:39

of, maybe there is something to

11:41

be studied. That's

11:44

what we did. So then I think we

11:46

need to understand more clearly, and Professor

11:49

Sibony, I'll turn to you for this, how

11:51

then does noise

11:53

in your description

11:55

of it differ from another

11:57

word that Danny Kahneman used

12:00

just a moment ago, from bias. There

12:03

is actually a very easy way to think

12:05

about it and it's to think of an

12:07

example of measurement as opposed to judgment. It's

12:09

easier to figure out. So suppose

12:12

you step on your bathroom scale every

12:14

morning and on average

12:16

your bathroom scale is kind. It tells

12:19

you that you're a pound lighter than you

12:22

actually are on average every day.

12:25

That's a bias. That's an error of

12:27

a predictable direction and on

12:29

average is the error that your scale is

12:31

making. Now suppose that you step on your

12:33

bathroom scale three times in quick succession and

12:35

you read a different number each time. That

12:38

is random variability of something that should be

12:41

identical. It is noise. Now apply

12:43

this to judgment to see the difference between

12:45

the bias which is the average error and

12:48

the noise which is the random variability in

12:50

the judgment. Suppose that we're making

12:52

a forecast of say what the GDP growth

12:54

is going to be next year or something

12:56

like that. If on average all

12:59

of us who are making this forecast

13:01

tend to be optimistic that's a bias.

13:03

We overestimate that's an average error but

13:06

each of us is going to make a slightly different

13:08

forecast. The variability between our forecasts is

13:10

noise. So it's really quite simple. Bias

13:12

is a predictable error in a given

13:15

direction. It's the average error of a

13:17

number of people or a number of

13:19

observations by the same person. Noise

13:21

is a variability in those observations. Well

13:25

this hour we are talking with

13:27

Olivier Sibony and Daniel Kahneman,

13:29

the Nobel Prize-winning psychologist about their

13:32

new book Noise: A Flaw

13:34

in Human Judgment, and

13:36

how that flaw in human

13:38

judgment is amplified across organizations

13:41

and systems that can touch us all.

13:43

We'll be right back. Support

13:58

for the On Point podcast comes from

14:01

Indeed. We're driven by the search for

14:03

better, but when it comes

14:05

to hiring, the best way to search

14:07

for a candidate isn't to search at

14:09

all. Don't search, match with Indeed. Ditch the

14:11

busy work and use Indeed for scheduling,

14:13

screening, and messaging so you can connect

14:15

with candidates faster. And listeners will

14:17

get a $75 sponsored job credit to

14:20

get your jobs more visibility

14:22

at indeed.com/on point. That's

14:25

indeed.com/on point. Terms

14:27

and conditions apply. Need to hire?

14:29

You need Indeed. This

14:32

is On Point. I'm Meghna Chakrabarti,

14:34

and today we're talking with Daniel

14:37

Kahneman. He's the Nobel Prize-winning psychologist,

14:39

perhaps the world's most influential

14:41

psychologist. He's the

14:43

author of the book, Thinking, Fast and

14:46

Slow, and we're talking as well

14:48

with Professor Olivier Sibony. They,

14:50

along with Cass Sunstein, have co-authored a new

14:52

book called Noise: A Flaw

14:54

in Human Judgment, about how

14:57

that flaw, or

15:00

human judgment's flaws, gets amplified through systems, creates

15:03

noise, and makes it harder to come

15:06

to the right decisions and

15:08

what to do about it as well. So

15:11

I'd like to focus with

15:14

the two of you on one particular

15:16

system that you write at length

15:18

about, and that is

15:20

the judicial system. So Professor

15:22

Sibony, I wonder if you can

15:24

help us understand, what

15:27

is the evidence that

15:29

there is a great deal of noise, or

15:32

this unwanted variability, as you both called

15:34

it, in judgments in the

15:36

judicial system? So

15:38

there has been evidence for quite a while. One

15:41

of the studies that we cite in the book

15:43

goes back to the 1970s, and in that study,

15:45

a great many judges, 208 judges to

15:48

be precise, looked

15:53

at vignettes describing cases,

15:55

so very simplified

15:58

descriptions of cases, where

16:00

you would expect pretty good agreement

16:03

on how to sentence a particular

16:05

defendant because the judges aren't

16:07

distracted by the particulars of what happens

16:09

in the courtroom or by the

16:11

looks of the defendant or by any distracting

16:15

information. You would expect some

16:17

consistency, perhaps not perfect consistency, but

16:19

at least some consistency. And

16:22

it turns out that on some of those

16:24

cases, one judge would say 15

16:27

days and another one would say 15 years.

16:29

On average, for a seven-year prison

16:31

term that was the average given

16:33

by the 200 judges, there

16:36

was, if you were to pick two

16:38

different judges, a difference of almost four

16:40

years in what the sentence would be.

16:43

Which basically tells you that if you're

16:45

a defendant, the moment you walk into

16:47

the courtroom because you've been assigned to

16:49

a particular judge, that has

16:51

already added two years or subtracted two

16:54

years from what would be otherwise a

16:56

seven-year sentence. That is

16:58

truly shocking. You would

17:01

want, of course, the specifics of the

17:03

case and the specifics of the defendant

17:05

and all the particular circumstances of a

17:07

particular offense to be taken into account.

17:10

But the particular circumstances of the

17:13

judge should not make a

17:15

big difference to the sentence, and they

17:17

do. And there have been quite

17:19

a few other studies replicating and amplifying

17:21

this finding, which basically tell you that

17:24

who the judge happens to be has

17:26

a very, very large

17:29

influence on the sentence. Of course

17:31

we know that, but it's

17:33

much larger than we suspect it is. Right.

17:35

I mean, the legal profession has known this

17:37

for quite some time, to your point, that

17:40

they would, you know, lawyers always talk about

17:42

hoping to get assigned particular judges for their

17:45

clients. But just to be clear, so

17:47

these judges in the studies that you're

17:49

talking about were given sort of

17:52

stripped-down information about cases so

17:54

that ostensibly the factors that

17:56

would normally contribute to bias

17:59

from the... individual judge were removed and yet

18:01

we still saw this variability in sentencing.

18:04

Is that what you're saying? That

18:06

is right. You can only expect

18:08

that in reality the noise would be

18:11

much worse than what we measure here

18:13

because these are stripped down cases where

18:15

all the distracting information that could add

18:17

and amplify the biases of the judge

18:20

has been taken out. Okay. So

18:23

Daniel Kahneman, do we know why there

18:25

was so much variability even in

18:27

these controlled circumstances amongst these judges

18:29

who – I mean their

18:32

profession is called judges. We are told

18:34

we are supposed to trust their judgment.

18:38

Well actually there is more than one source

18:40

of noise. We

18:43

distinguish three. So

18:45

one source of noise is differences in

18:48

severity; there are hanging

18:51

judges and others. That is,

18:53

the mean sentence,

18:56

the mean of a lot of sentences that the

18:58

judge gives, differs across

19:00

judges. We call that

19:02

level noise. Then there is the

19:05

noise within a judge, that

19:07

is, elements that are

19:11

like the weather: it turns out that

19:13

the sentences are more severe on hot

19:15

days. It turns out that judges are

19:17

more lenient when their football team just

19:19

won a game. Those

19:23

are small effects but they are reliable effects.

19:26

So it turns out that there

19:28

is a lot of noise within the judge

19:31

just as we were talking earlier

19:33

about radiologists. And

19:36

probably the largest source

19:38

of noise is that the judges

19:40

differ in how they see crimes.

19:43

They have different tastes in

19:45

crimes and different tastes in

19:47

defendants. Some of them are

19:49

more shocked by one kind of crime, others

19:51

by another. And there

19:54

are stable but idiosyncratic

19:56

differences among

19:59

judges. That mysterious

20:01

set of differences, which we call

20:04

the judgment personality, seems to

20:07

account for much of the

20:09

differences among judges in

20:11

the judicial system and probably in

20:14

other professional judgments as well. And

20:16

is that judgment personality formed,

20:19

it must be formed over

20:21

time by the judges' own,

20:23

I don't know, both their

20:25

DNA and their personal experiences

20:28

as they developed as humans? Absolutely,

20:32

except we know very little

20:34

about them. Because, you

20:36

know, it's just like personalities: we don't

20:39

expect personalities to be the same, but

20:41

we actually expect to see the world

20:43

in the same way. That

20:45

is, I don't expect you to like the same

20:47

things as I do, I don't expect you to

20:50

behave the same way as I do, but

20:52

I do expect you when we are looking

20:55

at the same situation to see it as

20:57

I do because I see it correctly. And

21:01

if I respect you, since I see

21:03

the situation the way it is, I

21:05

expect you to see exactly the same

21:08

thing that I do, and that expectation

21:10

is incorrect. I am curious

21:12

though about the

21:14

second factor that you talked about, even if it's less

21:18

influential, but the susceptibility

21:21

of everyone, but in this

21:23

case, judges sitting on the

21:25

bench to almost

21:27

imperceptible things like the weather or

21:30

whether their team won the

21:32

game the previous night or not.

21:35

Because if the susceptibility to

21:37

all manner of environmental

21:40

inputs is part of the

21:42

problem here, it seems as if

21:44

it would be impossible to meaningfully

21:46

reduce noise because it

21:48

would require changing what makes us human,

21:51

Professor Kahneman. Well, it

21:54

really depends. To

21:56

some extent, we must expect that noise

21:58

will remain so long as

22:01

there is judgment, because that actually

22:03

defines judgment. A matter of judgment

22:05

is a matter on which you

22:07

expect some disagreement. So you're not

22:09

going to resolve it completely. But

22:12

there are procedures, we think,

22:14

that if followed by

22:17

judges, are going to make them less

22:19

susceptible to variation. A source of variation

22:21

I'd like to mention, by the way,

22:23

is time of day. Prescriptions

22:29

by physicians are

22:29

different in the morning and in the afternoon when

22:31

they are hungry and when they are not hungry.

22:33

So those are substantial variabilities.

22:36

Wow. Well, so I want to talk more about

22:39

some of those procedures in a moment. But

22:41

Professor Siboni, let me turn back to you

22:43

here for a second, because I understand the

22:47

intellectual utility of the

22:49

kinds of studies that we're discussing here regarding

22:52

noise in the judicial system. But

22:55

at the same time, we're talking about judges looking at

22:57

cases that have been stripped of a

22:59

lot of detail, right? And

23:02

isn't part of what we

23:04

are actually entrusting to judges

23:07

is their discernment to come up with

23:09

the right sentence, given the

23:12

individual details of the cases that

23:14

they are hearing, that in fact

23:16

those details matter and then the

23:19

judgments made by the people

23:22

wearing the black robes should

23:24

be trusted. So how much

23:27

can we take these stripped down

23:29

studies and say that they

23:31

are really pointing to something fundamentally flawed in

23:33

the judicial system? Well, we have

23:36

every reason to believe that if you add

23:38

the real details that you see in a

23:40

real courtroom, it would make the noise worse.

23:43

Now there is an easy way to test

23:45

that, which would be to actually take judges,

23:47

I'm saying it's

23:50

actually not easy to do, but it's easy in principle, which

23:52

would be to take a number of judges and

23:54

have them sit in separate boxes

23:57

looking at the same trial, and

23:59

at the end of the actual trial, having seen

24:01

the real defendants and the real jurors

24:03

and the real witnesses and so on

24:05

set a sentence. We would

24:07

see there what the real, that would be

24:09

a real full scale noise audit

24:12

if you will, where you would

24:14

see what the real noise is with real cases.

24:16

To our knowledge this hasn't been done because you

24:18

can see it's a cumbersome experiment. But

24:21

we are pretty convinced. I

24:23

think there's good reason to believe that all

24:25

the details you would see in an

24:28

actual trial like this would

24:30

only make the divergence between the

24:32

judges worse than it is in

24:34

the stripped down cases. Okay.

24:37

So, then help me understand.

24:41

Daniel Kahneman, have you, or actually both

24:43

of you, but Professor Kahneman, I'll turn to you for

24:45

this. Have you spoken

24:47

with judges about

24:50

this and how do they respond

24:52

when presented with this evidence of

24:55

the sort of built in noise

24:58

in their decision making? Well, I

25:01

have spoken with judges, but not

25:03

enough to form an opinion. But

25:05

a lot is known about the

25:07

reaction of judges to discussions

25:10

of noise and to

25:13

guidelines that were introduced in an

25:15

attempt to control the amount of

25:18

noise by setting boundaries for

25:21

different crimes. And

25:24

apparently judges hated them. The

25:27

guidelines were eliminated at some

25:29

point for reasons that are

25:31

not pertinent to the case. But

25:34

it turns out that judges are much

25:36

happier about their job ever since. And

25:38

clearly there is now more variability than

25:41

there was. So the

25:44

situation where there is a lot of

25:46

noise is a situation that judges are

25:48

entirely comfortable with. They

25:52

are comfortable with the situation. They don't know

25:54

there is noise. And

25:56

maybe I may add something here based on

25:58

my own anecdotal

26:00

conversations with judges, here's

26:03

how the conversation basically goes. You

26:05

say there is noise and you give

26:07

them this evidence and basically they shrug.

26:09

They say, well, yeah, that's the reality

26:11

of making judgments. Every case is different.

26:13

So we're going to make different

26:15

judgments every time. And then

26:18

you ask them, well, okay, so the

26:20

same defendant is going to get a

26:22

different sentence depending on whether he's assigned

26:24

to you or to the judge next

26:26

door. And they say, yeah, that's life.

26:29

And then you ask them, well, what

26:31

if the same defendant got a different

26:33

sentence because his skin is of a

26:35

different color? And they

26:37

say, no, that would be completely unacceptable. And

26:40

then they realize that we have

26:43

a very different level

26:45

of outrage when we

26:47

can explain the cause of the

26:49

discrepancy, when we can identify a

26:52

bias, and when it

26:54

is noise that we cannot identify.

26:56

And there isn't any obvious

26:58

reason why we should feel it's

27:00

completely acceptable for these differences to

27:02

appear for reasons we do not

27:04

understand. Whereas it is totally

27:06

unacceptable. And I think we would all agree

27:09

on that for them to appear because of

27:11

reasons that we do understand. And

27:13

that's what we're trying to point out when

27:15

we raise the question of noise in the

27:17

judicial system. Why do we tolerate large

27:20

differences that are caused by noise when

27:22

we would not tolerate them if they

27:24

were caused by bias? Okay. So you

27:27

mentioned both of you mentioned guidelines. And

27:30

Professor Kahneman, can you just elaborate

27:32

a little bit more about within the criminal

27:35

justice context, what you meant by guidelines?

27:38

Well, there was

27:41

a commission set up, I think, in the 1970s or 1980s to

27:43

discuss, well, to assign to each

27:51

crime as defined in

27:53

the law, to assign a

27:55

range of sentences and judges

27:58

were strongly discouraged from going

28:01

outside that

28:04

range, they were allowed to

28:07

do it. So there was discretion,

28:09

but clearly the

28:11

guidelines have a great deal of effect,

28:14

and the variability of sentences for any

28:16

given crime indeed diminished. So

28:18

those were just

28:21

part of the definition of the crime, is

28:23

the range of sentences that are allowed to

28:25

go with it. That's the guideline. Okay, so

28:28

the reason why I want to ask you about

28:30

that is because you do talk about the importance

28:32

of creating, I would say, the right kind of

28:34

guidelines to reduce noise in

28:37

organizations and systems, because the one

28:41

that pops up in my mind right now, which I think

28:44

has been deemed to

28:46

be something of a failure, is exactly what

28:49

you're talking about, mandatory minimum

28:51

sentencing, for example, in

28:53

drug crimes. You're right,

28:55

the judge's discretion was

28:57

removed from them with

29:00

mandatory minimum sentences, and the

29:03

part of the logic

29:06

behind those mandatory minimums was

29:08

to reduce variability in sentencing.

29:12

However, what we

29:14

saw, one of the outcomes

29:16

of that was also many, many

29:19

people being sentenced to

29:21

extremely long periods of

29:23

incarceration for relatively

29:26

minor drug crimes. So

29:29

there is still some sort

29:31

of systemic judgment that emerged

29:34

with those guidelines of

29:36

the mandatory minimums, which actually

29:38

made the problem of achieving justice even

29:42

worse. So Professor Siboni,

29:44

how do you find what the

29:46

right guidelines are without introducing a

29:48

whole other set of problems, in

29:51

trying to reduce unwanted variability and

29:54

reaching more identical solutions?

29:58

You can get a bunch of identical solutions, solutions that

30:00

aren't the right one. Absolutely.

30:03

And that would be bias, right? So that would

30:05

be an average error. If you think that the

30:08

proper sentence for a given crime

30:10

is one year in prison

30:12

and you set a mandatory minimum that is

30:15

ten years, you

30:19

have reduced noise because everybody will get ten years, but

30:21

you have created a lot of bias because everyone has

30:23

a sentence that is ten times worse than it should

30:25

be. And so that elevates

30:27

the question of what the proper sentence

30:29

should be to a debate

30:32

that has to take place in, I guess,

30:34

in the US Congress as

30:36

opposed to being a decision that is

30:38

being made separately by hundreds of judges

30:40

every day. Now it's

30:43

interesting that when it becomes a

30:45

problem of bias, when it becomes

30:47

a problem of the overall decision

30:49

being made at the wrong level,

30:52

it is at least a debate we can have. And

30:54

we can say three strikes and

30:56

you're out is terrible, mandatory minimum

30:58

sentences are terrible. We can have

31:01

that conversation. When that

31:03

decision is being made randomly

31:06

by judges all around the country every

31:08

day, the noise is

31:10

very hard to control and it leads to

31:12

many, many bad decisions as well.

31:15

Not all the decisions are uniformly bad,

31:18

but the randomness is in itself very bad.

31:21

That's the difference between bias and noise.

31:23

One is much easier to see, it's

31:26

much easier to counteract, it's much easier

31:28

to discuss and to combat.

31:31

The other is all over the place. And if you

31:33

don't do a noise audit to measure how much noise

31:35

there is, you can't even see it. Yeah.

31:38

Well, we are talking today with Olivier

31:41

Sibonie and Daniel Kahneman. Their new

31:43

book, along with Cass Sunstein, is

31:45

called Noise: A Flaw in

31:47

Human Judgment. When we

31:50

come back, we'll talk about their recommendations

31:52

on how to reduce noise. This

31:54

is On Point. I'm

32:10

Kathleen Goldhar and I'm the host of

32:12

a new podcast, Crime Story. Every

32:15

week we bring you a different crime told

32:17

by the storyteller who knows it best. You

32:20

got one witness who can't be found.

32:22

You got another witness who's murdered. We

32:24

couldn't sugarcoat this story. I was getting

32:26

calls from Cosby's attorney threatening to sue

32:28

every day. Every crime in one

32:30

way or another is a reflection of who we

32:33

are as a people, as a city, as a

32:35

country. Find us wherever you get

32:37

your podcasts. This

32:40

is On Point. I'm Meghna Chakrabarti and

32:43

today we are talking with Olivier Sibony

32:45

and Daniel Kahneman. Kahneman

32:47

is the Nobel Prize winning psychologist

32:49

and they, along with Cass

32:52

Sunstein, are co-authors of a fascinating

32:54

new book called Noise: A

32:57

Flaw in Human Judgment, and

32:59

how noise or unwanted variability

33:02

in systems and organizations makes

33:04

it really hard for those

33:06

systems and organizations to which we

33:09

all belong to operate in

33:11

our best interest. So we're trying to figure

33:13

out how to reduce noise. And

33:16

Professor Kahneman, before the break

33:18

we were talking about guidelines

33:20

and mandatory minimums in the

33:22

judicial system as one

33:24

perhaps flawed way of trying to

33:26

deal with the noise problem in

33:28

sentencing. I just wanted to quickly

33:30

hear your thoughts about that. Well,

33:34

you know, we should not

33:37

say that guidelines are a bad

33:39

idea because some guidelines were poorly

33:41

designed. In this case, clearly

33:44

there was a great deal of bias

33:46

in the setting of the guidelines. For

33:49

example, they distinguished among different kinds

33:51

of drugs in a way that

33:53

penalized crack cocaine relative to

33:56

other drugs. Those are

33:58

poorly designed guidelines, guidelines

34:00

which will perpetuate bias rather

34:03

than eliminate error. But

34:06

you can design good guidelines

34:08

and the point about guidelines, and

34:10

here I echo something that Olivier

34:12

was saying earlier, the point about

34:14

guidelines is that you can see

34:16

them, you can discuss them, they're

34:19

out there. Noise is

34:21

something that you cannot see and you

34:24

cannot respond to appropriately. So

34:26

what other types of guidelines, just sticking

34:29

with the judicial system for one more

34:31

minute here, what other types of guidelines

34:33

that you suggest in the book might

34:36

be applicable here? Well,

34:40

if the guidelines are defined as

34:42

guidelines on sentencing, that's the

34:45

kind and that's the type, the only type.

34:48

We have ideas

34:52

about procedures, about

34:54

ways of thinking about the crime

34:57

and the defendant and the particular

34:59

case that we think might reduce

35:02

noise. But in terms of guidelines,

35:04

sentencing guidelines, well-designed sentencing

35:07

guidelines is what is

35:10

available I think. Okay, so then tell me

35:12

more about what you just said about the

35:17

other solutions for the judicial system. Well,

35:20

the general concept that we propose is

35:22

a concept that we call decision hygiene

35:24

and the term is almost

35:27

deliberately off-putting. It's to remind you

35:29

of what happens when

35:31

you wash your hands. And when you wash

35:33

your hands, you kill germs, you don't know

35:36

which germs you're killing and if you're successful,

35:38

you will never know. And it's

35:40

a sort of homely procedure but

35:43

it's extremely effective. And we have been

35:45

scouring the literature and what we

35:49

know to construct

35:52

a list of decision

35:54

hygiene procedures. And

35:57

one of them, just to give you an example, well,

36:00

the most obvious one

36:02

is to ask several individuals

36:05

to make judgments independently because

36:07

that will reduce noise mechanically.

36:10

When you take several forecasters

36:12

and you average their forecasts,

36:15

the average forecast is less

36:17

noisy than the individual forecast

36:19

and we know exactly by

36:22

what mathematical amount we

36:24

have cut down on the noise.
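(The mathematical amount Kahneman alludes to is the standard result that averaging n independent, equally noisy judgments divides the noise by the square root of n. A quick simulation, ours for illustration with arbitrary numbers, bears it out.)

```python
import random

random.seed(0)
TRUE_VALUE, NOISE_SD, N_JUDGES, N_TRIALS = 100.0, 20.0, 9, 10_000

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Noise of a single judgment vs. the average of nine independent judgments.
singles = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_TRIALS)]
averages = [sum(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_JUDGES)) / N_JUDGES
            for _ in range(N_TRIALS)]

print(f"one judge:           sd = {sd(singles):.1f}")
print(f"average of 9 judges: sd = {sd(averages):.1f}   # about 20 / sqrt(9) = 6.7")
```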

36:26

So that is another procedure. And

36:29

there are several others that Olivier,

36:31

I'm sure, can talk

36:33

about at least as well

36:35

as I can. Well, Meghna, just

36:37

to come back to guidelines for a second.

36:39

There is one field in which guidelines have

36:41

made a great difference and

36:44

that's medicine. You were talking,

36:46

as we started this conversation, about the

36:48

disease for which clearly there weren't guidelines

36:50

or if there were, your three physicians

36:53

were not aware of them, sadly for

36:55

you. But in many fields,

36:57

guidelines have made a big difference. One

37:00

example that many people will have

37:02

encountered is that when a baby

37:04

is born, to determine if

37:06

the baby is healthy or needs to

37:08

be sent to neonatal care,

37:11

you use something called the

37:13

ABGAR score where you apply

37:15

five criteria, abbreviated A, P,

37:17

G, A, and R, and

37:20

you give this little baby that

37:22

is one minute or five minutes old a score

37:25

between zero and two on each of

37:27

those five criteria. And if the total

37:29

is six or less, the

37:32

baby has a problem. If the total is seven

37:34

or more, the baby is healthy. And

37:36

that has reduced massively the noise

37:39

and therefore the errors in the

37:41

treatment of newborn

37:43

babies. It's a great example

37:46

of a guideline that actually works. It's

37:48

a fairly simple guideline, but it's not

37:50

something one-dimensional like

37:53

a minimum sentencing guideline for

37:55

a particular crime. It takes into account

37:57

multiple factors, but it makes sure that

37:59

different... people will take the same factors into

38:01

account and will take them into account in the

38:03

same way so it reduces noise. Those

38:06

kinds of guidelines, when

38:08

they're well thought through, can actually

38:10

make a big difference. Okay.
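(Sibony's Apgar example can be written down as a tiny decision rule. The 0-to-2 scoring per criterion and the seven-or-more threshold are as he describes; the criterion names behind the acronym, Appearance, Pulse, Grimace, Activity, Respiration, are the standard expansion and not spelled out in the conversation.)

```python
# The Apgar guideline as described above: five criteria, each scored
# 0, 1, or 2; a total of 7 or more means the baby is judged healthy.
CRITERIA = ("appearance", "pulse", "grimace", "activity", "respiration")

def apgar(scores: dict) -> str:
    assert set(scores) == set(CRITERIA), "score all five criteria"
    assert all(s in (0, 1, 2) for s in scores.values())
    return "healthy" if sum(scores.values()) >= 7 else "send to neonatal care"

print(apgar({"appearance": 2, "pulse": 2, "grimace": 1,
             "activity": 2, "respiration": 2}))  # -> healthy
```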

38:14

So you also mentioned a

38:16

noise audit briefly in the last

38:18

segment there. Professor Sibony, how

38:20

would you define what a noise audit is?

38:23

So a noise audit is not a way

38:25

to reduce noise. It's what you need to

38:28

do first. It's a way to measure noise.

38:30

So when we gave the

38:32

example of the underwriters or

38:34

the example of the justice system, these

38:36

are noise audits where you get a

38:38

feel for how large noise is. And

38:40

the reason you need to do that is that, as

38:44

Danny was pointing out, we don't imagine

38:46

that people see the world differently from

38:48

how we see it and therefore we

38:50

can't imagine that there is as much

38:52

noise as there is because

38:54

if I'm a judge, I never

38:56

hear what another judge would have

38:58

sentenced this particular defendant to because

39:00

each defendant is unique. And

39:02

if I'm a doctor and I look at an

39:05

x-ray, I never imagine that another doctor looking at

39:07

the same x-ray would see

39:09

something different from what I see. So a noise

39:11

audit makes this visible and tells you exactly how

39:13

much noise there is in your system. So

39:17

those are just a small

39:20

taste of the quite extensive

39:23

writing that you have in the book

39:25

about ways to reduce, to

39:28

know about, assess and reduce

39:30

noise in various systems and

39:33

organizations. But I'd like to just push

39:35

to one potential solution

39:38

and get both

39:41

your opinions on it. And that is

39:43

if you want to reduce variability entirely,

39:48

unwanted variability,

39:48

you take the human condition out of it.

39:51

I mean, so of course I'm talking about

39:53

technology. People are actively trying to create AI

39:57

systems that achieve exactly what

39:59

you're talking about: take various

40:01

inputs and come up with the

40:03

same solution every single time. Professor

40:06

Kahneman, is that a desirable

40:08

way to reduce noise? Well,

40:11

in sum, you know, there

40:15

have been many studies that

40:17

compared rules and algorithms to

40:19

human judgment and in

40:21

many of these studies human judgment comes

40:23

short, and one of the

40:25

main reasons that we know for why

40:28

humans come short is

40:31

because of noise because humans are noisy

40:33

and algorithms are not. You present the

40:35

same problem to an algorithm twice and

40:37

you get the same answer which is

40:39

not the case when you

40:42

do it with humans. So

40:45

we can expect that

40:47

algorithms when the information

40:49

is codeable. So there are some

40:51

conditions for an algorithm to work well.

40:54

You need codeable information, you need

40:56

a lot of data and you

40:58

need a choice

41:02

about the criteria that you're applying

41:05

so as to eliminate bias to the

41:07

extent possible and then you

41:09

can have a system that is likely

41:11

to do better than humans and in

41:13

the judicial system there is an example

41:17

and the example is the

41:19

granting of bail where a

41:21

recent study using AI techniques,

41:25

I forget the number of millions

41:27

of cases that they looked at

41:29

but it's a very, very large

41:31

number, they were able to establish

41:33

that an algorithm would actually perform

41:36

better than the judicial system in

41:38

the sense that it would both

41:40

reduce crime and reduce

41:42

unnecessary and unjustified incarceration. So at

41:44

least in that domain there is

41:47

clear evidence that an algorithm can

41:49

do better than people. If the

41:51

algorithm has been eliminated of the

41:54

potential biases in its creation,

41:56

right, because I mean sticking with the

41:58

judicial system, I was reading several... several

42:00

years ago about how, for example, in

42:02

Washington, D.C. and this is also being

42:04

used everywhere, you were talking about bail

42:06

and this is actually regarding the AI

42:08

use in parole,

42:11

that prosecutors were

42:14

using an AI assessment system

42:17

to decide whether or not

42:19

to put parole on

42:21

the table for a particular defendant.

42:26

And defense lawyers had discovered

42:28

that that AI system was

42:30

making risk assessments based on

42:32

factors that included whether

42:34

or not a person lived in government subsidized

42:36

housing or whether they had

42:39

ever expressed negative attitudes about

42:41

the police. And

42:43

it seemed that there was ample opportunity

42:45

for bias to actually be

42:48

built into that AI system, which

42:50

was a problem. Absolutely. Absolutely.

42:53

No question. No

42:55

question about that and that's something to

42:57

be absolutely worried about. But

43:00

just to be clear, one biased algorithm

43:02

or two or ten do not

43:05

mean that all algorithms must be biased.

43:09

And algorithms have a big advantage over

43:11

humans, which is that, again, we can

43:13

have that conversation. We can measure whether

43:15

an algorithm is biased. We can have

43:18

ProPublica audit the algorithm and tell you

43:20

that the algorithm has this particular

43:23

bias or does not have that particular bias

43:25

and then it can get fixed. Of

43:27

course, that must happen. It

43:29

doesn't happen by magic. It takes action

43:31

from people who worry about it

43:34

and who make sure that algorithms improve.

43:37

But at least we can have that conversation.

43:39

A biased judge, a biased

43:41

human being is very difficult to

43:44

spot because of the noise

43:46

in the judgment, in part because of the

43:48

noise in the judgments of that person. No

43:51

judge is so consistently biased

43:54

that you would be able to indict

43:58

that particular judge for being biased. Yeah,

44:01

looks like a little internet instability

44:03

there. But Daniel

44:05

Kahneman, there's something I've been wanting

44:07

to ask you all hour here because we're

44:10

talking about how noise can be

44:13

proliferated and amplified through a

44:16

system. But of course

44:18

that system is made up of individual

44:21

human beings. And I

44:23

wanted to hear from

44:25

you about how this actually does

44:27

connect to your previous

44:29

pathbreaking research about individual

44:32

judgment. Are there things

44:35

that individuals can do regarding their

44:37

thinking to reduce their contribution to

44:40

the noise? Well,

44:45

our idea on this matter is

44:47

really quite straightforward. And

44:50

it applies even to individuals' decisions,

44:52

not only to systems. It

44:54

applies to singular decisions, to

44:57

strategic decisions that people make.

44:59

And our argument is quite

45:01

straightforward. If decision

45:03

hygiene procedures work in repeated

45:05

cases, there is every reason

45:07

to believe that they will

45:09

apply as well to unique

45:12

or singular cases. So

45:14

decision hygiene recommendations

45:18

are applicable to any judgments

45:21

that people make. Now,

45:24

the argument, and actually Olivier is

45:26

the one who had that phrase that

45:29

we're very grateful for, is that the singular

45:32

event is a repeated event that

45:35

happens only once so that everything

45:37

that we say about noise in

45:39

repeated events is actually applicable to

45:42

individual judgments. But

45:45

more specifically, from your book,

45:47

Thinking Fast and Slow, where you describe the

45:49

different types of thinking, aren't

45:53

there certain types of thinking that

45:55

achieve exactly what you're saying

45:57

that there are actually people who maybe

45:59

intuitively... are noise reducers?

46:04

We doubt that because and

46:06

the reason that we doubt that

46:08

and the connection with thinking fast

46:10

and slow is that

46:12

intuition is very rapid. Intuition

46:16

doesn't make efficient use of all

46:18

the information. So our

46:20

major recommendation in that context and

46:22

it's an important one is

46:25

that intuition should not be eliminated from

46:27

judgment but it should be delayed. That

46:30

is, you want not to

46:32

have a global intuition about a

46:34

case until you have made separate

46:37

judgments until you have all the

46:39

information whereas the human tendency is

46:41

to jump to conclusions. Jumping

46:44

to conclusions induces noise.

46:47

Well we only have a few minutes left. It

46:50

saddens my heart because I have so many more questions

46:52

for both of you but there's one more system I

46:55

wish to explore with you just briefly and

46:58

that is governance or political

47:00

systems and particularly let's just look

47:02

at the United States because

47:04

I feel like we are in a moment where

47:10

noise is the point. We

47:13

have very influential people in our

47:15

political system who have said, I'm

47:17

thinking of Steve Bannon for example

47:19

who said that his goal was

47:22

to flood the zone with

47:24

BS essentially. Is

47:29

there anything in your

47:31

book that we as citizens can

47:34

apply to reducing the noise and

47:37

improving the decision making in a

47:40

political system, in the American political system?

47:45

I don't think there is anything specific

47:48

that is to be applied. If

47:50

people thought better in

47:52

general and made better judgments

47:54

and better decisions we

47:56

might be better off but the

47:59

differences in the political system are

48:01

closer to issues of bias than to

48:03

issues of noise. And

48:05

bias and convictions and

48:08

convictions based on very little

48:10

evidence and on poor

48:12

evidence, those are political

48:15

problems. And to those,

48:17

we have no solution to offer

48:19

that I know of. Perhaps Olivier

48:21

can think of something, but I

48:23

have not. No, unfortunately

48:26

not. There is one

48:28

thing though, which is not a solution, but

48:30

which is part of the problem that we discuss

48:32

at some length in the book, which is that

48:35

groups and any forum,

48:37

including social media, in which people are

48:39

going to interact in a group, tend

48:42

to amplify the random noise that comes from

48:44

the opinion of a few people at the

48:46

beginning of the process. So any

48:50

system, and I'm thinking mostly of social

48:52

media, in which people are going to

48:54

be part of an echo

48:57

chamber is going to

48:59

add to the randomness in

49:01

the positions that people have eventually and is

49:03

going to add to the polarization of those

49:05

positions. Well if

49:07

Olivier Sibony and Daniel Kahneman

49:09

especially don't necessarily have a

49:12

solution for the chaos

49:14

inducing noise in our political system,

49:17

I don't know who would, but

49:19

they along with Cass Sunstein are

49:21

authors of the new book Noise: A

49:23

Flaw in Human Judgment, and we have

49:26

an excerpt of it at onpointradio.org. Professor

49:28

Kahneman, it's been a real pleasure to

49:31

speak with you. Thank you so much. My

49:33

pleasure. And Professor Sibony likewise, thank you

49:36

so much for joining us. Thanks, Meghna.

49:38

I'm Meghna Chakrabarti. This is On Point. Thank

49:42

you.
