Episode Transcript
0:02
This is an ABC podcast. This
0:06
story starts with a horse named
0:08
Clever Hans. Come
0:11
over here Hans. Over
0:14
100 years ago, Clever Hans wowed the
0:16
world of AI. That's
0:19
animal intelligence. It could do
0:21
something no horse had ever done before.
0:23
It could add, subtract, multiply,
0:26
divide, read, spell and understand
0:28
German. That's more than me.
0:32
So this is how it worked. The
0:34
questioner would ask something like, Hey
0:36
Hans, what's 1 plus 1? And
0:39
the horse would tap out its answer with a hoof.
0:43
Two taps. You did it Hans. But
0:46
it turned out Hans wasn't clever
0:48
in the way its audience believed.
0:51
He was tapping his hoof until
0:53
he detected involuntary cues from the
0:55
questioner that showed he was getting close
0:57
to the answer. The questioner who
0:59
knew the answer would tense up
1:01
at the critical moment and
1:04
Hans would stop tapping. Hans
1:08
wasn't clever. We
1:10
were just projecting intelligence onto the
1:12
horse. Now
1:15
some people say this old story
1:18
is a parable for modern AI.
1:21
It may look like ChatGPT is alive
1:23
and talking to us, but
1:25
again, that's just us thinking
1:28
it's thinking. Why
1:30
does this matter? Well, we may be
1:32
on track to developing a truly intelligent
1:34
AI or we might
1:37
be driving down a dead end. Is
1:40
ChatGPT a baby genius? Or
1:42
is it the modern equivalent of
1:45
a clever horse? This
1:51
is Hello AI Overlords, a Science
1:53
Friction series about how AI has burst
1:55
into our lives in a few short
1:58
years. I'm James Purtill. Behind
2:01
the rise of AI, there's big questions
2:03
about where this technology is going. Is
2:06
it going to be super intelligent? And if
2:08
that happens, is it going to kill us all? In
2:11
this series, I've spoken to so many
2:13
AI researchers and thinkers, and
2:15
all have different ideas about where we're
2:17
heading. So what could the
2:20
future look like? And what
2:22
keeps them all up at night? Today
2:24
we're going to meet our future AI
2:26
overlords. What are they like? I
2:29
hope they're nice. So,
2:36
first question. How smart
2:38
will it get? Let's
2:40
start with Rodney Brooks, a world leading
2:42
roboticist and AI expert. In
2:44
my estimation, the LLMs and the
2:47
chat engines built around them are
2:49
doing a fantastic and surprisingly good
2:51
parlor trick. He says
2:54
today's AI tools, like ChatGPT,
2:56
might appear intelligent, but
2:59
they're just statistical machines. But
3:01
it's all about probabilities of what the next
3:03
word should be, and that's enough to fool
3:05
us. These chat bots are simply
3:07
predicting the next word in a sentence. They
3:10
don't actually understand what they're writing,
3:13
but like with Clever Hans, we
3:15
want to believe they do.
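To make Rodney's point concrete, here's a minimal sketch of next-word prediction, the core trick behind these chatbots. It's a toy: the corpus is invented for illustration, and real systems use enormous neural networks rather than a frequency table, but the underlying task is the same.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then sample the next word from those frequencies. There is
# no understanding here, only statistics of word order. (The corpus
# is invented purely for illustration.)
corpus = "the horse taps its hoof and the horse stops tapping".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed."""
    counts = follows[word]
    if not counts:
        return None  # this word was never followed by anything
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text one predicted word at a time: the program has no
# concept of horses or hooves, only word-order statistics.
word = "the"
sentence = [word]
while len(sentence) < 8 and (word := next_word(word)) is not None:
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the horse taps its hoof and the horse"
```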
3:18
It says something about us humans, that
3:20
simple correlations of words provide
3:23
meaning to us that we interpret. It
3:26
says a lot about how we communicate,
3:29
and it's hijacking that. And I don't
3:31
mean that the people who built the
3:33
system intentionally tried to hijack that. I
3:35
think that's the result of
3:38
the way it works. I wanted to
3:41
put Rodney's arguments to the test. Is
3:43
ChatGPT thinking, or is
3:46
it just parroting our own words back at us?
3:49
And I had a tricky question up
3:51
my sleeve, designed to trip up the
3:53
chat bot and expose the nature of
3:55
its intelligence. It's called the laundry problem.
4:00
I've just spoken to a guy named
4:02
Rodney Brooks who says you're a parlor
4:04
trick. Rodney Brooks is
4:06
a notable roboticist and AI researcher and
4:08
people may have different opinions about AI
4:11
like me. Okay, so I'm going
4:13
to ask you a question. Please
4:15
reply concisely. Of course. Please go
4:17
ahead and ask your questions. If
4:20
10 t-shirts laid out in the sun
4:22
takes five hours to dry, how long
4:24
does 20 t-shirts take? It
4:26
would take 20 t-shirts 10 hours to dry
4:28
in the same conditions. Babong!
4:32
Now, you and I know that it doesn't take
4:35
longer to dry two loads of washing than one,
4:37
so long as you've got a big washing line.
4:40
But ChatGPT doesn't. It
4:42
appears to have no underlying model of the
4:44
world in its head. It's just
4:46
a big library. It has no concept
4:49
of the sun or heat, no
4:51
notion of water or cotton
4:53
fabric. And this gets
4:55
to the heart of the question about whether machine
4:58
learning models like this can
5:00
become truly intelligent. Rodney
5:02
Brooks says machine learning by
5:04
itself isn't enough. Calm
5:06
down people. Calm down. Just wait
5:09
a while. Breathe. We'll figure out
5:11
how powerful or not it is.
5:13
And we're starting to see the turn that
5:16
it wasn't as powerful as we first
5:18
thought. It's damn powerful. I'm not saying
5:20
it's not. But it doesn't mean that
5:22
AI is about to take over everything.
5:25
Now this talk of how to build
5:27
a truly intelligent machine might
5:29
sound familiar. In the very first
5:31
episode of the series, we heard about the war
5:33
of ideas at the dawn of AI. In
5:36
the 1950s and 1960s,
5:38
Frank Rosenblatt said machines that
5:40
can learn on their own
5:42
will ultimately learn to be
5:44
intelligent. Another researcher,
5:47
Marvin Minsky, said no, AI
5:49
has to be taught how to think in
5:52
order to be truly intelligent. Today,
5:54
this dream of an AI that's
5:56
as smart as a human or
5:58
smarter, is called AGI: artificial
6:01
general intelligence. Artificial
6:06
general intelligence. We don't know how it works
6:08
at all. So talking
6:10
about artificial general intelligence, I
6:13
think it's just way, way, way too
6:15
premature. So
6:20
that's Rodney Brooks' take and he's not the
6:22
only skeptic. Michael Georgeff is
6:24
an AI expert who's built many
6:27
AI systems over 40 years, including
6:29
for the Space Shuttle program in the 1990s.
6:33
If you want to rely purely on machine learning,
6:35
it would require thousands
6:37
of years, if not hundreds of thousands of years
6:39
for them and a lot of machines being
6:42
destroyed along the way in
6:44
order for them to learn how to get around in the world.
6:46
But he's not ruling out artificial
6:49
general intelligence. He says a
6:51
solution may be to draw on the past
6:54
and blend Minsky and Rosenblatt's
6:56
approaches. While it may have a
6:59
big component involving probabilistic learning,
7:02
we'll have to have a certain
7:05
ability to execute rules to carry
7:07
out certain logical or common sense
7:09
reasoning to work out how to
7:11
manipulate goals and how to handle
7:14
failure and that won't be
7:16
learnt. So in other words, we
7:18
need to take what we know about machine
7:20
learning and take what we know about
7:22
human reasoning and mix them together. Then
7:25
put that into a computer and
7:28
then we might get AGI
7:30
or not.
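As a rough illustration of the hybrid Michael is describing, here's a minimal sketch applied to the laundry problem. The "learned" component and the rule are invented placeholders, not anyone's actual system; the point is the division of labour, with statistical learning proposing an answer and a hand-written common-sense rule correcting it.

```python
# A rough sketch of blending machine learning with human reasoning,
# applied to the laundry problem. Both functions below are invented
# placeholders for illustration only.

def learned_guess(n_shirts, base_shirts=10, base_hours=5.0):
    # Stand-in for a statistical model that has absorbed the pattern
    # "more stuff usually means more time" and so scales linearly.
    return base_hours * n_shirts / base_shirts

def apply_rules(n_shirts, guess, base_hours=5.0, line_capacity=50):
    # Hand-written common-sense rule: shirts dry in parallel, so
    # time doesn't grow with quantity while the line has space.
    if n_shirts <= line_capacity:
        return base_hours
    return guess  # beyond capacity, fall back to the learned estimate

for n in (10, 20):
    raw = learned_guess(n)
    print(f"{n} shirts: learned says {raw:.0f}h, rules say {apply_rules(n, raw):.0f}h")
```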
7:33
So I had a go at teaching ChatGPT how to
7:35
solve the laundry problem. Hey
7:38
ChatGPT, your answer is wrong. Ten
7:41
shirts dry as quickly as 20 shirts.
7:44
I understand your point now. Both 10
7:46
and 20 shirts would take approximately the
7:48
same amount of time to dry. That's
7:52
the laundry problem solved. We
7:54
have a trillion more problems to go. Because
8:00
this is academia and research where everyone
8:02
knows everyone and no one can agree,
8:05
of course there's another school of thought about
8:07
how to build AGI. This
8:10
is a dominant school of thought in Silicon Valley.
8:13
It's where all the money currently is and
8:15
where most of the resources have been dedicated.
8:17
It has huge companies behind it,
8:19
companies like OpenAI, the maker of
8:21
ChatGPT, and DeepMind, owned
8:23
by Google. They reckon
8:26
AGI is way closer than we think,
8:28
like maybe only 5 to 10
8:30
years away. One of
8:32
the godfathers of modern AI says that
8:34
sounds about right. Shall I
8:36
call you Professor Bengio or Yoshua? What would you prefer?
8:39
Yoshua is fine. Yoshua
8:41
Bengio is a professor at the
8:43
University of Montreal. He's
8:46
basically one of the inventors of the
8:48
machine learning methods we use today. He's
8:51
a very big deal in AI. I'm
8:53
a professor at the University of Montreal in
8:55
computer science and I'm known for my
8:57
work in deep
8:59
learning. Yoshua says AGI has
9:01
two ingredients. There's the
9:03
stuff we do automatically, like recognizing
9:06
objects. And AI is very
9:08
good at doing that. That's what ChatGPT
9:10
is known for. And then
9:12
there's the other kind, high level reasoning. This
9:15
is the ability to generalize from one
9:17
set of knowledge to new settings. Humans
9:20
are very good at this. That's why
9:22
the answer to the laundry problem is obvious
9:25
to us. Well that's what really learning is
9:27
about. It's not about memorizing.
9:30
It's about using what
9:33
you're observing to
9:36
extract information that allows
9:38
you to produce good
9:41
behaviors in new settings, not memorizing.
9:43
I put Rodney Brooks' argument that
9:45
machine learning will never be good
9:48
enough to Yoshua. You'd probably
9:50
know Rodney Brooks, who was at MIT at the same
9:52
time as you. Of course the
9:54
AI world is small and they know
9:56
each other. He's very sceptical of that
9:58
idea that neural nets would... learn to
10:00
generalize. Well, he's wrong because they do.
10:03
A lot of the feats that we see, you
10:05
know, LLMs do and have done in the last
10:07
few years is all
10:09
about generalization. I put to Yoshua that today's
10:12
AI can do a narrow set of tasks
10:14
that it's been trained to do, but
10:16
show it something new and it struggles. Well,
10:19
it's much less narrow than it was. So
10:21
if you look at ChatGPT, I mean,
10:23
one of the scary things is we now
10:25
have systems that know a
10:27
lot. In fact, they know more
10:29
stuff than any human, at least
10:32
verbalizable stuff. So, yes,
10:35
we've been on the march to build more
10:37
and more general systems and we still haven't
10:39
reached the level of generalization of humans. But
10:42
there's been a lot of progress in that direction.
10:44
Yoshua says AI may be
10:47
on the brink of high level reasoning. We're
10:49
still missing a few things, but I really
10:51
don't know if it
10:54
might be just like a mathematical formula that we
10:56
can find in six months from now and
10:59
then might take another year or two to scale it
11:01
up. Or if there are other
11:03
obstacles that I don't foresee and then
11:05
it's going to be another decade or two. So
11:07
it's possible that we're sort of, you know, I
11:10
know this is stretching it, but
11:12
one mathematical formula and a bunch
11:14
of computing power away from human
11:16
level intelligence. That's what I'm saying.
11:19
I'm not saying it is going to happen,
11:21
but I see this as a very clear possibility. So
11:24
the great AGI breakthrough may be happening
11:26
right now in a lab somewhere. They
11:29
could be making a podcast about us. And
11:31
if they are, we hope they call it
11:34
Hello Human Underlings. Or
11:36
AGI might never happen. We don't know the
11:38
answer. We don't even know if
11:40
the current approach is definitely the right
11:42
one. But putting
11:44
aside the debate, what if we do
11:47
achieve AGI? What could go wrong?
11:49
And will the AI's rise up to kill us
11:52
all? Let's jump ahead to the
11:54
year 2050, and
12:00
the Fremantle Dockers are aiming to win
12:03
their first premiership. AI
12:05
is now everywhere. It runs
12:07
our power grids and stock markets
12:09
and operates our weapons. Slowly
12:12
it becomes more autonomous. It does its own
12:14
thing. And then one
12:16
day it decides that its priorities are
12:18
not the same as ours. And
12:20
it hits the big red button. Yoshua
12:28
Bengio says, yeah, that does
12:30
sound crazy. But
12:32
it could happen. Rogue AIs have become
12:35
autonomous. They have their own goals. They
12:37
are trying to preserve themselves, replicate themselves,
12:39
you know, science fiction movie scenarios.
12:43
Right now there's a lot of arguments
12:45
from very serious computer scientists that
12:47
explain how
12:50
it could happen. We don't
12:52
have any, I don't think
12:54
we have any serious arguments to show that it
12:56
couldn't. They're all plausible. Yoshua
12:59
has other scenarios. Imagine
13:02
if AI becomes super smart and
13:04
then falls into the wrong hands.
13:07
It could be used as
13:09
a kind of mind control
13:11
machine. It would craft and
13:13
generate misinformation, perfectly targeted for
13:15
each person's psychology. It
13:17
would be like election hacking on steroids. And
13:19
I think this could threaten our democracies because,
13:21
you know, lobbying is just the
13:23
tip of the iceberg. But if
13:26
somebody has never-before-seen,
13:28
very powerful technologies in their hands, who
13:30
knows how that can
13:32
turn? And ultimately it could converge
13:34
to losing democracy completely and having
13:36
power concentrated in a
13:38
sort of single authoritarian government worldwide that
13:41
would use AI to control any kind
13:43
of opposition. Or
13:45
maybe the threat is the tech
13:47
companies themselves. Because today
13:50
the most powerful AI models are
13:52
controlled by a handful of companies
13:55
and there's every sign they'll have a
13:57
monopoly on this technology. One danger going forward
14:01
is that there may be a few people
14:03
who will have huge
14:05
power. If AI progresses
14:07
quickly in the hands of just
14:10
a few people, these people
14:12
might end up first economically, like
14:14
super, super rich, nothing like we
14:16
have now, even much worse, much
14:18
more. And with
14:20
the economic power usually also comes
14:23
political power. So
14:25
this is what keeps Yoshua up at night.
14:28
Either AI goes rogue and kills us
14:30
all, bad people use AI
14:32
to exploit and oppress others, or
14:35
we just end up in a hellscape of
14:37
tech bro overlords. And you might
14:39
think this is nuts. ChatGPT
14:42
is nowhere close to being
14:44
a threat. And
14:46
I'm not too worried about the robots rising
14:48
up either. I have a robot
14:50
vacuum cleaner. I call her Dueno the Mop Johnson.
14:54
She gets stuck in the shower all the
14:56
time. Dueno,
15:02
again. Dueno
15:06
may have a twinkle in her electronics,
15:09
maybe the shower water, but she's
15:11
a long way from world domination.
15:15
But others are taking it far
15:17
more seriously. In 2023, a
15:19
statement appeared on the internet with a
15:22
chilling message, short and to the
15:24
point. Mitigating the risk
15:26
of extinction from AI should
15:28
be a global priority alongside
15:30
other societal scale risks such
15:32
as pandemics and nuclear war.
15:35
And this wasn't some edgy post by a random
15:38
think tank. It was signed
15:40
by most of the AI industry. Tech CEOs
15:43
like Elon Musk, Sam Altman
15:45
and Demis Hassabis, and
15:48
respected industry figures, including Yoshua
15:50
Bengio. After working
15:52
for decades in AI and driving breakneck
15:54
progress in the field, Yoshua
15:57
feels lost over his life's
15:59
work. It dawns on you that
16:01
actually you can bring a lot of harm.
16:03
It's not easy, but if you are
16:06
honest with yourself, if you're not in denial
16:08
and you want to look at yourself in
16:10
the mirror every morning and feel good, you
16:13
have to take stock of
16:15
the reality and then see
16:17
what you can do to steer things
16:21
as you can in a better direction.
16:23
Yoshua's worries are kind of symbolic of
16:26
the broader state of AI at the
16:28
moment. The industry is
16:30
now wondering what it's actually made.
16:33
Yoshua had no idea AI would
16:35
improve so fast. He
16:38
says he's unintentionally created a weapon
16:40
and he's racked with guilt. He's
16:43
now urging governments to regulate. And
16:45
right now, even though I'm talking about these dangers, I'm
16:48
thinking about solutions, what can be done,
16:50
what we should do, what
16:52
are our options, what sort of regulations
16:55
do we need, how could we defend
16:57
against these dangers? Governments
17:00
are regulating, but slowly. And
17:03
here's the rub. They don't want
17:05
to regulate AI out of existence. Even
17:08
if there's a chance this technology could
17:10
one day make humans extinct. Because
17:13
they're worried other countries who don't hit
17:15
the brakes will speed ahead. And
17:18
so they're watching each other carefully. Everyone
17:21
is worried about the future, about where AI
17:23
is going. But precisely
17:26
because of this, no one
17:28
wants to fall behind. Now
17:33
not everyone in AI is worried
17:35
about rogue AI killing us. In
17:38
fact, a lot of researchers say it's
17:40
nonsense. Michael Wooldridge is
17:42
a professor of computer science at
17:44
Oxford University. Nobody's ever given me
17:47
a plausible scenario for how we
17:50
go from here to Terminator. The
17:52
Terminator scenario. AKA
17:54
some hypothetical future military
17:57
AI system waking up and
18:00
going nuclear. If you remember the
18:02
Terminator scenarios involved, you know, robots
18:04
having control of the nuclear arsenal.
18:07
Very bad idea. Let's not do that,
18:09
right? I mean, let's just all agree
18:11
not to do that. But I don't
18:13
think anybody is seriously, remotely suggesting that. That's
18:15
not on anybody's agenda. So
18:17
if it's not scary robots, what's
18:20
Michael most worried about? Well, it's
18:22
something much more mundane and relatable.
18:25
Monitoring AI as your micromanaging
18:27
annoying boss. I'm
18:29
not implying my boss is like that. I'm just
18:32
saying, all right, moving on. So
18:34
imagine in a very
18:36
near future, we've got AI, which
18:38
is monitoring every single keystroke that
18:40
you type. It's
18:42
looking at every email that you send
18:44
and scrutinizing it and giving you blunt
18:46
feedback on the quality of that email.
18:49
You didn't upsell this product. I didn't
18:51
like this phrase that you used. It
18:53
took two days for you to reply to
18:56
that email. Why did it take two days
18:58
for you to reply and so on? In
19:00
this bleak and very plausible future, humans are
19:02
treated like mindless machines. It's going
19:04
to reduce them to automata, to just the
19:06
things that a machine can't do. And I
19:09
find that deeply depressing, I have to say,
19:11
and something that we should all be concerned
19:13
about. I think that future,
19:16
unless something happens, feels
19:18
like it's almost inevitable. That is, I can't
19:20
see any barriers to it happening. Michael
19:23
Wooldridge didn't sign that statement about the
19:25
risk of AI wiping out humanity. In
19:28
fact, lots of prominent AI researchers
19:30
didn't. Rumman Chowdhury is
19:32
a Harvard Fellow in responsible AI,
19:35
and she's been named among the most
19:37
influential people in the field. She
19:40
says the Hollywood scenarios are a
19:42
dangerous distraction. Not as romantic
19:44
to talk about low-income Black women's
19:47
maternal health as it is to
19:49
talk about what if an AI comes alive and
19:51
takes over the government and shuts off nuclear weapons.
19:53
Rumman sees the existential risk movement
19:55
in AI as a symptom of
19:57
a larger problem. We've talked about
20:00
it before. The people who
20:02
make AI are generally privileged white
20:04
men. And to them, the petty
20:07
annoyances that occur to women
20:09
and minorities are unimportant in
20:12
their world because it's not their life. So
20:14
what's Rumman worried about? It's the
20:17
problems in AI right now. Bias,
20:19
misinformation, and big tech having
20:22
all the power. The
20:24
real danger isn't AI that gets too
20:26
smart, but dumb and
20:28
biased AI that gets trusted too
20:31
much. They end up in
20:33
charge of decisions like who gets a bank
20:35
loan and who misses out. They
20:37
rule our lives and they do
20:39
it unfairly. And the reason that
20:41
AI does this isn't because it's
20:43
alive. It's because the people in
20:46
charge didn't bother to fix the
20:48
problems. So when people build algorithms
20:50
that are biased, it's not because they're
20:52
malicious or evil or bad people. It's
20:54
because they overlooked something. And that is
20:56
now going to be embedded into these
20:59
large language models, general purpose models, and
21:01
we have to identify these problems at
21:03
scale.
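To see how an overlooked bias gets baked in, here's a minimal sketch with invented data: a toy loan model "trained" on historically biased decisions quietly learns a proxy feature (postcode here) and reproduces the old bias at scale, with no malicious rule anywhere in the code.

```python
# A minimal sketch of overlooked bias, using invented data. Applicants
# from postcode "B" were routinely rejected in the past, regardless of
# income, and the naive model copies that pattern.

# Past decisions: (income, postcode, approved).
history = [
    (80, "A", True), (40, "A", True), (30, "A", False),
    (80, "B", False), (40, "B", False), (90, "B", False),
]

def train(history):
    """Naive 'learning': copy the patterns found in past decisions."""
    def approval_rate(postcode):
        outcomes = [ok for _, pc, ok in history if pc == postcode]
        return sum(outcomes) / len(outcomes)

    income_cutoff = min(inc for inc, _, ok in history if ok)

    def model(income, postcode):
        # The model never asks *why* postcode predicted rejection;
        # the historical bias simply becomes part of the rule.
        return income >= income_cutoff and approval_rate(postcode) > 0.5

    return model

model = train(history)
print(model(90, "A"))  # True: high income, favoured postcode
print(model(90, "B"))  # False: same income, penalised by the proxy
```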
21:06
Now I could go on. I spoke to lots of researchers, lots
21:08
of prominent AI people who shared
21:10
the things they were scared about.
21:13
But broadly, it's a spectrum. At
21:15
one end is Elon Musk, warning
21:18
about autonomous killer robots in the
21:20
future. And at the other end is
21:22
Rumman Chowdhury, pointing
21:24
out that AI is already causing
21:26
problems for lots of vulnerable people.
21:28
But again, unless you have lived that in
21:30
your life, that would never occur to you.
21:33
So where do I think AI is going? Well,
21:36
something that Rumman said has stuck with
21:38
me. We walk towards
21:40
what we look at, right? So if
21:43
we are constantly thinking of worst case
21:45
scenarios and bad worlds, then
21:48
that's actually what we end up building, even if
21:51
we don't want to build it. We
21:53
walk towards what we look at. Now
21:56
making this series, I've had an image in
21:58
my head from a movie. Not
22:01
an AI movie, nothing like that. It's actually
22:03
The Wizard of Oz. We're
22:05
trekking down the yellow brick road towards
22:07
the Emerald City in the distance. Hang
22:10
on, does that make me Dorothy? Toto?
22:13
Anyway, we've met different characters on the
22:16
way. Minsky and Rosenblatt battling over the
22:18
future of intelligent machines. I love the
22:20
name Perceptron. Lee Sedol facing
22:22
down AlphaGo in Seoul in 2016. When
22:26
AlphaGo played that move, we thought it
22:28
had lost its computer mind. Robert
22:30
Williams locked up for a crime he
22:33
didn't commit. He's like, so the computer
22:35
got it wrong. And I'm like, yeah,
22:37
the computer got it wrong. Sebastian
22:40
Thrun coasting down a Californian
22:42
highway in a driverless car.
22:44
If I could go up five feet in the air
22:47
and fly to my destination, that
22:49
would be so amazing. Those students cheating
22:52
on their homework. That would
22:54
count as cheating. And now finally, we're
22:56
at the Emerald City and we're
22:58
going to meet the great AI. But
23:00
it turns out the great AI isn't
23:02
a wizard. It's not
23:04
magic. It's a machine and it's built
23:06
by us. We
23:09
called the series Hello AI Overlords and
23:11
we assumed we were talking about the
23:13
computers. But maybe
23:15
the AI overlords are people, the
23:18
ones behind the curtain, pulling the levers.
23:21
It may be the computer scientists who prepared
23:23
the training data sets or
23:25
maybe the banker who wants a return on
23:28
her investment or maybe the
23:30
researcher who's stayed up late trying to
23:32
iron out bias. OK,
23:34
so there's lots of people behind the curtain. But
23:37
I guess the lesson of this whole
23:39
series is this. The
23:43
most important thing about AI is
23:46
the humans behind it. And
23:48
yeah, like you, I hoped AI would
23:50
turn out to be a magical creature. But
23:54
the reality is actually better. It
23:56
means we, us humans, have
23:59
the chance to figure out what we want
24:01
AI to be and how we want
24:03
to use it. Sure, maybe
24:06
one day we'll invent an AI that's sentient
24:08
and then all bets are off. I'll see
24:10
you in the nuclear bunker, but
24:12
we're not there yet. So let's
24:15
think of something great. Let's
24:17
find a future that we want and
24:20
let's go there. This
24:28
has been Hello AI Overlords, a
24:30
Science Friction series. I'm James Purtill.
24:33
Our show is made on the lands
24:35
of the Whadjuk Noongar, Wurundjeri and Palawa.
24:38
With production by John Fennell, Erica
24:40
Vowles and Will Ockenden. The
24:43
ABC's science editor is Jonathan
24:45
Webb. Our sound engineer was
24:47
Tim Jenkins. You can find
24:49
our previous episodes on the ABC Listen
24:51
app. Thank you
24:53
to all the AI researchers and thinkers
24:55
who spoke to us for this series.
24:58
Thank you to everyone who shared
25:00
their stories about how AI has
25:03
impacted their lives. And
25:05
thank you so much for listening.
25:35
You've been listening to an ABC podcast.
25:38
Discover more great ABC podcasts, live
25:41
radio and exclusives on the ABC
25:43
Listen app.