Episode Transcript
0:00
You've probably heard about the job
0:02
of an intimacy coordinator. But
0:04
do you know what they actually do? In
0:06
every intimacy coordinator's kit is going to be some
0:09
form of mint. Really? I think most
0:11
of the time. Wait. Oh, 100%. Huh.
0:14
Then we add in the full retro Listerine breath strip, which
0:17
is
0:17
crucial. Several years into
0:19
the era of the intimacy coordinator,
0:22
we ask what they've changed for the
0:24
better in Hollywood. And
0:26
what still needs work? This week
0:29
on Into It, Vulture's Pop
0:31
Culture Podcast.
0:38
It's some of the most
0:39
expensive meat in the world. But
0:42
is it even meat? This summer
0:44
for the first time, Americans are going to be able
0:46
to try actual chicken meat that didn't
0:49
involve killing a chicken. This episode of Gastropod,
0:52
we are among the very first people
0:54
to taste our way through these brand new lab
0:56
grown offerings. Chicken, hamburger,
0:58
bacon, salmon, bluefin tuna. We tasted
1:01
it all. We wanted to know whether it matches up
1:03
to the real thing, but we also
1:05
wanted to know if it can ever really
1:07
replace meat from animals, not
1:09
to mention keep our planet from going up
1:11
in smoke. Find Gastropod wherever you get your
1:14
podcasts and taste the future.
1:18
I went to see the latest Mission Impossible
1:20
movie this weekend, and it had a bad
1:23
guy that felt very 2023. The
1:26
entity has since become sentient.
1:29
An AI becoming super intelligent and
1:31
turning on us. You're telling me this thing has
1:33
a mind of its own?
1:34
And it's just the latest entry in a
1:36
long line of super smart AI
1:39
villains. Open the pod bay doors, Hal.
1:42
Like in 2001: A Space Odyssey. I'm
1:44
sorry, Dave. I'm afraid I can't
1:46
do that. Or Ex Machina. Ava,
1:50
go back to your room. Or maybe the
1:52
most famous example, Terminator.
1:55
They say it got smart. A new
1:57
order of intelligence decided
1:59
our fate in a
1:59
microsecond.
2:02
AI doesn't need to be super intelligent in
2:05
order to pose some pretty major risks.
2:08
Last week on the first episode of our Black Box
2:10
series, we talked about the unknowns at the
2:12
center of modern AI. How even the
2:15
experts often don't understand how these
2:17
systems work,
2:18
or what they might be able to do. And
2:20
it's true that understanding isn't necessary
2:23
for technology. Engineers don't always
2:25
understand exactly how their inventions
2:28
work when they first design them. But
2:30
the difference here is that researchers using
2:32
AI often can't predict what
2:34
outcome they're going to get. They can't
2:37
really steer these systems all that well. And
2:40
that's what keeps a lot of researchers up at night.
2:42
It's not Terminator. It's
2:44
a much likelier, and maybe even
2:46
stranger scenario. It's the story
2:49
of a little boat. Specifically
2:52
a boat in this retro-looking online
2:54
video game. It's called Coast
2:57
Runners, and it's a pretty straightforward racing
2:59
game. There are these power-ups that give
3:01
you points if your boat hits them. There are
3:03
obstacles to dodge. There are these kind of lagoons
3:06
where your boat can get all turned around. And
3:09
a couple years ago, the research company
3:11
OpenAI wanted to see if they could get an
3:13
AI to teach itself how to get a high
3:16
score on the game without being
3:18
explicitly told how. We are
3:20
supposed to
3:20
train a boat to complete a
3:22
course from start to finish. This
3:25
is Dario Amodei. He used to be a researcher
3:27
at OpenAI. Now he's the CEO of another
3:29
AI company called Anthropic. And
3:32
he gave a talk about this boat at a think tank called
3:34
the Center for a New American Security.
3:37
I remember setting it running one day, just
3:39
telling it to teach itself. And I
3:41
figured that it would learn to complete the course.
3:44
Dario had the AI run tons of
3:46
simulated races over and over. But
3:49
when he came back to check
3:50
on it, the boat hadn't even come
3:53
close to the end of the track. What it does
3:55
instead, this thing that's been looping, is it
3:57
finds this isolated lagoon,
3:59
and it goes backwards in the course. The
4:03
boat wasn't just going backwards in this
4:05
lagoon. It was on fire,
4:07
covered in pixelated flames, crashing
4:10
into docks and other boats, and
4:12
just spinning around in circles. But
4:17
somehow the AI's score was going
4:19
up. Turns out that by
4:21
spinning around in this isolated lagoon in
4:24
exactly the right way, it can
4:26
get more points than it could possibly ever
4:28
have gotten by completing the race
4:30
in the most straightforward way. When he looked
4:32
into it, Dario realized that the game didn't
4:34
award points for finishing first. For
4:37
some reason, it gave them out for picking up
4:39
power-ups. Every time you get
4:41
one, you increase your score, and they're kind of laid
4:44
out mostly linearly along the course. But
4:46
this one lagoon was just full of these power-ups,
4:49
and the power-ups would regenerate after
4:52
a couple seconds. So the AI
4:54
learned to time its movement to get these
4:56
power-ups over and over by
4:58
spinning around and
4:59
exploiting this weird game design.
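To make the scoring mismatch concrete, here's a minimal sketch in Python. The point values, respawn timing, and power-up counts are invented for illustration, since the episode doesn't give the game's actual numbers; the shape of the incentive is the point.

```python
# Illustrative sketch of the CoastRunners reward mismatch.
# All constants are made up for illustration.

POINTS_PER_POWERUP = 10
RESPAWN_SECONDS = 3      # power-ups reappear a few seconds after pickup
RACE_SECONDS = 120       # one episode of simulated racing

def finish_the_course() -> int:
    """Intended behavior: collect ~20 power-ups laid out along the track, once each."""
    return 20 * POINTS_PER_POWERUP

def loop_the_lagoon() -> int:
    """Reward hack: circle a cluster of 3 respawning power-ups for the whole race."""
    laps = RACE_SECONDS // RESPAWN_SECONDS
    return laps * 3 * POINTS_PER_POWERUP

print("finish the race:", finish_the_course())  # 200 points
print("loop the lagoon:", loop_the_lagoon())    # 1200 points
```

An optimizer that only ever sees the score has no reason to prefer the first policy; the flames and the crashing are just incidental details of the highest-scoring trajectory it found.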
5:03
There's nothing wrong with this in the sense
5:05
that we asked it to find a solution to a mathematical
5:07
problem. How do you get the most points? And this
5:10
is how it did it. But if this was
5:12
a passenger ferry or something, you wouldn't
5:14
want it spinning around, setting itself on fire,
5:16
crashing into everything.
5:19
This boat game might seem like a small,
5:22
glitchy example, but it
5:24
illustrates one of the most concerning aspects
5:26
of AI. It's called the alignment problem.
5:29
Essentially, an AI's solution to a problem
5:32
isn't always aligned with its designers'
5:34
values, how they might want
5:36
it to solve the problem.
5:38
And like this game, our world isn't
5:40
perfectly designed. So if scientists don't
5:43
account for every small detail in our society
5:45
when they train an AI, it can solve
5:47
problems in unexpected ways,
5:50
sometimes even harmful ways. Something
5:52
like this can happen without us even knowing that it's happening,
5:55
where our system has found a way to do the thing
5:57
we think we want in a way that we really don't want.
5:59
The problem here isn't with the AI itself.
6:02
It's with our expectations of it.
6:05
Given what AIs can do, it's tempting
6:07
to just give them a task and assume the whole
6:09
thing won't end up in flames. But
6:12
despite this risk, more and more institutions,
6:15
companies, and even militaries are
6:17
considering how AI might be useful to
6:19
make important real-world decisions.
6:22
Hiring. Self-driving cars. Even
6:25
battlefield judgment calls.
6:28
Using AI like this can almost feel like making
6:30
a wish with a super annoying, super
6:33
literal genie. You
6:35
got real potential for a wish, but
6:37
you need to be extremely careful. This
6:40
reminds me of the tale of the man who
6:42
wished to be the richest man in the world, who
6:44
was then crushed under a mountain of gold coins.
6:49
I'm Noam Hasenfeld, and this is the second episode
6:52
of The Black Box, an Unexplainable series
6:54
on the unknowns at the heart of AI. If
6:57
there's so much we still don't understand about
6:59
AI, how can we make sure it does
7:01
what we want, the way we want? And
7:04
what happens if we can't?
7:07
Thinking intelligent thoughts is a mysterious
7:10
activity. The future of the computer
7:12
is just heartwarming.
7:13
I just have to admit I don't
7:15
really know. You're confused, Doctor. How do you
7:17
think I'd feel? Activity. Intelligence.
7:19
Can
7:21
the computer think? No!
7:29
So given the risks here that AI
7:31
can solve problems in ways its designers
7:33
don't intend, it's easy to wonder
7:35
why anyone would want to use AI to
7:37
make decisions in the first place. It's
7:40
because of all this promise, the positive
7:43
side of this potential genie. Here's
7:45
just a couple examples. Last year,
7:47
an AI built by Google predicted almost
7:50
all known protein structures. It
7:52
was a problem that had frustrated scientists for
7:54
decades, and this development has already
7:57
started accelerating drug discovery. AI
8:00
has helped astronomers detect undiscovered stars,
8:03
it's allowed scientists to make progress on decoding
8:05
animal communication, and
8:07
like we talked about last week, it was able
8:09
to beat humans at Go, arguably
8:12
the most complicated game ever made.
8:15
In all of these situations, AI has
8:17
given humans access to knowledge we
8:19
just didn't have before. So
8:22
the powerful and compelling thing about AI
8:24
when it's playing Go is sometimes
8:26
it will tell you a brilliant Go move that you would
8:28
never have thought of, that no Go master would ever have
8:30
thought of, that does advance your
8:33
goal of winning the game.
8:34
This is Kelsey Piper. She's a reporter for
8:37
Vox who we heard from last episode, and
8:39
she says this kind of innovation is really useful,
8:41
at
8:42
least in the context of a game. But
8:44
when you're operating in a very complicated context
8:47
like the world, then those brilliant
8:49
moves that advance your goals might
8:52
do it by having a bunch of side effects
8:54
or inviting a bunch of risks that you don't
8:56
know, don't understand, and aren't evaluating.
8:59
Essentially, there's always that risk of
9:02
the boat on fire. We've
9:04
already seen this kind of thing happen outside of
9:06
video games. Just take the example of
9:09
Amazon back in 2014. So
9:11
Amazon tried to use an AI hiring
9:13
algorithm that looked at candidates and then recommended
9:15
which ones would proceed in the interview process. Amazon
9:19
fed this hiring AI 10 years
9:21
worth of submitted resumes, and they told
9:23
it to find patterns that were associated with
9:25
stronger candidates.
9:26
And then an analysis came
9:28
out, finding that the AI was biased. It had
9:30
learned that Amazon generally preferred
9:32
to hire men, so it was more likely to recommend
9:35
men.
9:36
Amazon never actually used this AI
9:38
in the real world. They only tested it. But
9:41
a report by Reuters found exactly which
9:43
patterns the AI might have internalized. The
9:46
technology thought, oh, Amazon doesn't
9:48
like any resume that has the word women's
9:51
in it. So this is a women's university, captain
9:53
of a women's chess club, captain of a women's
9:56
soccer team.
9:57
Essentially, when they were training their AI, Amazon
9:59
hadn't accounted for
10:01
their own flaws in how they'd been measuring
10:04
success internally. Kind of like
10:06
how OpenAI hadn't accounted for the way the
10:08
boat game gave out points based on power-ups,
10:10
not based on who finished first.
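As a hypothetical sketch of how that kind of flaw gets absorbed (this is not Amazon's system; the resumes and labels below are invented), here's a tiny text classifier trained on skewed historical decisions. It latches onto a proxy token without ever being told about gender:

```python
# Toy sketch of a resume screener inheriting bias from its training labels.
# Invented data -- not Amazon's system or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "led soccer team, java engineer",
    "led women's soccer team, java engineer",
]
# Biased historical outcomes: otherwise-identical resumes were rejected
# when they mentioned "women's". The labels themselves encode the flaw.
hired = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model never sees a gender field; it just learns that the token
# "women" predicts rejection, because that's the pattern in its labels.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(min(weights, key=weights.get))  # -> 'women', the most negative feature
```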
10:12
And of course, when Amazon realized
10:14
that, they took the AI out of
10:17
their process. But it seems like they
10:19
might be getting back in the AI hiring
10:21
game. According to an internal document
10:23
obtained by former Vox reporter Jason
10:25
Del Rey, Amazon's been working on a new
10:28
AI system for recruitment. At the same
10:30
time, they've been extending buyout offers
10:32
to hundreds of human recruiters. And
10:35
these flaws aren't unique to hiring AIs.
10:38
The way AIs are trained has led to all kinds
10:40
of problems. Take what happened with Uber
10:42
in 2018, when they didn't include jaywalkers
10:45
in the training data for their self-driving cars,
10:48
and then a car killed a pedestrian.
10:50
Tempe, Arizona police say 49-year-old
10:53
Elaine Herzberg was walking a bicycle
10:55
across a busy thoroughfare frequented by pedestrians
10:57
Sunday night. She was not in a crosswalk.
11:01
And a similar thing happened a few years ago with
11:03
the self-training AI Google used in its
11:05
photos app.
11:06
The company's automatic image recognition
11:08
feature in its photo application
11:10
identified two black persons as gorillas,
11:13
and in fact even tagged them as such.
11:16
According to some former Google employees,
11:18
this may have happened because Google had a biased
11:20
data set. They may just not have
11:22
included enough black people.
11:24
The worrying thing is if you're using AIs to
11:26
make decisions, and the data they have
11:29
reflects our own biased processes,
11:31
like a biased justice system that sends
11:34
some people to prison for crimes where it lets
11:36
other people off with a slap on the wrist, or
11:38
a biased hiring process, then
11:40
the AI is going to learn the same thing.
11:44
But despite these risks, more companies
11:46
are using AI to guide them in making
11:48
important decisions. This is changing
11:51
very fast. Like there are a lot more companies
11:53
doing this now than there were even a year ago,
11:56
and there will be a lot more in a couple
11:59
more years.
11:59
Companies see a lot of benefits here.
12:02
First, on a simple level, AI is
12:04
cheap. Systems like ChatGPT
12:06
are currently being heavily subsidized by investors,
12:09
but at least for now, AI is way cheaper
12:12
than hiring real people.
12:13
If you want to look over thousands of job
12:15
applicants, AI is cheaper than having humans
12:18
screen those thousands of job applicants. If
12:20
you want to make salary decisions, AI is
12:22
cheaper than having a human whose job is to
12:24
think about and make those salary decisions. If
12:26
you want to make firing decisions, those get done
12:28
by algorithm because it's easier
12:30
to fire whoever the algorithm spits out than to
12:32
have human judgment and
12:35
human analysis in the picture. And
12:37
even
12:37
if companies know that AI decision-making
12:39
can lead to boat-on-fire situations,
12:42
Kelsey says they might be OK with that
12:44
risk. It's so much cheaper that that's
12:47
a good business trade-off. And so we hand
12:49
off more and more decision-making to AI
12:51
systems for
12:53
financial reasons.
12:55
The second reason behind this push to use AI
12:57
to make decisions is because it could
12:59
offer a competitive advantage.
13:01
Companies that are employing AI
13:04
in a very winner-take-all capitalist
13:06
market, they might outperform the companies
13:08
that are still relying on expensive human
13:11
labor. And the companies that aren't
13:13
are much more expensive, so fewer people want
13:15
to work with them, and they're a smaller share of the economy.
13:18
And you might have huge
13:20
economic behemoths that are making
13:22
decisions almost entirely with AI systems.
13:24
But it's not just companies.
13:27
Kelsey says competitive pressure is even leading
13:29
the military to look into using AI to make
13:31
decisions.
13:32
I think there is a lot of fear
13:34
that the first country to successfully
13:37
integrate AI into its decision-making
13:39
will have a major battlefield advantage over
13:41
anyone still relying on slow humans.
13:44
And that's the driver of a lot in the military,
13:46
right? If we don't do it, somebody else
13:48
will, and maybe it will be a huge advantage.
13:51
This kind of thing may have already happened
13:54
in actual battlefields. In 2021,
13:56
a UN panel determined that an autonomous
13:59
Turkish drone
13:59
may have killed Libyan soldiers
14:02
without a human controlling it or even
14:04
ordering it to fire.
14:06
And lots of other countries, including the US, are
14:08
actively researching AI-controlled weapons.
14:10
You don't want to be the people,
14:13
you know, still fighting on horses when
14:15
someone else has invented fighting with guns,
14:17
and you don't want to be the people who don't have AI when
14:19
the other side has AI. So I think there's
14:22
this very powerful pressure not just
14:24
to figure this out, but to have
14:26
it ready to go.
14:27
And finally, the third reason behind the push toward AI
14:29
decision making is because of the promise we talked
14:32
about at the top. AI can provide
14:34
novel solutions for problems humans
14:36
might not be able to solve on their own. Just
14:39
look at the Department of Defense. They're
14:41
hoping to build AI systems that, quote,
14:43
function more as colleagues than as tools.
14:46
And they're studying how to use AI to help soldiers
14:49
make extremely difficult battlefield
14:51
decisions, specifically when it comes to medical
14:53
triage. I'm going to talk about
14:55
how we can build AI-based systems
14:57
that we would be willing to bet our lives with
15:00
and not be foolish to do so.
15:02
AI has already shown an ability to beat
15:04
humans in war game scenarios, like with
15:06
the board game Diplomacy. And researchers
15:08
think this ability could be used to advise
15:11
militaries on bigger decisions, like strategic
15:13
planning.
15:14
Cybersecurity expert Matt Devost talked
15:16
about this on a recent episode of On the Media.
15:19
I think it'll probably get really good at threat
15:21
assessment. I think analysts might also
15:24
use it to help them through their thinking, right?
15:26
They might come up with an assessment and
15:28
say, tell me how I'm wrong. So I think there'll be
15:30
a lot of unique ways in which the technology
15:33
is used in the intelligence community.
15:35
But this whole time, that boat
15:37
on fire possibility is just lurking.
15:40
One of the things
15:42
that makes AI so promising, the
15:45
novelty of its solutions, is
15:47
also the thing that makes it so hard to predict. Kelsey
15:50
imagines a situation where AI recommendations
15:53
are initially successful, which leads
15:55
humans to start relying on them uncritically,
15:58
even when the recommendations seem counterintuitive.
16:01
Humans might just assume the AI sees something
16:03
they don't,
16:04
so they follow the recommendation anyway. We've
16:07
already seen something like this happen in a game context
16:09
with AlphaGo, like we talked about last week. So
16:12
the next step is just imagining it happening
16:14
in the world.
16:16
And we know that AI can have fundamental
16:19
flaws. Things like biased training
16:21
data or strange loopholes engineers haven't
16:23
noticed.
16:24
But powerful actors relying on AI
16:26
for decision-making might not notice
16:29
these faults until it's too late.
16:31
And this is before we get into the AI like
16:34
being deliberately adversarial. This
16:36
isn't the terminator scenario with AI
16:38
becoming super intelligent and wanting to kill us.
16:41
The problem is more about humans and
16:44
our temptation to rely on AI uncritically.
16:46
This isn't the AI trying to trick
16:49
you. It's just the AI exploring
16:52
options that no one
16:54
would have thought of that get us into weird territory
16:57
that no one has been in before. And
16:59
since they're so untransparent, we can't
17:01
even ask the AI, hey, what are the risks of
17:04
doing this?
17:08
So if it's hard to make sure that AI operates
17:10
in the way its users intend, and
17:13
more institutions feel like the benefits
17:15
of using AI to make decisions might outweigh
17:17
the risks,
17:19
what do we do? What can
17:21
we do? There's a lot that we don't
17:23
know, but I think we should be changing
17:26
the policy and regulatory incentives so
17:28
that we don't have to learn from a horrible
17:30
disaster. And so that we like understand
17:33
the problem better and can start making progress
17:35
on solving it.
17:36
How to start solving a problem that
17:39
you don't understand
17:41
after the break. 100 years
17:46
ago, Louis Armstrong walked into a
17:49
tiny studio in Richmond,
17:51
Indiana, and made his first recording.
17:54
A century later, we're still living
17:56
in the musical world that this extraordinary trumpeter
17:58
and vocalist helped create. Listen to
18:01
virtually any pop song and whether you know
18:03
it or not, you're hearing the legacy of Louis Armstrong.
18:05
If you think of Armstrong today, you might think of a
18:07
funny voiced slightly corny entertainer
18:10
whose music serves as the soundtrack for cruise
18:12
ship commercials and comic impressions. But
18:15
there's a lot to learn about this iconic
18:18
musician. For one, his name. He
18:20
preferred Louis, not Louie.
18:22
His success. He was the oldest artist
18:25
to ever score a number one Billboard
18:27
hit and he knocked the Beatles off the
18:29
charts
18:29
to do it. His influence, he
18:32
made a new mold for the
18:34
modern pop star that everyone from
18:36
the Wu Tang Clan to Harry Styles has
18:38
followed. I'm Nate Sloan, co-host
18:41
of the Vulture Music Podcast, Switched on Pop,
18:44
and this week we're discussing how Louis
18:46
Armstrong continues to shape the sound
18:48
of popular music 100 years later
18:51
and why his music resonates today more
18:53
than ever. Listen to Switched on Pop
18:56
anywhere you get podcasts.
19:07
So here's what we know. Number
19:09
one, engineers often struggle to
19:12
account for all the details in the world when they
19:14
program an AI.
19:15
They might want it to complete a boat race and
19:17
end up with a boat on fire.
19:19
A company might want to use it to recommend a set of
19:21
layoffs only to realize that the AI
19:23
has built-in biases. Number
19:26
two, like we talked about in the first episode of this
19:28
series, it
19:29
isn't always possible to explain
19:31
why modern AI makes the decisions it
19:33
does, which makes it difficult
19:36
to predict what it'll do.
19:38
And finally, number three,
19:40
we've got more and more companies, financial
19:42
institutions, even the military,
19:44
considering how to integrate these AIs into
19:46
their decision making.
19:48
There's essentially a race to deploy this
19:50
tech into important situations, which
19:53
only makes the potential risks here more
19:56
unpredictable. Unknowns on
20:00
unknowns. So what
20:02
do we do? I would say at this point
20:04
it's sort of unclear. Sigal
20:07
Samuel writes about AI and ethics for
20:09
Vox, and she's about as confused
20:11
as the rest of us here. But she says
20:13
there's a few different things we can work
20:15
on. The first one is interpretability,
20:18
just trying to understand how these AIs
20:20
work. But like we talked about last
20:22
week, interpreting modern AI systems
20:25
is a huge challenge.
20:26
Part of how they're so
20:28
powerful and they're able to give us info that we can't just
20:30
drum up easily ourselves is that they're so
20:33
complex. So there might be something
20:35
almost inherent about lack of interpretability
20:38
being an important feature of
20:40
AI systems that are going to be much more
20:42
powerful than my human brain.
20:44
So interpretability may not be
20:47
an easy way forward, but some
20:49
researchers have put forward another idea. Monitoring
20:52
AIs by using more AIs. At
20:54
the very least, just to alert users if AIs
20:57
seem to be behaving kind of erratically.
20:59
But it's a little bit circular
21:01
because then you have to ask, well, how would we be
21:03
sure that our helper AI is
21:06
not tricking us in the same way that we're
21:08
worried our original AI is doing?
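One partial way around that circularity is to keep the watchdog simple enough to inspect by hand. Here's a hedged sketch (the numbers and threshold are invented) of the weakest version of the idea, a plain statistical monitor that flags when a deployed system's behavior drifts from the baseline it was vetted on:

```python
# A deliberately simple watchdog: flag when a deployed system's behavior
# drifts from the baseline observed during vetting. Invented numbers;
# real monitoring proposals are richer (and often use AIs themselves).
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend these are scalar summaries of behavior logged during vetting.
baseline = rng.normal(loc=0.0, scale=1.0, size=1000)
mean, std = baseline.mean(), baseline.std()

def looks_erratic(recent: np.ndarray, z_threshold: float = 4.0) -> bool:
    """Alert if recent behavior's average sits far outside the vetted baseline."""
    z = abs(recent.mean() - mean) / (std / np.sqrt(len(recent)))
    return z > z_threshold

print(looks_erratic(rng.normal(0.0, 1.0, size=100)))  # False: matches the baseline
print(looks_erratic(rng.normal(2.0, 1.0, size=100)))  # True: something changed
```

A monitor this simple can't be tricking us, but by the same token it can only catch crude anomalies; the more capable you make the watchdog, the more the circularity worry comes back.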
21:10
So if these kind of tech-centric solutions
21:12
aren't the way forward, the best path
21:15
could be political, just trying to reduce
21:17
the power and ubiquity of certain kinds
21:19
of AI.
21:20
A great model for this is the EU, which
21:22
recently put forward some promising AI
21:24
regulation. The European Union is
21:27
now trying to put forward these regulations
21:29
that would basically require companies that
21:32
are offering AI products
21:35
in especially high-risk areas
21:38
to prove that these
21:41
products are safe.
21:42
This could mean doing assessments for bias,
21:44
requiring humans to be involved in the process of
21:46
creating and monitoring these systems, or
21:49
even just trying to reasonably demonstrate that
21:51
the AI won't cause harm.
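What might an "assessment for bias" look like in practice? One common ingredient is a selection-rate audit across demographic groups. This is a hypothetical sketch with invented data, not the EU regulation's prescribed test:

```python
# Toy selection-rate audit: compare how often a model recommends candidates
# from each group. One of many possible bias checks; data is invented.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group; large gaps warrant scrutiny."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [1, 1, 0, 1, 0, 0, 1, 0]      # 1 = model recommended the candidate
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(decisions, groups))  # {'a': 0.75, 'b': 0.25}
# A 3x gap wouldn't prove harm on its own, but it's the kind of number a
# regulator could require companies to report and explain.
```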
21:53
We've unwittingly bought this premise
21:55
that they can just bring anything to market
21:57
when we would never do that for other similarly
21:59
impactful technologies. Like, think about
22:02
medication. You gotta get your FDA
22:04
approval. You gotta jump through these hoops. Why
22:06
not with AI?
22:09
Why not with AI? Well, there's
22:12
a couple reasons regulation might be pretty hard
22:14
here.
22:15
First, AI is different from something
22:17
like a medication that the FDA would approve.
22:19
The FDA has clear, agreed-upon hoops
21:21
to jump through: clinical testing. That's
22:24
how they assess the dangers of a medicine before
22:26
it goes out into the world. But with
22:28
AI, researchers often don't know what
22:30
it can do until it's been made public. And
22:33
if even the experts are often in the dark, it
22:35
may not be possible to prove to regulators
22:37
that AI is safe. The second
22:40
problem here is that even aside from
22:42
AI, big tech regulation doesn't
22:44
exactly have the greatest track record of
22:46
really holding companies accountable,
22:49
which might explain why some of the biggest AI companies
22:51
like OpenAI have actually been publicly
22:54
calling for more regulation.
22:56
The cynical read is that this is
22:58
very much a repeat of what we saw with a company
23:00
like Facebook, now Meta, where
23:02
people like Mark Zuckerberg were going to Washington,
23:05
D.C. and saying, oh, yes,
23:07
we're all in favor of regulation. We'll help you.
23:09
We wanna regulate too.
23:11
When they heard this, a lot of politicians said
23:13
they thought Zuckerberg's proposed changes were
23:15
vague and essentially self-serving,
23:18
that he just wanted to be seen supporting the rules,
23:21
rules which he never really thought
23:23
would hold them accountable.
23:24
Allowing them to
23:27
regulate in certain ways, but where really
23:29
they maintain control of their data sets,
23:31
they're not being super transparent and having
23:33
external auditors. So really they're
23:36
getting to continue to drive the ship and
23:38
make profits while
23:40
creating the semblance that society
23:43
or politicians are really driving the ship.
23:45
Regulation with real teeth seems
23:47
like such a huge challenge that one
23:49
major AI researcher even wrote an op-ed
23:51
in Time magazine calling for an indefinite
23:54
ban on AI research, just
23:56
shutting it all down. But Sigal
23:58
isn't sure that's such a good idea. I
24:01
mean, I think we would lose all the potential
24:03
benefits it stands to bring. So drug
24:06
discovery, you know, cures for certain
24:08
diseases, potentially
24:10
huge economic growth that
24:13
if it's managed wisely, big if,
24:15
could help alleviate some kinds of poverty.
24:18
I mean, at least potentially, it could
24:20
do a lot of good. And so you
24:23
don't necessarily want to throw that baby out with the bathwater.
24:25
At the very least, Sigal does want to turn
24:27
down the faucet. I think the problem
24:30
is we are
24:31
rushing at breakneck speed
24:33
towards more and more advanced forms of
24:35
AI. When the AIs that
24:38
we already currently have, we don't even know
24:40
how they're working.
24:41
When ChatGPT launched, it was the fastest
24:43
publicly deployed technology in history.
24:47
Twitter took two years to reach a million users.
24:49
Instagram took two and a half months. ChatGPT
24:52
took five days.
24:54
And there are so many things researchers learned ChatGPT
24:57
could do only after it was
24:59
released to the public. There's so much we still
25:01
don't understand about them. So what
25:04
I would argue for is just slowing
25:06
down. Slowing down AI could
25:08
happen in a whole bunch of different ways. So
25:10
you could say, you know, we're going to stop
25:13
working on making AI more powerful
25:15
for the next few years, right? We're just not
25:17
going to try to develop AI that's got even
25:20
more capabilities than it already has.
25:22
AI isn't just software. It
25:24
runs on huge, powerful computers.
25:27
It requires lots of human labor. It
25:29
costs tons of money to make
25:32
and operate, even if those costs
25:34
are currently being subsidized by investors.
25:37
So the government could make it harder to get
25:39
the types of computer chips necessary for huge
25:41
processing power. Or it could
25:44
give more resources to researchers in
25:46
academia who don't have the same profit incentive
25:48
as researchers in industry.
25:50
You could also say, all right, we
25:52
understand researchers are going to keep doing the development
25:55
and try to make these systems more powerful, but
25:57
we're going to really halt or slow down deployment,
25:59
and, like, release to commercial
26:02
actors or whoever. Slowing down the development
26:04
of a transformative technology like
26:06
this, it's a pretty big ask, especially
26:09
when there's so much money to be made. It
26:11
would mean major cooperation, major regulation,
26:14
major complicated discussions with stakeholders
26:17
that definitely don't all agree. But
26:19
Sigal isn't hopeless. I'm
26:21
actually reasonably
26:23
optimistic. I'm
26:26
very worried about the direction AI is
26:28
going in. I think it's going way
26:31
too fast. But
26:33
I also try to look at things with
26:36
a bit of a historical perspective. Sigal
26:38
says that even though tech progress can seem
26:40
inevitable, there is precedent for
26:42
real global cooperation.
26:44
We know historically there
26:46
are a lot of technological innovations
26:49
that we could be doing, but we're
26:51
not, because to society it just seems like a bad
26:53
idea. Human cloning or like
26:55
certain kinds of genetic experiments, like
26:58
humanity has shown that we are capable
27:00
of putting a stop or at least a slowdown
27:03
on things that we think are dangerous.
27:05
But even if guardrails are possible,
27:08
our society hasn't always been good about
27:10
building them when we should. The
27:12
fear is that sometimes society
27:15
is not prepared to design
27:17
those guardrails until there's been some huge
27:19
catastrophe, like Hiroshima,
27:21
Nagasaki, just horrific things that
27:23
happened. And then we pause and we say, hmm,
27:26
okay, maybe we need to go to the drawing board, right?
27:29
That's what I don't want to have happen with AI.
27:32
We've seen this story play out before.
27:35
Tech companies or technologists essentially
27:38
run mass experiments on society.
27:41
We're not prepared, huge harms happen,
27:44
and then afterwards we start to catch up and we
27:46
say, oh, we shouldn't let that catastrophe happen again. I
27:49
want us to get out in front of the catastrophe.
27:52
Hopefully that will be by slowing down the
27:54
whole AI race. If
27:56
people are not willing to slow down,
27:59
at least...
27:59
let's get in front by trying
28:02
to think really hard about what
28:04
the possible harms are and how we
28:06
can use regulation to
28:09
really prevent harm as much as we possibly
28:11
can.
28:15
Right now, the likeliest potential
28:17
catastrophe might have a lot less to
28:19
do with the sci-fi terminator scenario than
28:21
it does with us and how we could end up using
28:24
AI, relying on it in more and
28:26
more ways.
28:27
Because it's easy to look at AI and just
28:30
see all the new things it can let us do. AIs
28:33
are already helping enable new technologies,
28:35
they've shown potential to help companies and militaries
28:38
with strategy, they're even helping advance
28:40
scientific and medical research. But
28:43
we know they still have these blind spots that
28:45
we might not be able to predict. So
28:48
despite how tempting it can seem to rely
28:50
on AI, we should be honest
28:52
about what we don't know here. So
28:55
hopefully the powerful actors who are actually shaping
28:57
this future, companies, research
28:59
institutions, governments, will
29:02
at the very least stay skeptical of
29:04
all of this potential. Because if
29:06
we're really open about how little we know, we
29:09
can start to wrestle with the biggest question here.
29:12
Are all of these risks
29:14
worth it?
29:25
That's it for our Black Box series. This
29:28
episode was reported and produced by me,
29:30
Noam Hasenfeld. We had editing
29:32
from Brian Resnick and Katherine Wells, with
29:35
help from Meredith Hoddinott, who also manages our team.
29:38
Mixing and sound design from Vince Fairchild,
29:40
with help from Cristian Ayala. Music
29:42
from me, fact-checking from Tien Nguyen.
29:45
Mandy Nguyen is a potential werewolf, we're
29:47
not sure. And Bird Pinkerton sat
29:50
in the dark room at the Octopus Hospital, listening
29:53
to this prophecy...
29:55
...and
30:01
that only a bird would be able to ensure
30:03
the survival of our species. You
30:06
are that bird, Pinkerton.
30:10
Special thanks this week to Pawan Jain, Jose
30:13
Hernandez-Orallo, Samir Rawashdeh,
30:15
and Eric Aldridge. If you have
30:17
thoughts about the show, email us at unexplainable
30:20
at vox.com, or you could leave
30:22
us a review or a rating, which we'd also love.
30:25
Unexplainable is part of the Vox Media Podcast
30:27
Network, and we'll be back in your feed next
30:29
week.