Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:01
TED Audio Collective. Hi,
0:07
Fixable listeners. This week we wanted to
0:09
share with you an exciting episode from
0:11
another TED Audio Collective podcast, The
0:14
TED AI Show. This show guides
0:16
you through the mystifying world of artificial
0:18
intelligence. And this episode features
0:20
former OpenAI board member Helen Toner,
0:22
who gives us a fresh perspective
0:24
on how AI policy should be
0:26
shaped by leaders in the industry
0:28
and in the government. Enjoy
0:31
a look behind the curtain of the company that
0:33
gave us ChatGPT. Hey, Bilawal
0:35
here. This episode is a
0:37
bit different. Today I'm
0:40
interviewing Helen Toner, a researcher who
0:42
works on AI regulation. She's
0:44
also a former board member at
0:46
OpenAI. In my interview with
0:48
Helen, she reveals for the first time what really
0:51
went down at OpenAI late
0:53
last year when the CEO Sam Altman
0:55
was fired. And she makes some
0:57
pretty serious criticisms of him. We've reached out
0:59
to Sam for comments, and if he responds,
1:01
we'll include that update at the end of
1:03
the episode. But first, let's
1:05
get to the show. I'm
1:11
Bilawal Sidhu, and this is The TED AI Show, where
1:13
we figure out how to live and thrive
1:15
in a world where AI is changing everything.
1:24
True trailblazers know that innovation doesn't
1:26
come from meeting expectations. So
1:29
not only does the BMW 7 Series
1:31
exceed expectations, it transcends them. Shaped
1:34
by the visionaries of the future, the
1:36
BMW 7 Series and available all-electric i7
1:38
is uncharted luxury. From
1:41
the rear executive lounge hosting an available
1:43
31-inch theater screen and 4D surround sound
1:45
to real-time highway and parking assistance, the
1:48
BMW 7 Series has changed the standards of
1:50
luxury with relentless innovation, made
1:53
for those who appreciate detail by those
1:55
who are obsessed with it. Learn more about
1:57
the innovative BMW 7 Series, available
1:59
as a 100% electric i7, at bmwusa.com. Fixable
2:07
is brought to you by Progressive. Most
2:10
of you aren't just listening right
2:12
now. You're driving, cleaning, even exercising.
2:15
But what if you could be saving money by
2:17
switching to Progressive? Drivers who save by
2:19
switching save nearly $750 on average and
2:23
auto customers qualify for an
2:25
average of seven discounts. Multitask
2:27
right now. Quote today at
2:29
progressive.com. Progressive Casualty Insurance Company
2:31
and affiliates. National average 12
2:34
month savings of $744 by
2:37
new customers surveyed who saved with Progressive between
2:39
June 2022 and May 2023. Potential
2:44
savings will vary. Discounts
2:46
not available in all states and
2:48
situations. Do
2:51
you wonder how successful businesses
2:54
scale, make decisions and navigate
2:56
change? I wanna suggest
2:58
Harvard Business Review's podcast, HBR
3:00
On Strategy. Every
3:03
week, HBR editors hand select
3:05
the best strategy case studies
3:07
and conversations from across HBR's
3:09
podcasts, videos, and events, all
3:12
in one place. Search for HBR
3:14
On Strategy for free wherever you get
3:16
your podcasts. New episodes drop
3:19
every Wednesday. The
3:24
OpenAI saga is still unfolding. So let's
3:26
get up to speed. In
3:28
case you missed it, on a Friday in November, 2023,
3:31
the board of directors at OpenAI fired
3:34
Sam Altman. This ouster
3:36
remained a top news item over that
3:38
weekend with the board saying that he
3:40
hadn't been quote, consistently candid in his
3:42
communications unquote. The Monday after
3:44
Microsoft announced that they had hired Sam to
3:46
head up their AI department. Many
3:49
OpenAI employees rallied behind Sam and threatened
3:51
to join him. Meanwhile,
3:53
OpenAI announced an interim CEO
3:56
and then a day later, plot twist,
3:58
Sam was rehired at OpenAI. Several
4:01
of the board members were removed or resigned
4:03
and replaced. Since then,
4:05
there's been a steady fallout. On
4:08
May 15th, 2024, just
4:10
last week as of recording this episode, OpenAI's
4:13
chief scientist, Ilya
4:15
Sutskever, formally resigned. Not
4:18
only was Ilya a member of the board that
4:20
fired Sam, he was also part of the superalignment
4:22
team, which focuses on mitigating
4:24
the long-term risks of AI. With
4:27
the departure of another executive, Jan Leike,
4:29
many of the original safety conscious folks
4:32
in leadership positions have either departed OpenAI
4:34
or moved on to other teams. So
4:38
what's going on here? Well,
4:40
OpenAI started as a nonprofit in
4:43
2015, self-described as
4:45
an artificial intelligence research company.
4:47
They had one mission, to create AI for
4:50
the good of humanity. They wanted
4:52
to approach AI responsibly, to study the
4:54
risks up close, and to figure out
4:56
how to minimize them. This
4:59
was going to be the company that showed us
5:01
AI done right. Fast
5:03
forward to November 17, 2023, the
5:05
day Sam was fired, OpenAI
5:08
looked a bit different. They'd
5:11
released DALL-E, and ChatGPT was taking
5:13
the world by storm. With
5:15
hefty investments from Microsoft, it now seemed
5:17
that OpenAI was in something of a
5:19
tech arms race with Google. The
5:22
release of ChatGPT prompted Google to
5:24
scramble and release their own chat bot,
5:26
Bard. Over time,
5:28
OpenAI became closed AI. Starting
5:31
in 2020 with the release of GPT-3,
5:34
OpenAI stopped sharing their code. And
5:37
I'm not saying that was a mistake. There are
5:39
good reasons for keeping your code private. But
5:41
OpenAI somehow changed, drifting
5:44
away from a mission-minded nonprofit
5:46
with altruistic goals to a
5:48
run-of-the-mill tech company shipping new
5:50
products at an astronomical pace. This
5:52
trajectory shows you just how powerful
5:55
economic incentives can be. There's
5:57
a lot of money to be made in AI right now.
6:00
But it's also crucial that profit isn't
6:02
the only factor driving decision making. Artificial
6:06
General Intelligence, or AGI, has the
6:08
potential to be very, very disruptive.
6:11
And that's where Helen Toner comes in. Less
6:15
than two weeks after OpenAI fired and
6:17
rehired Sam Altman, Helen
6:20
Toner resigned from the board. She
6:22
was one of the board members who had voted to remove him,
6:25
and at the time, she couldn't say much. There
6:27
was an internal investigation still ongoing, and she
6:30
was advised to keep mum. And
6:32
oh man, she got so much flack for all
6:35
of this. Looking at the
6:37
news coverage and the tweets, I got
6:39
the impression she was this techno-pessimist who
6:41
was standing in the way of progress,
6:43
or a kind of maniacal power seeker
6:46
using safety policy as her cudgel. But
6:49
then, I met Helen at this year's TED
6:51
conference, and I got to hear her side
6:53
of the story. And it made
6:55
me think a lot about the difference between governance and
6:58
regulation. To me, the
7:00
OpenAI saga is all about AI board
7:02
governance, and incentives being misaligned
7:05
among some really smart people. It
7:07
also shows us why trusting tech
7:09
companies to govern themselves may not
7:11
always go beautifully, which
7:14
is why we need external rules and regulations.
7:17
It's a balance. Helen's
7:20
been thinking and writing about AI policy for
7:22
about seven years. She's
7:25
the director of strategy at CSET, the
7:27
Center for Security and Emerging Technology at
7:29
Georgetown, where she works with policymakers in
7:31
DC on all sorts of AI issues.
7:35
Welcome to the show. Hey, good to be here. So,
7:38
Helen, a few weeks back at TED in Vancouver,
7:40
I got the short version of what happened at
7:42
OpenAI last year. I'm wondering,
7:44
can you give us the long version? As
7:47
a quick refresher on sort of the context here,
7:49
the OpenAI board was not a normal board. It's
7:51
not a normal company. The
7:54
board is a nonprofit board that was
7:56
set up explicitly for the purpose of
7:58
making sure that the company's public
8:00
good mission was primary, was coming first
8:02
over profits, investor interests and other things.
8:05
But for years, Sam had
8:07
made it really difficult for the board
8:09
to actually do that job by withholding
8:12
information, misrepresenting things that were
8:14
happening at the company, in some
8:16
cases outright lying to the board. At this
8:18
point, everyone always says, like what? Give me some examples.
8:21
And I can't share all the examples.
8:23
But to give a sense of the kind of
8:25
thing that I'm talking about, it's things like when
8:28
ChatGPT came out in November 2022, the
8:31
board was not informed in advance about that.
8:33
We learned about ChatGPT on Twitter. Sam
8:36
didn't inform the board that he owned
8:38
the OpenAI Startup Fund, even
8:41
though he constantly was claiming to
8:43
be an independent board member with
8:45
no financial interest in the company. On
8:48
multiple occasions, he gave us inaccurate
8:50
information about the small number of formal
8:52
safety processes that the company did have
8:55
in place, meaning that it
8:57
was basically impossible for the board to know how
8:59
well those safety processes were working or what might
9:01
need to change. And then, you know, a last
9:03
example that I can share, because it's been very
9:06
widely reported, relates to this paper that I wrote, which
9:08
has been, you know, I think way overplayed in the
9:10
press. For listeners
9:12
who didn't follow this in the press,
9:14
Helen had co-written a research paper
9:17
last fall intended for policymakers. I'm
9:19
not going to get into the details. But what you need to know
9:21
is that Sam Altman wasn't happy
9:23
about it. It seemed like
9:26
Helen's paper was critical of OpenAI
9:28
and more positive about one of their
9:30
competitors, Anthropic. It was also published
9:32
right when the Federal Trade Commission was investigating
9:34
OpenAI about the data used to
9:37
build its generative AI products. Essentially,
9:39
OpenAI was getting a lot of heat
9:41
and scrutiny all at once. The
9:45
way that played into what happened in November is pretty
9:47
simple. It had nothing to do with the substance of
9:49
this paper. The problem was that after the paper came
9:52
out, Sam started lying to other
9:54
board members in order to try and push me
9:56
off the board. So it was another example
9:58
that just, like, really damaged our ability to
10:00
trust him and actually only happened in
10:02
late October last year when we were
10:04
already talking pretty seriously about whether
10:06
we needed to fire him. And
10:08
so, you know, there's kind of more
10:11
individual examples and for any individual
10:13
case, Sam could always come up
10:16
with some kind of innocuous sounding explanation of why it
10:18
wasn't a big deal or misinterpreted or whatever. But
10:21
the end effect was that after years of this
10:23
kind of thing, all four of us
10:25
who fired him came to the
10:28
conclusion that we just couldn't believe things
10:30
that Sam was telling us. And that's
10:32
a completely unworkable place to be in
10:34
as a board, especially a board that
10:36
is supposed to be providing independent
10:39
oversight over the company, not just like, you
10:41
know, helping the CEO to raise more money.
10:44
You know, not trusting the word of the
10:46
CEO who is your main conduit to the
10:49
company, your main source of information about the company
10:51
is just totally, totally impossible. So that
10:54
was kind of the background, the state of
10:56
affairs coming into last fall. And
10:58
we had been, you know, working at the board
11:01
level as best we could to set up better structures,
11:04
processes, all that kind of thing to try and,
11:06
you know, improve these issues that we had been
11:08
having at the board level. But
11:10
then, mostly in
11:12
October of last year, we had this
11:15
series of conversations with these
11:17
executives where the two of
11:19
them suddenly started telling us about their own
11:22
experiences with Sam, which they hadn't
11:24
felt comfortable sharing before, but telling us how they
11:27
couldn't trust him, about the
11:29
toxic atmosphere he was creating. They
11:32
used the phrase psychological abuse, telling
11:35
us they didn't think he was the right person to lead
11:37
the company to AGI, telling us
11:39
they had no belief that he could or would change,
11:41
no point in giving him feedback, no point in trying
11:43
to work through these issues. I mean,
11:45
you know, they've since tried to kind
11:48
of minimize what they told us, but these were
11:50
not like casual conversations. They
11:52
were really serious to the point where
11:55
they actually sent us screenshots and
11:57
documentation of some of the instances they
11:59
were telling us about of him
12:01
lying and being manipulative in different situations.
12:04
So this was a huge deal. This was
12:06
a lot. And
12:09
we talked it all over very intensively
12:12
over the course of several weeks and
12:15
ultimately just came to the conclusion that the
12:18
best thing for OpenAI's mission and for OpenAI
12:20
as an organization would be to
12:22
bring on a different CEO. And
12:24
once we reached that conclusion, it
12:26
was very clear to all of us that as
12:29
soon as Sam had any inkling that we might
12:31
do something that went against him, he would pull
12:33
out all the stops, do
12:35
everything in his power to undermine the board,
12:37
to prevent us from even getting to the
12:40
point of being able to fire him. So we were
12:43
very careful, very deliberate about
12:46
who we told, which was essentially almost no one
12:48
in advance other than obviously our legal team. And
12:50
so that's kind of what took us to November
12:53
17th. Thank you for sharing
12:55
that. Now Sam was eventually reinstated as
12:57
CEO with most of the staff supporting
12:59
his return. What exactly happened there?
13:01
Why was there so much pressure to bring him back?
13:04
Yeah, this is obviously the elephant in the
13:06
room. And unfortunately, I think there's been
13:08
a lot of misreporting on
13:10
this. I think there were three big
13:13
things going on that helped make sense
13:15
of kind of what happened here. The first
13:17
is that really pretty early on, the
13:20
way the situation was being portrayed to
13:22
people inside the company was you have
13:24
two options, either Sam comes back immediately
13:26
with no accountability, you know, totally new
13:29
board of his choosing, or
13:31
the company will be destroyed. And
13:33
you know, those weren't actually the only two options.
13:36
And the outcome that we eventually landed on was
13:38
neither of those two options. But
13:40
I get why, you know, not wanting the
13:42
company to be destroyed, got a
13:44
lot of people to fall in line, you know,
13:47
whether because they were, in some cases, about
13:49
to make a lot of money from
13:51
this upcoming tender offer, or just because they love
13:53
their team, they didn't want to lose their job,
13:55
they cared about the work they were doing. And
13:57
of course, a lot of people didn't
13:59
want that. the company to fall apart, us
14:01
included. The second thing
14:04
I think it's really important to know that
14:06
has really gone under reported is how
14:09
scared people are to go against
14:12
Sam. They had experienced
14:14
him retaliating against people, retaliating against
14:16
them for past instances of being
14:18
critical. They were
14:20
really afraid of what might happen to them.
14:22
So when some employees started to say, wait,
14:25
I don't want the company to fall apart, let's
14:27
bring back Sam, it was very
14:29
hard for those people who had had
14:31
terrible experiences to actually
14:34
say that, for fear that if
14:36
Sam did stay in power, as he ultimately did,
14:39
that would make their lives miserable. And
14:42
I guess the last thing I would say about this is that
14:45
this actually isn't a new problem for
14:47
Sam. And if you look at some
14:50
of the reporting that has come out since November, it's
14:53
come out that he was actually fired from
14:55
his previous job at Y Combinator, which was
14:57
hushed up at the time. And
15:00
then at his job before that, which was his
15:02
only other job in Silicon Valley, his startup Loopt,
15:05
apparently the management team went to the board
15:08
there twice and asked the board to fire
15:10
him for what they called deceptive and chaotic
15:12
behavior. If you actually
15:14
look at his track record, he doesn't exactly
15:16
have a glowing trail of references.
15:18
This wasn't a problem specific to the
15:21
personalities on the board as much as he would love to kind
15:23
of portray it that way. So I
15:26
had to ask you about that, but this actually does
15:28
tie into what we're gonna talk about today. OpenAI
15:30
is an example of a company that started off
15:32
trying to do good, but
15:34
now it's moved on to a for-profit model.
15:36
And it's really racing to the front of
15:38
this AI game, along with all of these
15:40
like ethical issues that are raised in the
15:42
wake of this progress. And
15:45
you could argue that the OpenAI saga shows
15:47
that trying to do good and regulating yourself
15:49
isn't enough. So let's
15:51
talk about why we need regulations. Great, let's do
15:53
it. So from my perspective,
15:55
AI went from the sci-fi thing that
15:57
seemed far away to something that's... pretty
16:00
much everywhere and regulators are suddenly trying
16:02
to catch up. But I think
16:04
for some people, it might not be obvious
16:06
why exactly we need regulations at all. Like
16:08
for the average person, it might seem like,
16:10
oh, we just have these cool new tools
16:12
like DALL-E and ChatGPT that do these
16:15
amazing things. What exactly are
16:17
we worried about in concrete terms? There's
16:19
very basic stuff for very basic forms of
16:21
the technology. Like if people are
16:24
using it to decide who gets a loan,
16:26
to decide who gets parole, you
16:29
know, to decide who gets to buy a house,
16:31
like you need that technology to work well. If
16:33
that technology is going to be discriminatory, which AI
16:35
often is, it turns out, you
16:38
need to make sure that people have recourse. They
16:40
can go back and say, hey, why was this decision
16:42
made? If we're talking AI being
16:44
used in the military, that's a whole other
16:46
kettle of fish. And I don't
16:48
know if we would say like regulation for that, but
16:50
certainly need to have guidance, rules,
16:52
processes in place. And then kind
16:55
of looking forward and thinking
16:57
about more advanced AI systems, I
16:59
think there's a pretty wide range of
17:01
potential harms that we could well
17:03
see if AI keeps getting increasingly
17:05
sophisticated, you know, letting every little
17:07
script kiddie in their parents' basement
17:09
have the hacking capabilities of a
17:12
crack NSA cell. Like that's a
17:14
problem. I think something that really makes
17:16
AI hard for regulators to think about is that it
17:18
is so many different things and plenty of the things
17:21
don't need regulation. Like, how
17:23
Spotify decides how to make your
17:25
playlist, the AI that they use for that,
17:28
I'm happy for Spotify to just pick whatever songs they want
17:30
for me. And if they get it wrong, you know, who cares?
17:33
But for many, many other use cases, you want to have
17:35
at least some kind of basic common sense guardrails around it.
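To make "guardrails" a bit more concrete, here is a minimal sketch of one common screening heuristic for decisions like lending: the "four-fifths rule" borrowed from US employment-selection guidelines. It is an illustration on invented data, not any regulator's actual test.

```python
# Hedged sketch: screen loan-approval outcomes for disparate impact using
# the "four-fifths rule" heuristic (a screening test borrowed from US
# employment-selection guidelines, not a definitive legal standard).
# All data below is invented for illustration.

def approval_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 approval outcomes."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def four_fifths_flags(decisions, ratio=0.8):
    """Flag any group approved at less than `ratio` of the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {group: rate / best < ratio for group, rate in rates.items()}, rates

fake_decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 6 of 8 approved (75%)
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3 of 8 approved (37.5%)
}
flags, rates = four_fifths_flags(fake_decisions)
for group in rates:
    print(f"{group}: approval rate {rates[group]:.0%}, flagged: {flags[group]}")
```

A check like this only surfaces a disparity; the recourse she describes, letting people ask why a decision was made, still has to be built on top of it.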
17:38
I want to talk about a few specific
17:40
examples that we might want to worry about,
17:42
not in some battlespace overseas, but at home
17:44
in our day to day lives. You know,
17:47
let's talk about surveillance. AI has gotten really
17:49
good at perception. Essentially understanding
17:51
the contents of images, video and
17:53
audio. And we've got a
17:55
growing number of surveillance cameras in public
17:57
and private spaces. And now companies are
18:00
infusing AI into this fleet, essentially
18:02
breathing intelligence into these otherwise dumb
18:04
sensors that are almost everywhere. Madison
18:07
Square Garden in New York City
18:09
is an example. They've been using
18:11
facial recognition technology to bar lawyers
18:14
involved in lawsuits against their parent
18:16
company, MSG Entertainment, from attending events
18:18
at their venue. This
18:20
controversial practice obviously raised concerns about
18:23
privacy, due process, and potential for
18:25
abuse of this technology. Can
18:27
we talk about why this is problematic? Yeah,
18:29
I mean, I think this is a pretty common thing
18:31
that comes up in the history of technology is you
18:33
have some
18:35
existing thing in society, and then technology makes it much
18:38
faster, much cheaper, and much more widely available. Like surveillance,
18:40
where it goes from, like, oh, it used to be
18:42
the case that your neighbor could see you doing something
18:44
bad and go talk to the police about it. It's
18:47
one step up to go to, well, there's a camera, a
18:49
CCTV camera, and the police can go back and check it
18:51
anytime. And then another step up
18:53
to, like, oh, actually, it's just running all
18:55
the time, and there's an AI facial recognition
18:57
detector on there, and maybe in the future
18:59
an AI activity detector that's also flagging, you
19:01
know, this looks suspicious. In
19:04
some ways, there's no, like, qualitative change
19:07
in what's happened. It's just, like, you could
19:09
be seen doing something. But I
19:11
think you do also need to grapple with the
19:13
fact that if it's much more ubiquitous, much cheaper,
19:15
then the situation is different. I mean,
19:17
I think with surveillance, people immediately go to
19:19
the kind of law enforcement use cases, and I
19:22
think it is really important to figure
19:24
out what the right trade-offs are between
19:27
achieving sort of law enforcement objectives and
19:29
being able to catch criminals and prevent
19:31
bad things from happening, while also recognizing
19:33
the huge issues that you can get
19:35
if this technology is used with overreach. For
19:37
example, you know, facial recognition works
19:40
better and worse on different demographic groups. And
19:43
so if police are, as they have been in
19:45
some parts of the country, going and arresting people
19:47
purely on a facial recognition match and on no
19:49
other evidence, there's a story about a woman who
19:52
was eight months pregnant having contractions in a jail
19:54
cell after having done absolutely nothing wrong and being
19:56
arrested only on the basis of a, you know,
19:58
a bad facial recognition match. So
20:00
I personally don't go for, you know, this
20:02
needs to be totally banned and no one should ever use
20:05
it in any way for anything. But I think you really
20:07
need to be looking at how are
20:09
people using it? What happens when it goes
20:11
wrong? What recourse do people have? What kind of access
20:14
to due process do they have? And then
20:16
when it comes to private use, I really think we should
20:18
probably be a bit more, you know, restrictive. Like, I don't
20:21
know, it just seems pretty clearly against, I don't
20:23
know, freedom of expression, freedom of movement for somewhere
20:25
like Madison Square Garden to be kicking those lawyers
20:27
out. I don't know, I'm not a lawyer myself.
20:29
So I don't know what exactly the state of
20:31
the law around that is. But
20:33
I think the sort of civil liberties and
20:38
privacy concerns there are pretty clear. I
20:40
think the problem with
20:43
sort of an existing set of technology
20:45
getting infused with more advanced capabilities, sort
20:47
of unbeknownst to the common population at
20:49
large, is certainly a trend. And
20:51
one example that shook me up is a
20:53
video went viral recently of a security camera
20:55
from a coffee shop, which showed
20:57
a view of a cafe full of people and baristas.
21:00
And basically over the heads of the customers, like the
21:02
amount of time they spent at the cafe, and then
21:05
over the baristas was like, how many drinks have
21:07
they made? And then, you know, so what does
21:09
this mean? Like, ostensibly the business can one,
21:12
track who is staying on their premises for how long, learn
21:14
a lot about customer behavior without
21:16
the customer's knowledge or consent. And
21:19
then number two, the businesses can
21:21
track how productive their workers are and could
21:24
potentially fire, let's say, less productive baristas. Let's
21:27
talk about the problems and the risk here. And like, how is
21:29
this legal? I mean,
21:31
the short version is, and this comes up
21:33
again and again and again if you're doing
21:35
policy, the U.S. has no federal privacy laws.
21:37
There are no rules on the
21:39
books for how companies can use data.
21:41
The U.S. is pretty unique in terms of how few
21:44
protections there are of what kinds of personal data are
21:46
protected in what ways. Efforts to
21:48
make laws have just failed over and over and over again. But
21:50
there's now this sudden stealthy new effort that people think might
21:52
actually have a chance. So who knows? Maybe this problem is
21:54
on the way to getting solved. But at the moment, it's
21:56
a big, big hole for sure. And
21:58
I think step one is making people
22:00
aware of this, right? Because people have, to
22:03
your point, heard about online tracking, but having
22:05
that same set of analytics in, like, the
22:07
physical space, in reality, it just feels like
22:09
the Rubicon has been crossed and we don't
22:11
really even know that's what's happening when we
22:13
walk into whatever grocery store. I mean, again,
22:15
yeah. And again, it's about sort of the scale
22:18
and the ubiquity of this, because
22:20
again, it could be like your
22:22
favorite barista knows that you always
22:25
come in and you sit there for a few hours on your laptop
22:27
because they've seen you do that a few weeks in a row. That's
22:30
very different to this, this data is being
22:32
collected systematically and then sold to, you know,
22:35
data vendors all around the country and used for all
22:37
kinds of other things or outside the country. So
22:40
again, I think we have these sort of intuitions
22:43
based on our real world person to person
22:45
interactions that really just break down when it comes to sort
22:47
of the size of data that we're talking about here. So,
22:52
Frances, you know, I love
22:54
the creation stage of any
22:56
project or company or work
22:58
stream and I have
23:01
a theory of the case that there are two
23:03
types of people in the world. There
23:07
are people who like to create
23:09
order out of chaos and
23:11
there are people who like to
23:14
create chaos out of order. Nice.
23:16
And in these creation moments, you
23:18
actually need both types of people.
23:21
I am a chaos out of
23:24
order. Chaos out of order. And I think
23:26
you're order out of chaos. I am order
23:28
out of chaos. I need order out of
23:30
chaos. I have to say I'm
23:33
super excited about a whole
23:35
set of tools from Miro.
23:38
I selected one of
23:40
their gorgeous project timeline templates.
23:43
I came up with this board
23:45
that beautifully laid out milestones,
23:48
questions I had with their cool little
23:50
sticky notes and I brought it to
23:52
you. And it calmed me down. I
23:56
really loved it. So whether you
23:58
work in product design, engineering, UX,
24:00
Agile, Marketing, bring your team together
24:02
on Miro like we did. Your
24:04
first three Miro boards are free
24:06
when you sign up today at
24:09
miro.com. That's three
24:11
free boards at m-i-r-o.com.
24:16
Are you a natural born problem solver? UNC
24:19
Kenan-Flagler's online Master of
24:21
Accounting program can help you
24:23
pair your passion for analytical
24:25
thinking with technical skills taught
24:27
by world-class faculty. Whether
24:29
you're looking to grow within your current
24:31
company or aiming to switch industries, our
24:33
program was designed for problem solvers like
24:35
you. Wherever you're heading,
24:37
diversifying your skill set with robust
24:39
accounting skills will make you a
24:41
valuable asset to any organization. Learn
24:44
more at accounting.unc.edu. I
24:48
also want to talk about scams. So
24:50
folks are being targeted by phone scams. They get
24:53
a call from their loved ones. It sounds like
24:55
their family members have been kidnapped and being held
24:57
for ransom. In reality, some
24:59
bad actor just used off-the-shelf AI to
25:02
scrub their social media feeds for these
25:04
folks' voices. Scammers can then
25:06
use this to make these very believable hoax calls
25:09
where people sound like they're in distress and being held
25:11
captive somewhere. So we have reporting
25:14
on this particular hoax now, but what's
25:16
on the horizon? What's keeping you up
25:18
at night? I think the obvious next step
25:20
would be with video as well. Definitely
25:23
if you haven't already gone and talked to your
25:25
parents, your grandparents, anyone in your life who is
25:28
not super tech savvy and told them, you need
25:30
to be on the lookout for this, you should
25:32
go do that. I talk a lot about policy
25:35
and what kind of government involvement
25:37
or regulation we might need for AI. I
25:39
do think a lot of things we can just adapt to
25:41
and we don't necessarily need new rules for. So
25:43
I think we've been through a lot of different waves of
25:45
online scams and I think this is the newest one and
25:47
it really sucks for the people who get targeted by
25:49
it. But I also expect that five
25:52
years from now it will be something that people are pretty familiar
25:54
with and will be a pretty small number of people who are
25:56
still vulnerable to it. So I think
25:59
the main thing is, yeah, be super suspicious of
26:01
any voice. Definitely don't use voice recognition for
26:03
your bank accounts or things like that. I'm
26:05
pretty sure some banks will offer that. Ditch
26:08
that. Definitely use something
26:10
more secure. And yeah, be on
26:12
the lookout for video scamming as well and
26:14
for people on video calls who
26:16
look real. I think there was recently just the other
26:18
day, a case of a guy who
26:20
was on a whole conference call where there were a bunch of
26:22
different AI-generated people all on the call and he was the only
26:24
real person, got scammed out of a bunch of money. So
26:28
that's coming. Totally, content-based authentication is
26:30
on its last legs it seems.
26:32
Definitely. It's always worth checking in with
26:34
what is the baseline that we're starting with. And I mean,
26:36
so for instance, a lot of things
26:39
are already public and they don't seem to get
26:41
misused. So I think a lot of people's
26:44
addresses are listed publicly. We used to have little
26:46
white pages where you can look up someone's address
26:49
and that mostly didn't result in terrible things
26:51
happening. Or I even think of silly examples.
26:53
Like I think it's really nice
26:55
for delivery drivers that when you go to a restaurant to
26:57
pick up food that you ordered, it's just there. All
27:00
right, so let's talk about what we can
27:02
actually do. It's one thing to regulate businesses
27:05
like cafes and restaurants. It's
27:07
another thing to rein in all the bad
27:09
actors that could abuse this technology. Can
27:11
laws and regulations actually protect us?
27:14
Yeah, they definitely can. I mean, and they already are. Again,
27:17
AI is so many different things that there's no
27:19
one set of AI regulations. There's plenty of laws
27:21
and regulations that already apply to AI. So there's
27:25
a lot of concern about AI algorithmic
27:27
discrimination, with good reason. But in
27:29
a lot of cases, there are already laws on the books
27:31
saying you can't discriminate on the basis of race or gender
27:33
or sexuality or whatever it might be. And
27:37
so in those cases, you don't
27:39
even need to pass new laws or make
27:41
new regulations. You just need to make sure
27:43
that the agencies in question have the staffing
27:45
they need. Maybe they need the
27:47
authorities they have tweaked in
27:51
terms of who they are allowed to investigate or who they are allowed
27:53
to penalize or things like that. There are already
27:55
rules for things like self-driving cars. You know,
27:57
the Department of Transportation is handling
27:59
that. It makes sense to me for them to handle that.
28:01
For AI and banking, there's a bunch of banking regulators that
28:03
have a bunch of rules. So
28:05
I think there's a lot of places where AI actually
28:08
isn't fundamentally new, and the
28:10
existing systems that we have in place are doing
28:13
an OK job at handling that, but they
28:15
may need, again, more staff or slight
28:17
changes to what they can do. And
28:19
I think there are a few different places where
28:21
there are new challenges emerging at
28:24
the cutting edge of AI, where you have
28:27
systems that can really do things that computers
28:29
have never been able to do before, and
28:31
whether there should be rules around making sure
28:33
that those systems are being developed and deployed
28:35
responsibly. I'm particularly curious if there's something that
28:37
you've come across that's really clever or like
28:39
a model for what good regulation looks like.
28:41
I think this is mostly still
28:44
a work in progress, so I don't know that I've seen anything
28:46
that I think really absolutely nails
28:48
it. I think a lot of the challenge
28:50
that we have with AI right now relates
28:53
to how much uncertainty there is about what the
28:55
technology can do, what it's going to be
28:57
able to do in five years. Experts disagree enormously
29:00
about those questions, which makes it really hard to
29:02
make policy. So a lot of
29:04
the policies that I'm most excited about are about
29:06
shedding light on those kind of questions, giving us
29:08
a better understanding of where the technology is. So
29:11
some examples of
29:13
that are things like the big executive order
29:16
President Biden created last
29:18
October, which had all kinds of things
29:20
in there. One example was a requirement
29:22
that companies that are training especially
29:24
advanced systems have to report
29:27
certain information about those systems to the government.
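For a sense of scale: that executive order keyed the reporting requirement to training compute, with a widely reported threshold of 10^26 operations. The sketch below is a rough, hedged back-of-the-envelope using the standard roughly 6 x parameters x tokens estimate for dense transformer training; the model sizes and token counts are hypothetical.

```python
# Hedged back-of-the-envelope: would a hypothetical training run cross the
# compute threshold (widely reported as 1e26 operations) that triggers the
# reporting requirement in the October 2023 executive order?
# Uses the standard dense-transformer estimate: FLOPs ~= 6 * params * tokens.

REPORTING_THRESHOLD_OPS = 1e26  # integer or floating-point operations

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

hypothetical_runs = {
    "7B model, 2T tokens": (7e9, 2e12),
    "1.8T model, 15T tokens": (1.8e12, 15e12),
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimated_training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} ops, reportable: {flops > REPORTING_THRESHOLD_OPS}")
```

On those rough numbers the smaller run comes in around 8e22 operations, orders of magnitude below the line, so only the frontier-scale run would have to be reported.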
29:29
And so that's a requirement where you're not
29:31
saying you can't build that model, can't train
29:33
that model. You're not saying the
29:35
government has to approve something. You're really just
29:37
sharing information and creating more
29:40
awareness and more ability to respond as
29:42
the technology changes over time, which is such
29:44
a challenge for government keeping up with this
29:46
fast-moving technology. There's also been
29:49
a lot of good movement towards funding
29:52
the science of measuring and evaluating AI.
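As a toy picture of what "measuring and evaluating" means in practice, here is a deliberately simple sketch comparing two stand-in models on a tiny shared question set. Everything in it is invented, and real evaluations are far messier, for exactly the reasons described next.

```python
# Toy sketch of "which model is better?": exact-match accuracy on a shared
# question set. Real evaluations are much harder: grading free-form answers,
# prompt sensitivity, and plain statistical noise all muddy the comparison.
# The questions and "models" below are invented stand-ins.
import random

QUESTIONS = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

def fake_model(correct_prob):
    """Stand-in for a real model API: answers correctly with some probability."""
    return lambda question, truth: truth if random.random() < correct_prob else "dunno"

def accuracy(model):
    return sum(model(q, a) == a for q, a in QUESTIONS) / len(QUESTIONS)

random.seed(0)
model_a, model_b = fake_model(0.9), fake_model(0.7)
print(f"model A: {accuracy(model_a):.0%}, model B: {accuracy(model_b):.0%}")
# With only three questions, the nominally weaker model can tie or win,
# one reason a rigorous science of measurement is needed at all.
```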
29:55
A huge part of the challenge with figuring out
29:57
what's happening with AI is that we're really bad
29:59
at actually just measuring how good is this
30:01
AI system? How do these two AI
30:04
systems compare to each other? Is one of them sort of
30:06
quote unquote smarter? So I think there's been
30:08
a lot of attention over the last year or two
30:10
into funding and establishing
30:12
within government, better
30:15
capabilities on that front. I think that's really
30:17
productive. Okay, so policymakers are definitely
30:19
aware of AI if they weren't before. And
30:22
plenty of people are worried about it. They
30:25
wanna make sure it's safe, right? But
30:27
that's not necessarily easy to do. And
30:30
you've talked about this, how it's hard to
30:32
regulate AI. So why is
30:34
that? What makes it so hard? Yeah,
30:37
I think there's at least three things that make it
30:39
very hard. One thing is AI is so many different
30:41
things, like we've talked about. It
30:43
cuts across sectors. It has so
30:45
many different use cases. It's really hard to get your arms around what
30:48
it is, what it can do, what impacts it will have. A
30:50
second thing is it's a moving target. So what
30:52
the technology can do is different now than it was
30:54
even two years ago, let alone five years ago, 10
30:57
years ago. And policymakers
31:00
are not good at sort
31:02
of agile policymaking. They're
31:04
not like software developers. And then the
31:06
third thing is no one can agree on how
31:09
they're changing or how they're gonna change in the
31:11
future. If you ask five experts where
31:13
the technology is going, you'll get five
31:16
completely different answers. Often five very confident,
31:18
completely different answers. So
31:21
that makes it really difficult for policymakers
31:23
as well because they need to get
31:26
scientific consensus and just like take
31:28
that and run with it. So I think
31:30
maybe this kind of third factor is the one
31:32
that I think is the biggest challenge for making
31:34
policy for AI, which is that
31:36
for policymakers, it's very hard for them to tell who
31:39
should they listen to, what problems should they be worried
31:41
about, and how is that gonna change over time?
31:44
Speaking of who you should listen to,
31:46
obviously, the very large companies in this
31:48
space have an incentive and there's been
31:50
a lot of talk about regulatory capture.
31:52
When you ask for transparency, why
31:54
would companies give a peek under the hood
31:57
of what they're building? They just cite it
31:59
as proprietary. On the other hand, you
32:01
know, they might be, these
32:03
companies might want to set up, you
32:05
know, policy and institutional framework that is
32:07
actually beneficial for them and sort of
32:09
prevents any future competition. How do you
32:11
get these powerful companies to like participate
32:14
and play nice? Yeah, it's definitely very
32:16
challenging for policymakers to figure out how
32:18
to interact with those companies. Again, because,
32:20
you know, in part, because they're lacking
32:22
the expertise and the time to
32:25
really dig into things in depth themselves. Like a
32:27
typical Senate staffer might
32:29
cover like, you know, technology issues
32:31
and trade issues and veterans affairs
32:33
and agriculture and education, you know,
32:35
and that's like their portfolio. So
32:38
they are scrambling, like they have to, they
32:40
need outside help. So I
32:42
think it's very natural that the companies do come in and
32:44
play a role. And I also think there are plenty of
32:46
ways that policymakers can really mess things up if they don't, you
32:49
know, know how the technology works and they're not talking to
32:51
the companies they're regulating about what's going to happen. The
32:54
challenge, of course, is how do you balance that with
32:56
external voices who are going to point
32:58
out the places where the companies are actually being
33:00
self-serving. And so I think
33:02
that's where it's really important that civil society has
33:04
resources to also be in these conversations. Certainly what
33:07
we try to do at CSET, the organization I
33:09
work at, we're totally independent and, you know, really
33:11
just trying to work in the best interest of,
33:13
you know, making good policy. The
33:15
big companies obviously do need to have a seat
33:17
at the table, but you would hope that they
33:19
have, you know, a seat at
33:21
the table and not 99 seats out of 100 in
33:24
terms of who policymakers are talking to and
33:26
listening to. There
33:28
also seems to be a challenge with enforcement, right?
33:31
You've got all these AI models already out
33:34
there. A lot of them are open source.
33:36
You can't really put that genie back in
33:38
the bottle, nor can you really start, you
33:40
know, moderating how this technology is used without,
33:42
I don't know, like going full
33:45
1984 and having a process on
33:47
every single computer monitoring what they're doing. So
33:50
how do we deal with this landscape where
33:52
you do have, you know, closed source and
33:54
open source, like various ways to access and
33:57
build upon this technology? Yeah, I mean, I
33:59
think there are a lot of intermediate
34:01
things between just total anarchy and full 1984.
34:05
There's things like, you know,
34:07
Hugging Face, for example, is a very popular
34:09
platform for open source AI models. So Hugging
34:11
Face in the past has delisted models that
34:14
are, you know, considered to be offensive or
34:16
dangerous or whatever it might be. And
34:18
that actually does meaningfully reduce kind
34:21
of the usage of those models because Hugging Face's
34:23
whole deal is to make them
34:25
more accessible, easier to use, easier to find, you
34:27
know, depending on the specific problem we're talking about.
34:29
There are things that, for example, social
34:32
media platforms can do. So if we're talking about, as
34:35
you said, child pornography or also,
34:38
you know, political disinformation, things like that, maybe
34:41
you can't control that at the point
34:43
of creation. But if you have the
34:45
Facebooks, the Instagrams of the
34:47
world, you know, working on it,
34:49
they already have methods in place for how to kind
34:51
of detect that material, suppress it, report it. And
34:55
so, you know, there are other mechanisms
34:57
that you can use. And
34:59
of course, specifically on the kind of image
35:01
and audio generation side, there are some really
35:03
interesting initiatives underway, mostly being led by industry
35:06
around what gets called content provenance or content
35:08
authentication, which is basically how do you know
35:10
where this piece of content came from? How
35:12
do you know if it's real? And
35:14
that's a very rapidly evolving space and a lot
35:16
of interesting stuff happening there. I think
35:19
there's a good amount of promise, not for perfect solutions, where
35:21
we'll always know, is this real or is it fake, but
35:24
for making it easier for individuals
35:26
and platforms to recognize, okay, this
35:28
is fake, it was AI
35:30
generated by this particular model, or this is
35:32
real, it was taken on this kind of
35:34
camera, and we have the cryptographic signature for
35:36
that. I don't think we'll ever have
35:38
perfect solutions. And again, I think, you know, societal adaptation
35:40
is just gonna be a big part of the story.
35:43
But I do think there's pretty interesting
35:45
technical and policy options that
35:47
can make a difference. Definitely. And even
35:50
if you can't completely control, you know, the
35:53
generation of this material, there are ways
35:55
to drastically cap the distribution of it.
35:58
And so, like, I think that reduces
36:00
some of the harms there. Yeah, at the
36:02
same time labeling content that is synthetically generated,
36:04
a bunch of platforms have started doing that.
36:06
That's exciting because I don't think the average
36:09
consumer should be a deepfake detection expert,
36:11
right? But really, if there could be a
36:13
technology solution to this, that feels a lot
36:15
more exciting. Which brings
36:18
me to the future. I'm kind of curious
36:20
in your mind, what's the dystopian scenario and
36:22
the utopian scenario in all of this? Let's
36:24
start with a dystopian one. What
36:26
does a world look like with inadequate
36:29
or bad regulations? Paint a picture for
36:31
us. So many possibilities.
36:34
I mean, I think there are worlds that are not that different
36:36
from now where you just have automated systems doing a lot of
36:39
things, playing a lot of important
36:41
roles in society, in some cases doing them badly
36:43
and people not having the ability to go in
36:45
and question those decisions. There's obviously this whole discourse
36:47
around existential risk from AI, et cetera, et cetera.
36:49
Kamala Harris had a whole speech about like, if
36:52
someone's, I forget the exact examples, but if
36:54
someone loses access to Medicare because of an
36:56
algorithmic issue, is that not existential for that
37:00
elderly person? So
37:02
there are already people who are being directly
37:04
impacted by algorithmic systems and AI in
37:07
really serious ways. Even some of the
37:09
reporting we've seen over the last couple months of how
37:11
AI is being used in warfare, like videos
37:14
of a drone chasing a Russian soldier around a
37:16
tank and then shooting him. I
37:19
don't think we're full dystopia, but there's
37:22
plenty of things we worry about already. Something
37:24
I think I worry about quite a bit
37:26
or that feels intuitively to me to
37:29
be a particularly plausible way things could go is
37:31
sort of what I think of as the WALL-E
37:34
future. I don't know if you remember that
37:36
movie. Oh, absolutely. With the little robot. And
37:38
the piece that I'm talking about is not
37:40
the like junk earth and
37:42
whatever. The piece I'm talking about is the
37:44
people in that movie, they just sit in
37:46
their soft, roll-around wheelchairs
37:48
all day and have
37:50
content and
37:53
food and whatever to keep them happy. And
37:56
I think what worries me about that is
37:58
I do think there's a really natural gradient to
38:00
go towards what people want in
38:02
the moment and will choose
38:04
in the moment, which is
38:07
different from what they will really find fulfilling
38:09
or what will build kind of a meaningful
38:11
life. And I think there's
38:13
just really natural commercial incentives to build things
38:15
that people sort of superficially want, but then
38:17
end up with this really kind of meaningless,
38:21
shallow, superficial world, and
38:24
potentially one where kind of most of the
38:26
consequential decisions are being made by machines that
38:29
have no concept of what
38:32
it means to lead a meaningful life. And, you know, because
38:34
how would we program that into them? Because we have no,
38:36
we struggle to kind of put our finger on it ourselves.
38:38
So I think those kinds of futures,
38:41
not where there's some, you know, dramatic,
38:43
big event, but just
38:45
where we kind of gradually hand over more
38:47
and more control of the future
38:50
to computers that are more and more sophisticated, but
38:52
that don't really have any concept of meaning
38:55
or beauty or joy or fulfillment or, you
38:58
know, flourishing or whatever it might be. I
39:01
hope we don't go down those paths, but it
39:03
definitely seems possible that we will. They
39:06
can play to our hopes, wishes, anxieties, worries, all
39:09
of that, just give us like the junk food
39:11
all the time, whether that's like in terms of
39:13
nutrition or in terms of just like audio visual
39:15
content, and that could certainly end badly.
39:18
Let's talk about the opposite of that, the
39:20
utopian scenario. What does a world look like
39:22
where we've got this perfect balance of innovation
39:24
and regulation and society is thriving? I mean,
39:26
I think a very basic place to start
39:28
is can we solve some of the big
39:30
problems in the world? And I do think
39:32
that AI could help with those. So can
39:34
we have a world without
39:36
climate change, a world with much more abundant energy,
39:38
that is much cheaper, and
39:40
therefore more people can have more access to it, where
39:44
we have better agriculture, so
39:46
there's greater access to food.
39:49
And beyond that, you know, I think
39:51
what I'm more interested in is setting, you
39:53
know, our
39:55
kids and our grandkids and our great grandkids up to
39:57
be deciding for themselves what they want the future
39:59
to look like from there, rather
40:01
than having kind of some particular vision of
40:03
where it should go. But
40:06
I absolutely think that AI has the
40:08
potential to really contribute to solving some of the
40:10
biggest problems that we kind of face as a
40:12
civilization. It's hard to say that sentence without sounding
40:14
kind of grandiose and trite, but I think it's
40:16
true. So
40:19
maybe to close things out, just like, what
40:21
can we do? You mentioned some
40:23
examples of being aware of synthetically
40:26
generated content. What can we, as
40:28
individuals, do when we encounter, use,
40:30
or even discuss AI? Any recommendations?
40:33
I think my biggest suggestion here is
40:35
just not to be intimidated
40:37
by the technology and not to be intimidated
40:39
by technologists. This is really a technology where
40:41
we don't know what we're doing. The best
40:43
experts in the world don't understand how it
40:45
works. And so I think just if
40:48
you find it interesting, being interested. If you think of
40:50
fun ways to use it, use them. If
40:53
you're worried about it, feel free to be worried. I think the main
40:56
thing is just feeling like you have a right to your own
40:58
take on what you want to
41:00
happen with the technology and no
41:03
regulator, no CEO
41:06
is ever going to have full visibility into
41:08
all of the different ways that it's affecting
41:10
millions and billions of people around the world. And
41:13
so kind of trusting your own experience and exploring
41:16
for yourself and seeing what you think is, I
41:18
think the main suggestion I would have. It was a
41:20
pleasure having you on, Helen. Thank you for coming on
41:22
the show. Thanks so much. This was fun. So
41:27
maybe I bought into the story that
41:29
played out on the news and on
41:31
X, but I went into that interview
41:33
expecting Helen Toner to be more of
41:35
an AI policy maximalist. The
41:38
more laws, the better, which wasn't at
41:40
all the person I found her to be. Helen
41:43
sees a place for rules, a place
41:45
for techno optimism, and a place for
41:47
society to just roll with adapting
41:49
to the changes as they come, for
41:52
balance. Policy doesn't have
41:54
to mean being heavy-handed and
41:56
hamstringing innovation. It can just
41:59
be a check against perverse economic
42:01
incentives that are really not good for
42:03
society. And I think you'll agree. But
42:05
how do you get good rules? A
42:07
lot of people in tech are going to say, you don't know
42:10
shit. They know the technology the
42:12
best, the pitfalls, not the
42:14
lawmakers. And Helen talked about
42:16
the average Washington staffer who isn't an
42:18
expert, doesn't even have the time to
42:20
become an expert. And yet
42:23
it's on them to craft regulations that
42:25
govern AI for the benefit of all
42:27
of us. Companies
42:29
have the expertise, but they've also got that
42:31
profit motive. Their interests aren't always going to
42:33
be the same as the rest of ours.
42:36
You know, in tech you'll hear a lot
42:38
of regulation bad, don't engage with regulators. And
42:41
I get the distrust. Sometimes
42:44
regulators do not know what they're doing.
42:46
India recently put out an advisory saying
42:48
every AI model deployed in India first
42:50
had to be approved by regulators. Totally
42:54
unrealistic. There was a huge backlash there
42:56
and they've since reversed that decision. But
42:59
not engaging with government is only going to
43:01
give us more bad laws. So
43:04
we got to start talking, if only
43:06
to avoid that WALL-E dystopia. Okay,
43:09
before we sign off for today, I want
43:11
to turn your attention back to the top
43:13
of our episode. I told you
43:16
we were going to reach out to Sam Altman for comments.
43:19
So a couple of hours ago, we shared
43:21
a transcript of this recording with Sam and
43:23
invited him to respond. We've
43:25
just received a response from Bret Taylor, chair
43:27
of the OpenAI board. And here's the statement
43:30
in full. Quote, we
43:32
are disappointed that Ms. Toner continues to
43:34
revisit these issues. An independent
43:36
committee of the board worked with the law firm
43:39
WilmerHale to conduct an extensive review of the
43:41
events of November. The review
43:43
concluded that the prior board's decision was
43:45
not based on concerns regarding product safety
43:47
or security, the pace
43:49
of development, OpenAI's finances, or its
43:52
statements to investors, members, or business
43:54
partners. Additionally, over
43:56
95% of employees, including senior
43:58
leadership, asked for Sam's
44:00
reinstatement as CEO and the resignation of
44:02
the prior board. Our focus
44:05
remains on moving forward and pursuing
44:07
OpenAI's mission to ensure AGI benefits
44:09
all of humanity." We'll
44:13
keep you posted if anything unfolds. The
44:19
TED AI Show is a part of the
44:21
TED Audio Collective and is produced by TED
44:23
with Cosmic Standard. Our
44:25
producers are Ella Fetter and Sarah
44:27
McRae. Our editors are Ben Van
44:29
Sheng and Alejandro Salazar. Our
44:32
showrunner is Ivana Tucker and our
44:34
associate producer is Ben Montoya. Our
44:36
engineer is Asia Pilar Simpson, our
44:39
technical director is Jacob Winink, and
44:41
our executive producer is Eliza Smith. Our
44:44
fact checkers are Julia Dickerson and