Episode Transcript
0:01
TED Audio Collective. Hi,
0:08
I'm Adam Grant. I think about work a
0:10
lot. That's why I wanted to tell you
0:12
about Canva Docs, which will help you expertly
0:14
craft your work communications. They
0:16
have an AI text generator built in
0:19
called Magic Write, powered by OpenAI. You
0:22
can generate any text you want.
0:24
Job descriptions, marketing plans, sales proposals.
0:27
Just start with a prompt and you'll have a
0:30
draft in seconds. Tweak your draft
0:32
and you're done. Try Canva
0:34
Docs with an AI text generator
0:36
built in at canva.com. Designed
0:38
for work. This
0:42
episode is brought to you by Progressive. Most
0:45
of you aren't just listening right now. You're
0:47
driving, cleaning, and even exercising. But
0:49
what if you could be saving money by switching to
0:51
Progressive? Drivers who save by
0:53
switching save nearly $750 on average. And
0:57
auto customers qualify for an average of seven
0:59
discounts. Multitask right now.
1:01
Quote today at progressive.com. Progressive
1:03
Casualty Insurance Company and affiliates.
1:06
National average 12 month savings of $744
1:08
by new customers surveyed who saved with
1:10
Progressive between June 2022 and May 2023.
1:15
Potential savings will vary. Discounts not available
1:17
in all states and situations. You're
1:22
growing a business and you can't afford to slow
1:24
down. If anything, you could probably use a few
1:26
more hours in the day. That's
1:28
why the most successful growing businesses
1:31
are working together in Slack. Slack
1:33
is where work happens with all
1:35
your people, data, and information in
1:37
one AI powered place. Start
1:39
a call instantly in huddles and
1:41
ditch cumbersome calendar invites. Or build
1:44
an automation with Workflow Builder to take
1:46
routine tasks off your plate. No coding
1:49
required. Grow your business in Slack. Visit
1:51
slack.com to get started. Stay updated
2:00
on everything business on TED Business,
2:02
a podcast hosted by Columbia Business
2:05
School professor Modupe Akinola. Every week
2:07
she'll introduce you to leaders with
2:09
unique insights on work, answering questions
2:11
like, how do four day
2:14
work weeks work? Will a machine ever
2:16
take my job? Get some surprising answers
2:18
on TED Business wherever you listen to
2:20
podcasts. Whatever
2:23
it is that is the solution to
2:25
humanity's problems, I'd argue it's probably not
2:27
in our imagination because if it was,
2:29
we'd be doing it. So what we're
2:31
looking for are things that are outside
2:33
the sphere of human
2:36
imagination. Hey everyone,
2:40
it's Adam Grant. Welcome back to Rethinking,
2:42
my podcast on the science of what
2:44
makes us tick with the TED Audio
2:46
Collective. I'm an organizational psychologist
2:48
and I'm taking you inside the minds
2:50
of fascinating people to explore new thoughts
2:53
and new ways of thinking. My
2:58
guest today is tech pioneer, Aza Raskin.
3:00
As co-founder of the Center for
3:02
Humane Technology, he's a leading advocate
3:05
for the responsible reimagining of the
3:07
digital world to prevent polarization and
3:09
promote well-being. Aza's work
3:11
focuses on solving some of the biggest collective
3:13
problems of our age, especially
3:15
as our tech rapidly evolves. AI
3:18
is like the invention of the
3:20
telescope and when we invented the telescope
3:22
we learned that Earth was not the
3:24
center. I've been thinking a
3:26
lot about the implications of
3:29
what happens when AI teaches us that
3:31
humanity is not the center. If
3:34
you don't know Aza by name, you know some
3:36
of his creations. He designed the
3:38
feature that makes endless scrolling possible, which he
3:40
now regrets. And he coined
3:42
the phrase, freedom of speech is not freedom
3:45
of reach. Since then, he's
3:47
expanded his scope by co-founding the
3:49
Earth Species Project, where he's using
3:51
tech to decipher animal communication. Between
3:54
improving social media and talking to whales, we
3:57
had a lot to discuss. And
3:59
Aza challenged me to rethink my assumption that
4:01
these two missions are as different as they seem.
4:11
Hey, Aza. Hey, Adam. I'm excited
4:13
for this. I feel like there is
4:15
so much ground we could cover, I
4:17
hardly know where to begin. Yeah, it's
4:20
only caring for, first, all humans and
4:22
then after that, all beings. So
4:25
you grew up in tech and
4:28
I understand we have you to blame
4:30
for infinite scroll. Everyone's
4:32
just gonna start pelting me with tomatoes. I
4:35
did invent infinite scroll and I think it's
4:37
really important to understand my motivations and then
4:39
what went wrong because it was a
4:41
big lesson for me. When I invented infinite scroll, this
4:43
was before social media had really taken off. This was
4:45
way back just when MapQuest, you know, I don't know
4:48
if you remember that. Of course I do. Right? Like
4:50
we have to click and then the map would move
4:52
over and then you have to reload the page. And
4:55
the thought hit me like I'm a designer. Every
4:57
time I asked the user to make a decision
4:59
they don't care about, I failed. When you get
5:02
near the bottom of a page, that means you
5:04
haven't found what you're looking for. Just load some
5:06
more stuff. And I was designing it for blog
5:08
posts. I was thinking about search results. And it's
5:10
just honestly, it is a better interface.
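To make the mechanic concrete: a minimal sketch, in browser TypeScript, of the core infinite-scroll idea described here, fetching more results automatically as the reader nears the bottom of the page. The /api/results endpoint and the 200-pixel threshold are illustrative assumptions, not Aza's original code.

```typescript
// Load more content automatically when the reader nears the page bottom.
let page = 1;
let loading = false;

// Hypothetical paginated endpoint returning an array of result strings.
async function fetchNextPage(next: number): Promise<string[]> {
  const res = await fetch(`/api/results?page=${next}`);
  return res.json();
}

window.addEventListener("scroll", async () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.scrollHeight - 200;
  if (nearBottom && !loading) {
    loading = true;
    for (const text of await fetchNextPage(++page)) {
      const el = document.createElement("p");
      el.textContent = text; // append results; no click, no page reload
      document.body.appendChild(el);
    }
    loading = false;
  }
});
```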
5:13
And then I went around to, like, Google and Twitter and
5:15
said, oh, we should adopt this interface. And I
5:17
was blind to the way
5:19
that my invention, created with positive
5:21
intent, was going to be picked
5:23
up by perverse incentives of what
5:26
would later become social media, where
5:28
it wasn't to help you, but
5:31
essentially to hunt you, right? To extract something
5:33
from you using an asymmetric knowledge
5:35
about how the human mind works, which is
5:37
that your brain doesn't wake up to ask,
5:39
do I want to continue unless
5:41
it gets something like a stopping cue? What does that mean?
5:43
That means like generally you don't ask, do I want to
5:45
stop drinking wine until I get to the bottom of the
5:47
cup of wine? So my
5:50
invention got sort of sucked up by
5:52
a machine that wastes on the
5:54
order of 100,000 human lifetimes per
5:57
day. Now it's, it's horrendous. And
6:00
this is what I think people miss all
6:03
the time in the invention of technology,
6:06
that it's not about the
6:08
intent, good or bad, of
6:10
the inventor. When you invent
6:12
a technology, you uncover a new class
6:15
of responsibilities. We didn't need the right
6:17
to be forgotten until the internet could
6:19
remember us forever. And then
6:21
two, if that technology
6:23
confers power, you're going to
6:26
start a race for that power. And if
6:28
there is some resource that we need, that
6:30
we depend on, that you can be exploited
6:32
for that power, in this case, like attention
6:35
and engagement with the attention economy, then that
6:37
race will end in tragedy unless you can
6:39
coordinate to protect it. And
6:41
so I was completely blind to that structure
6:43
when I was creating infinite scroll, and you
6:46
can see the results. That thing we call
6:48
doom scrolling would not exist without infinite scroll.
6:51
So I mean,
6:54
obviously, there's a tension between social
6:57
media business models, and what we think
6:59
is the humane option here. But a
7:03
lot of people hate doom scrolling, why have
7:05
we not seen a company yet experiment with
7:07
a limit on that? What would you do
7:09
at this point? How would you think about
7:11
solving this? It's a great question. So the
7:13
way we've often talked about the attention
7:15
economy is it's a
7:18
business model that is fundamentally about
7:21
getting reactions from the human nervous system.
7:23
You get people angry, you show them
7:26
things that they cannot help but look
7:28
at. So you would get them if
7:30
the incentive is to get reaction and
7:33
make reactive the human nervous system, it's
7:35
sort of obvious that we're going to
7:37
get polarization, narcissism, more outrage,
7:39
eventually democratic backsliding, like that's all
7:41
a predictable outcome of just make
7:43
the human nervous system more reactive
7:46
and get reactions from it. And
7:48
that's why we're able to call
7:50
it in 2013, building all the way up to
7:52
the social dilemma in 2020. And
7:55
so if we're going to think about solving
7:57
it, it's not a thing that an individual
7:59
company can do. We get into that
8:01
paranoid logic. If we don't do it, we
8:03
lose to the person who does. So you have
8:05
to do something to the entire space as
8:07
a whole so everyone can start competing for
8:10
the thing that is healthy and humane. I've
8:12
talked with multiple social media companies about this
8:14
over the last few years is just
8:17
run the A/B test of let's
8:19
have people preset how many hours a day
8:21
or ideally minutes a day they actually want
8:23
to be scrolling and then it
8:25
just flags that their time is up. I think
8:27
to your point that would reduce
8:29
attention and engagement perhaps, but
8:32
it would also make people less angry at the platform
8:34
and I wonder if there's a net benefit there and
8:36
at least I would want to test that. I
8:38
think you probably have a better idea, but tell me what's wrong
8:40
with mine and then where you would go. There's
8:43
a different version of yourself before you started eating french
8:45
fries and a different version of yourself after you started
8:47
eating french fries. Before you've eaten one single french fry,
8:49
you're probably like, I don't know if I want to
8:51
have french fries. After you've eaten one, you're in a
8:54
hot state. You're just going to keep eating them until
8:56
they're gone and that's sort of the thing I think
8:58
you're pointing at. In psychology, we
9:00
would talk about this as a fundamental want-should
9:02
conflict. You know you should stop
9:04
scrolling, but in that moment you want to
9:06
keep doing it and it's hard to override
9:09
the temptation. The good
9:11
news about your should self is that although
9:13
it's weaker in the moment, it's smarter in
9:15
the long term. If we
9:17
can activate the should self in advance and pre-commit,
9:19
as you're saying to that target, the
9:22
probability should go up that you
9:24
would be willing to stick to that commitment
9:26
once you've made it. It's not a perfect
9:28
solution, but what I like about it as
9:30
an example is it's something one company could
9:33
try. They could be differentiating in a positive
9:35
way and doesn't require congressional intervention
9:37
or you know all of the companies to
9:39
form a coalition. Right. So tell me what
9:41
I'm missing there and what your more systemic
9:44
approach would look like. I
9:46
used to be addicted to both twitter and
9:48
to reddit and I'm like how
9:50
do I get myself off? So as
9:52
a technology maker, every
9:55
designer knows that retention
9:58
is directly correlated with
10:01
speed. That is, the faster your website
10:03
loads or your app loads, the more
10:05
people continue to use it. Amazon really
10:07
famously found that for every 100 milliseconds
10:10
their page loads slower, they
10:13
lose 1% of revenue. Wow. So
10:16
using this insight, I actually wrote myself a
10:18
little tool that said the longer I scrolled
10:21
on Twitter or the more I used Reddit, the
10:23
longer a random delay I would get as
10:25
things would load. Sometimes it'd be sort of
10:27
fast, sometimes it'd get slower, the longer I
10:30
used it, the slower it would get. And
10:32
what I discovered is that this let my
10:35
brain catch up with my impulse.
10:38
And I would just get bored, do I really
10:40
want to be doing this? And it wasn't a
10:42
lot. It was like 250 milliseconds. It's human reaction
10:44
time. It gave me just enough time to overcome
10:46
the dopamine addiction loop. And within a couple of
10:48
days, honestly, my addiction was broken because I'm just
10:51
like, oh, no, I actually don't want to be doing this.
10:53
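A rough sketch of the kind of tool Aza describes, assuming it sits in some HTTP proxy you control (he mentions a personal VPN and proxy; the hook, the site list, and the numbers here are illustrative guesses, not his implementation):

```typescript
const sessionStart = Date.now();

// Random delay whose ceiling grows the longer the session runs. Around
// 250ms, roughly human reaction time, is enough for deliberate thought
// to catch up with the impulse to keep scrolling.
function delayMs(): number {
  const minutes = (Date.now() - sessionStart) / 60_000;
  const ceiling = 250 + 50 * minutes;
  return Math.random() * ceiling; // sometimes fast, sometimes slower
}

// Hypothetical proxy hook: stall responses from the sites you want to
// wean yourself off, then forward them unchanged.
async function onResponse(url: string, forward: () => void): Promise<void> {
  const throttled = ["twitter.com", "reddit.com"].some((h) => url.includes(h));
  if (throttled) {
    await new Promise((resolve) => setTimeout(resolve, delayMs()));
  }
  forward();
}
```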
10:55
Wow. Okay, so this is an ingenious invention. How
10:58
do I download it? And are you going to make
11:00
it widely available? For everyone listening to
11:02
this podcast, this is not a super hard thing
11:04
to make. I did it with my own personal
11:06
little VPN and proxy. If anyone wants
11:08
to come help build this thing, please,
11:11
it's not hard. And I think there's a big
11:13
opportunity. I just personally don't have the time. I
11:15
just made it for myself. So let's
11:17
put up that plea. Let's then
11:19
jump up to the global solution.
11:22
We need to hit these companies essentially in their
11:24
business model. We have to hit them where it hurts.
11:27
Imagine sort of a scorecard for the
11:29
effects of social media on teen
11:31
mental health, depression, and suicide, on
11:34
the health of public discourse,
11:36
on the backsliding of democracies.
11:39
You make the big list of all the different kinds of
11:41
harms. And then you have some
11:44
way of evaluating how well
11:46
each company is doing. And
11:48
then you institute
11:51
some kind of latency
11:53
sanction. Like, all right, looks
11:55
like Facebook's doing really badly on teen
11:57
mental health. We're going to in
12:00
a democratic way, slow Facebook down after the
12:02
first n number of minutes. It'll never get
12:05
more than, I don't know, 500 milliseconds of
12:07
delay. It's not like you're stopping it. There's
12:09
no censorship. You're just saying, hey, you're having
12:12
negative externalities, you're affecting the whole, and
12:14
you can imagine how quickly these
12:16
companies are going to innovate their way
12:19
to solving those metrics, like quarter after
12:21
quarter.
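A toy sketch of how such a latency sanction might be computed. Aza specifies only the shape (untouched for the first n minutes, capped around 500 milliseconds, driven by a harm scorecard); every number and field name below is an illustrative assumption:

```typescript
// Scores in [0, 1] from some agreed, independently run evaluation.
interface Scorecard {
  teenMentalHealthHarm: number;
  publicDiscourseHarm: number;
  democracyHarm: number;
}

function sanctionDelayMs(score: Scorecard, minutesUsedToday: number): number {
  const freeMinutes = 15; // the "first n minutes" are never slowed
  if (minutesUsedToday < freeMinutes) return 0;
  const worstHarm = Math.max(
    score.teenMentalHealthHarm,
    score.publicDiscourseHarm,
    score.democracyHarm,
  );
  const capMs = 500; // friction, not blocking: never more than ~500ms
  return Math.round(capMs * worstHarm);
}
```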
12:23
The first thing, of course, I'm sure, in people's minds is
12:25
that seems really scary. Like who'd want to
12:27
give the government the ability to slow down
12:29
websites? This speaks
12:31
to the next thing, which is
12:33
especially in the era of exponentially
12:35
powerful technology as we move into
12:37
AI, we're going to need forms
12:39
of trustworthy governance that are hard
12:41
to capture. We're going to need
12:43
to have the equivalent of citizens'
12:45
juries or other things, like other
12:47
forms of institutions which are hard
12:50
to capture and are capture- and
12:52
corruption-resilient. This, I think, would
12:54
be an excellent place to start prototyping
12:56
what that future vision of what resilient
12:58
institutions would look like. I
13:00
actually think that would be really compelling. Note
13:03
that this is a solution that
13:05
never touches content. It never touches
13:07
content moderation. It never touches censorship.
13:10
It's a solution born out of seeing
13:12
the world as incentives leading to outcomes
13:14
and trying to shift things at
13:17
the incentive level so that you
13:19
can unleash the amazing amount of
13:21
creativity and ingenuity inside of
13:24
these companies that are just doing the
13:26
thing that the incentives tell them to do.
13:29
For uninitiated listeners, can you explain what
13:32
a bridging algorithm does? You
13:34
may have used Twitter and
13:37
seen Community Notes. This is
13:39
actually a bridging algorithm in
13:42
practice. Essentially what
13:44
bridging algorithms do is they look
13:47
for consensus across groups that
13:49
normally disagree. Once it
13:51
finds statements that
13:53
people across multiple different
13:56
divides agree on, then
13:58
it raises that up. It sort of
14:00
promotes those.
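A toy illustration of the core idea: rank a statement by agreement across camps rather than by total votes, so only statements every group tends to endorse rise. Real systems such as Community Notes use a more sophisticated matrix-factorization model; this min-across-groups score is a deliberately simplified sketch:

```typescript
type Group = "A" | "B"; // two camps that normally disagree

interface Vote {
  group: Group;
  agrees: boolean;
}

function bridgingScore(votes: Vote[]): number {
  const groups: Group[] = ["A", "B"];
  const agreementRates = groups.map((g) => {
    const groupVotes = votes.filter((v) => v.group === g);
    if (groupVotes.length === 0) return 0;
    return groupVotes.filter((v) => v.agrees).length / groupVotes.length;
  });
  // High only when *every* camp tends to agree: consensus gets promoted,
  // content that only one side loves does not.
  return Math.min(...agreementRates);
}
```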
14:03
One of my favorite examples of thinking about this is something
14:06
called the perception gap. And
14:08
the perception gap is
14:11
how differently I
14:13
perceive you and your
14:15
beliefs than your actual beliefs.
14:18
When we are fighting, we are
14:20
often not actually fighting with the other side,
14:22
we are fighting with a mirage
14:24
of the other side, a sort of
14:26
a caricature. And now we
14:28
end up in a really interesting place because
14:31
we could start to measure what
14:33
kind of content at scale increases
14:36
perception gap, that is, sort of fills
14:38
you with false beliefs about the other
14:40
side's beliefs. And which kinds of content
14:43
decrease it, helping you see more accurately. And
14:45
you could imagine then an algorithm
14:48
which helps go viral, the content
14:50
that lets us see each other
14:52
correctly. It doesn't make all
14:54
disagreements go away, but it says at
14:56
the very least, we should be able
14:58
to accurately see what all
15:00
the other sides are saying.
15:03
And because we are actually closer than we believe,
15:05
it's sort of like bringing the two sides of
15:08
a wound closer together so it can start to
15:10
heal. I think you alluded
15:12
to your skepticism that a social
15:14
media platform would like this idea
15:16
because outrage is more
15:19
activating in some ways than connection.
15:21
That's right. I'm not entirely convinced.
15:24
I just wonder if we haven't tried the
15:26
right approaches to bridging yet. So for example,
15:29
there was some evidence that was published a few
15:32
years ago showing that people would rather have a
15:34
conversation with a stranger who shared their political views
15:37
than a friend who didn't. And
15:40
I think people recognize that as a massive problem
15:42
in their lives. If we think about family members
15:44
and friends and close colleagues who are
15:46
not speaking to each other or having a hard time
15:49
getting on the same page, that's
15:51
an audience for a bridging algorithm. Facebook,
15:53
they discovered a very simple thing that
15:55
they could do for fighting
15:58
hate speech, disinformation, misinformation, and
16:00
malinformation, all the worst
16:02
stuff, what was that one
16:04
simple thing that they could do? It
16:07
was they could remove the reshare button
16:10
after two share hops. That is, I
16:12
could share something, somebody else could click
16:14
the reshare button, somebody
16:16
else could click the reshare button, but after
16:18
that the reshare button would disappear. Now, if
16:20
you're really motivated, you could copy the text
16:22
and paste it again. So again, no censorship,
16:24
it's just introducing a
16:26
little more friction.
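In code, the intervention Aza describes is almost trivially small; the sketch below (field names assumed for illustration) just tracks how many reshare hops a post has traveled and hides the button after two:

```typescript
interface Post {
  content: string;
  shareHops: number; // 0 for an original post
}

function showReshareButton(post: Post): boolean {
  return post.shareHops < 2; // button disappears after two hops
}

function reshare(post: Post): Post {
  if (!showReshareButton(post)) {
    // A determined user can still copy and paste the text, so this is
    // friction on virality, not censorship of content.
    throw new Error("reshare disabled after two hops");
  }
  return { content: post.content, shareHops: post.shareHops + 1 };
}
```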
16:28
But it comes at the cost of engagement. That's
16:31
why I think we're not going to
16:33
see much traction with bridging algorithms until
16:35
those fundamental incentives are fixed. Did
16:38
you co-coin the phrase, freedom of speech is
16:40
not freedom of reach? I
16:43
coined it and then Renee DiResta,
16:45
a brilliant researcher at the Internet
16:47
Observatory now, she wrote an
16:49
article in Wired that popularized it. So
16:52
I love that phrase. There's a
16:54
certain level of reach that concerns
16:56
me when the content is consequential. So
17:00
thinking about during COVID, health
17:02
information and misinformation or disinformation,
17:05
thinking about posts that
17:07
are safety relevant in
17:09
an area where there might be danger or
17:11
a threat of violence. I've
17:14
often wondered when health and
17:16
safety information reaches a certain
17:18
level of virality, why
17:20
isn't it flagged to not be re-shared
17:23
unless it's fact checked? And
17:25
why isn't there a process for that? Is something like
17:27
that viable? Other countries do
17:29
this. So it is viable. I'm
17:32
thinking of Sinan Aral's work showing that
17:34
lies spread faster and farther than the
17:36
truth. Falsehoods go six times
17:38
faster than truths. This is really important because
17:41
one of the things we want to sidestep
17:43
is, is this piece of
17:45
content true or false? Fact
17:48
checking is hard. And then the
17:50
thing that's next to fact checking is frame
17:52
checking. And now it
17:54
gets very hard to adjudicate. So we
17:56
should be looking not at specific
17:59
pieces of content, but at the
18:01
context surrounding them, how fast they're spreading, what
18:04
is the way that they're spreading, what are
18:06
the incentives for it spreading, and so
18:08
that we can move out of the morass
18:10
of free speech. Because as soon as
18:12
we head down that pathway and solutions that
18:15
require a debate about free speech, we lose.
18:18
I'm realizing it's not so simple as
18:20
just fact checking. Frame
18:22
checking is almost impossible. Yes, that is
18:24
exactly right. And that is why that
18:26
whole program of let's get more fact
18:29
checkers in is just going down the
18:31
wrong solution branch. And so we need
18:33
to be thinking about it at a
18:35
more systems level,
18:37
incentive level, context level
18:40
than content. Congress
18:43
realizes they're a bunch of luddites. They
18:45
put you in charge of a committee to
18:47
make a series of recommendations for what ought
18:49
to be done, societally.
18:54
What are you proposing? We are
18:56
at the cusp of the next era
18:59
of technology, of AI. Which
19:01
way is it going to go? Are we going to
19:03
get the incredible promise of AI, or are we
19:05
going to get the terrifying peril of AI? And
19:09
our point was the same as it's always been, which
19:11
is if you
19:13
want to understand where it's going to
19:15
go, look to the incentives. That's how
19:17
we're able to predict social media. So
19:19
what are the incentives for AI? To
19:23
grow your company as fast as possible,
19:25
to grow your capabilities, get
19:27
them into the market as quickly as
19:29
possible for market dominance so you can
19:31
sort of like wash, rinse, repeat. And
19:33
the shortcuts you're going to take are
19:35
always going to be shortcuts around safety.
19:37
And we are going to recapitulate all
19:40
of the problems of social media, just
19:42
orders of magnitude bigger. And
19:45
the way we like to say it is that social
19:47
media was actually humanity's first
19:50
contact with AI. And
19:52
whereas AI in social media, it's the
19:55
thing that sits behind the screen choosing
19:58
which posts and which videos hit
20:00
your eyeballs and your eardrums. It's
20:03
the algorithm. It's the algorithm. And
20:05
it was a very simple, unsophisticated
20:07
version of AI, and
20:09
its small misalignment, optimizing for the
20:12
wrong thing, sort of broke
20:14
our world. So tell
20:16
us, what might you do? If
20:19
I could wave a magic wand and say,
20:21
all right, every one of these major AI
20:23
companies, there needs to be some way for
20:25
them to give, I don't know, 25%, 40% of their
20:28
compute to
20:34
forecasting all of the foreseeable
20:36
harms that they
20:38
can possibly foresee using the
20:41
new sort of cognitive labor
20:43
that AI affords, so
20:46
that there's now some kind of appropriate
20:48
liability for not doing
20:50
enough to constrain
20:53
those foreseeable harms. And
20:55
then you could imagine, we're gonna need
20:58
some kind of graduated sanctions, and the
21:00
sanction comes in the form of, like,
21:03
I don't know, like a compute tax or
21:05
something like that. This is very, like, much
21:08
of a sketch, because this is hard, but I'm just
21:10
trying to give a flavor of how we might start
21:12
to think about it in a way that's at the
21:14
incentive layer, not at what is a specific thing that
21:16
one company can or cannot do later. I
21:19
think what's tricky about it then is,
21:21
okay, you're gonna pour all those
21:24
compute resources into anticipating risks, and
21:27
then you're not just gonna rely on
21:29
the AI to decide which
21:31
of those risks are high versus
21:33
low probability, or high versus low severity.
21:35
We need to bring in human judgment.
21:38
And then what do we do when we have
21:40
a low probability, high
21:42
severity threat? This work gets
21:45
very, very challenging, because we as
21:47
human beings are not very good
21:49
at, like, emotionally
21:51
attuning to tail risks, especially when
21:53
on the other side of the
21:55
equation, right, because the AI could
21:57
enable, like, terrible
22:00
bioweapons and race-based viruses, a whole
22:02
bunch of terrible things. And you
22:04
can imagine AI just increasing all
22:06
of those tail risks. But on
22:08
the other side, we get incredible
22:10
benefits and the benefits are concrete,
22:12
like cancer drugs. They happen for
22:14
you immediately versus these risks, which
22:16
are diffuse, probabilistic,
22:19
amorphous, and our brains just
22:22
can't deal with that trade
22:24
very well at all. I'm actually very curious what you
22:26
would say about it. How do you weigh those kinds
22:28
of risks? It's a really good question.
22:30
It's a hard one. Frankly, I don't think
22:32
we've cracked it yet. I
22:34
want to just delegate the problem to
22:37
Phil Tetlock and his team of
22:39
superforecasters and say, okay, we have individuals
22:41
who have demonstrated a consistent ability to
22:43
do this. So let's treat them
22:45
as one of your juries. They
22:48
know what they know and they know what they don't know. We
22:50
know a lot about how to train people to be
22:52
better forecasters. A second is, I
22:54
think we can make some of
22:56
this probabilistic information easier to
22:58
digest. I think of the
23:00
work of Gerd Gigerenzer, for example, and colleagues where
23:04
they've shown that natural frequencies are
23:06
easier to process than statistics. And
23:09
so instead of saying that something is 0.1% odds, say this
23:11
is one in a thousand
23:15
and all of a sudden people are more
23:17
likely to take it seriously. Like, wow, that
23:19
could happen.
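The reframing Adam describes is a one-line transformation; a small sketch:

```typescript
// Convert a probability into a Gigerenzer-style natural frequency,
// which people find easier to take seriously than "0.1%".
function asNaturalFrequency(probability: number): string {
  if (probability <= 0 || probability > 1) {
    throw new Error("probability must be in (0, 1]");
  }
  const n = Math.round(1 / probability);
  return `about 1 in ${n.toLocaleString()}`;
}

console.log(asNaturalFrequency(0.001)); // "about 1 in 1,000"
```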
23:21
You have to make them visceral. You have to feel it. What
23:23
we need is a process. And
23:27
this is where I think it
23:29
gets really exciting because the United
23:32
States was founded on the principle
23:34
that we could build a
23:36
more trustworthy form of governance. No
23:39
one really, I think, deeply trusts
23:41
like the institutions that we have now.
23:44
And if we just handed power to
23:46
regulate AI to the government,
23:48
it would probably mess it up in some
23:50
way. There's probably some kind of deep centralization
23:52
of power that would happen. And that's super
23:54
scary in the era of essentially forever
23:57
dystopias. There is no such thing as
23:59
privacy in the AI
24:02
world, everything that can be decoded
24:04
will be decoded. Governance needs to
24:07
scale with AI because otherwise, as
24:09
AI increases its intelligence, you're driving a car
24:12
whose engine is going faster and faster, but
24:14
your steering wheel isn't getting any better,
24:16
that thing is going to break. And we're
24:18
going to need a way of having human
24:21
collective intelligence scale with
24:23
AI. Otherwise, AI will overpower human
24:25
collective intelligence, which is another way
24:28
of saying we lose control. And
24:30
obviously, this is a complex, hard
24:32
topic, and there are things like mini-publics.
24:34
And Audrey Tang's work from Taiwan,
24:36
I think, is the best living
24:38
example of how you can put
24:40
these values into practice.
24:43
That it isn't just sort of like
24:46
a theoretical framework. She,
24:48
with a whole community, has built the
24:50
tools that do these kinds of bridging
24:53
algorithms we're talking about, so
24:55
that citizens can
24:58
set the agenda for
25:00
government to have to listen
25:02
to. They did this for,
25:04
say, how should Uber and
25:06
other ride-sharing apps, how should
25:08
they integrate into society? And
25:11
they asked everyone in society
25:13
to give them the ability to
25:15
contribute what their values are, what they care about.
25:17
And it's a lot of these incredible
25:20
little design philosophies that I
25:22
find super fascinating. Like, in
25:25
her system, there isn't a
25:27
reply button. You can only say your
25:29
value. And if you disagree, you don't
25:31
disagree. You just have to state in
25:33
the positive your value. And
25:36
now you have this big map of everyone's
25:38
values, so that when you then can thumb
25:40
up something, be like, oh, yes, I agree
25:42
with this idea, they can use
25:44
a bridging algorithm to say, well, we know
25:46
what everyone's positive value statements are. So let's
25:48
find the policy
25:51
or the agenda that we really care about
25:53
that sits across those divisions
25:56
in our society. So we're finding the things
25:58
that knit and heal versus the... things that
26:00
divide. Audrey Tang was a
26:03
software programmer. And then they
26:05
created a new position for her, and now
26:07
she's Taiwan's first ever Minister of Digital Affairs.
26:10
Yes. Why are we not doing that? I
26:13
think honestly it's an imagination gap.
26:16
We cannot imagine a
26:18
system different than we
26:21
have now. It turns out there are a
26:24
huge number of really brilliant people working on these
26:26
kinds of things. So if
26:28
the US government were to put, I don't know,
26:31
let's just say $10 billion per year into
26:35
upgrading democracy
26:38
itself, not just digital democracy
26:40
adding more forms online, but
26:42
let's do the most American
26:44
thing, which is innovate our
26:46
way to a new form of what
26:49
democracy looks like. I want to
26:51
vote not left, not right, but upgrade. So
26:53
if I were to draw a Venn diagram of a
26:55
techie and a hippie, do you live in the middle?
26:58
I spent a lot of time in nature and
27:00
there is something profound about feeling
27:02
the smallness of your breath against the
27:05
largeness of the universe.
27:08
But I don't know if I'd say I'm a hippie and I don't know if I'd say I'm
27:10
a techie. Well that actually is
27:13
a great segue to where I wanted to
27:15
go. I was stunned when you said
27:18
last summer that you thought it might be possible
27:20
for us to understand whales
27:22
and maybe even talk to them one
27:24
day. Why do you want to
27:26
communicate with whales? We're
27:28
trying to talk to whales and already that's
27:31
not why we're trying to do it. We do
27:33
not change when we speak. We change
27:35
when we listen. The goal for
27:38
Earth Species Project is to
27:40
learn how to listen to
27:42
whales and orangutans and parrots,
27:45
the other non-human cultures
27:47
of Earth, some of which have been
27:50
communicating for 34
27:52
million years, passing down languages
27:54
and dialects and cultures because
27:57
whatever it is that is the solution to
27:59
humanity's problems, I'd argue it's
28:01
probably not in our imagination, because if
28:03
it was, we'd be doing it. So
28:05
what we're looking for are things that
28:08
are outside the sphere of
28:10
human imagination. And just to preempt, I
28:12
think, your listeners' questions, like we're talking
28:14
about animal languages, does such a thing
28:16
even exist? And I just want to
28:18
give a couple quick examples that
28:21
I think will, like, help illustrate this.
28:23
Many animals have names that
28:25
they will call each other by,
28:27
sometimes even in the third person.
28:29
Parrot parents will spend the
28:32
first couple of weeks of their
28:34
chicks' lives leaning over and whispering
28:36
into each of their individual children's
28:38
ears a unique name, and
28:41
the children will sort of, like, babble back
28:43
until they can get it, and they will
28:45
use that unique name for the rest of
28:47
their lives. Mind blown. And
28:51
then, just to give another
28:53
example, a 1994 University of Hawaii
28:55
study where they were teaching
28:57
dolphins two gestures. And the
29:00
first gesture was, do something you've never
29:02
done before. And it takes a lot
29:04
of patience and a lot of fish to, like, communicate that
29:06
idea to a dolphin, but they will get it. Were
29:09
you a kid who was obsessed with Aquaman? What's
29:13
the origin story of this? I was a kid
29:15
that was obsessed with everything. You
29:17
must have been a very annoying
29:19
kid. This idea really came, actually,
29:22
from hearing a story on NPR
29:25
about this incredible animal, the
29:27
gelada monkey. The researchers said
29:29
they had one of the
29:31
largest vocabularies of any primates,
29:33
humans excepted. And when you
29:35
listen to them, they sound like women and children
29:37
babbling, and they sort of do turn taking, and
29:39
it's this complex vocal thing. And she's like,
29:41
we don't know what they're saying, but I
29:43
swear they're talking about me behind my back.
29:45
They were out there with like a hand
29:47
recorder, hand transcribing, trying to figure out what
29:50
they were saying. And the thought
29:52
sort of struck me, like, why aren't we
29:54
using machine learning to translate? And
29:56
that changed in 2017, when suddenly AI developed
30:01
the ability to translate between
30:03
human languages without
30:05
the need for any Rosetta stone
30:07
or any examples and that's the
30:09
moment that it was time to
30:11
start Earth Species Project, start actually
30:14
going out to the field and learning
30:16
from biologists and the why really grew
30:18
with it. When I look out at
30:21
the structure of humanity's largest
30:23
problems, like I think
30:25
there's a connective thread between all of
30:28
them, whether it's the opioid epidemic
30:30
or the loneliness epidemic or climate
30:33
change or inequality, it
30:36
always takes the form of a narrow
30:38
optimization at the expense of a whole. Some
30:41
part of the system optimizing,
30:43
whether it's for GDP at
30:45
the expense of climate or
30:48
whether it's trying to grab people's
30:50
attention at the expense of mental
30:52
health and backsliding democracies. It's
30:54
always a narrow optimization that breaks
30:57
the whole, and Earth Species is
30:59
fundamentally about reconnection and a narrow
31:01
optimization at the expense of the
31:03
whole is fundamentally a different
31:05
way of saying that is disconnection from
31:08
ourselves, from each other, the natural world,
31:10
a disconnection of our systems to their
31:12
large-scale effects. When you think about
31:16
many of the just-so stories, indigenous
31:19
myths, they almost always start out with
31:21
human beings talking with nature,
31:23
talking with animals, and that
31:25
moment of disconnection is symbolized by
31:28
the moment we can no longer communicate
31:30
with nature. This isn't just a question
31:32
of what we
31:34
must do. Fundamentally, this is a question
31:36
of who we must be, like
31:39
to change our identity, to
31:42
change the stories we tell ourselves in
31:44
order to live, to change our myths,
31:46
to reconnect us. At the deepest
31:49
level, that's the hope of what
31:51
Earth Species can help bring
31:54
about. And just to name, in self-awareness, that
31:56
no one thing can do this. There is
31:58
no silver bullet, but maybe there is
32:00
silver buckshot. Hey,
32:04
rethinking listeners, we're supported by our
32:06
friends at Working Smarter, a new
32:09
podcast from Dropbox exploring the exciting
32:11
potential of AI in the workplace.
32:14
Working Smarter talks with founders, researchers, and
32:16
engineers about the things they're building and
32:18
the problems they're solving with the help
32:20
of the latest AI tools. Tools
32:23
that can save them time, improve collaboration,
32:25
and create more space for the work
32:27
that matters most. On Working
32:30
Smarter, hear practical discussions about what AI
32:32
can do so that you can work
32:34
smarter, too. Listen to Working
32:36
Smarter on Apple Podcasts, Spotify, or
32:38
wherever you get your podcasts. Or
32:41
visit WorkingSmarter.ai. Hi,
32:45
I'm Ben. I suffer
32:47
from a condition called writer's block.
32:50
It strikes when I'm at work. That's
32:52
why I choose Canva Magic Write. It
32:55
works fast, generating text in
32:57
seconds. It's powered by AI. Common
33:00
side effects include increased productivity, compliments
33:02
from coworkers, feelings of satisfaction. Now
33:05
I can say bye-bye to writer's
33:07
block. Ask your boss if Canva
33:09
Magic Write is right for you at
33:11
canva.com, designed for work. Let
33:16
me suggest now we go to lightning round. What
33:18
is the worst advice you've ever gotten? That
33:21
feeling in your body is telling you be careful
33:24
or something's up. Don't listen to that.
33:27
Push through. Oof. If
33:30
you could talk to any animal species, which
33:33
one would you choose? If
33:35
you were to talk to everyone on the Earth species
33:37
team, each person would have a different animal they're most
33:39
excited about. But for me, it's
33:41
beluga. Because beluga,
33:44
if you actually listen to them, they
33:46
sound like nothing you're expecting. They sound like
33:49
an alien modem. The
33:51
cultures of belugas and dolphins and
33:53
whales, they go back 34 million
33:56
years. For something to have survived 34
33:58
million years of cultural evolution,
34:00
there has to be some deep wisdom in
34:02
there. I am so curious
34:04
to get the very first glimpses of
34:06
what that might be. You're
34:09
in conversation with a beluga whale. If
34:12
you could ask one question, what would it be?
34:14
I'd want to know, what does
34:16
it feel like to be them? What
34:18
is the question you have for me? You prove,
34:21
okay, animals think
34:23
they have language, there's an interiority.
34:26
What for you changes? What do you think the implications are?
34:30
I guess my hope is that we start
34:32
to realize that we need to do a
34:34
much better job, both avoiding
34:37
harm to and taking care of
34:39
species that aren't human. And
34:42
that this is a watershed moment. The
34:46
skeptical side of me says, we've
34:48
tried this with a lot of human cultures
34:50
and failed pretty much every time. It's
34:53
so easy to dehumanize people that
34:56
we already know are sentient and
34:58
entire groups that we already know
35:00
feel extreme pain. Why
35:03
would it be any different with animals? Whenever
35:05
I think about that, I'm like, well, it is
35:07
true that even though we know other
35:09
humans speak, we still do terrible things to
35:12
them. And imagine how much
35:14
worse it would be if they couldn't speak at all.
35:17
You mentioned earlier whales,
35:19
orangutans, parrots. How
35:22
did you go about deciding which animals? A
35:24
lot of which animals we decide to
35:27
work with are driven by the deep
35:29
insights of the biologists that have been
35:31
out there in the field. So
35:33
for instance, why start thinking about orangutans?
35:36
It's because one of our partners, Adriano
35:39
Lameira, was able to
35:41
show in the last couple of years that
35:43
orangutans have a kind of past tense. They
35:46
can refer to events that happened
35:48
at least up to 20 minutes ago. It's probably longer,
35:50
but that's as far as he's been able to show
35:52
so far. And when you think about language,
35:54
two of the big hallmarks of language are being
35:57
able to talk about things that are not here
35:59
and not now. Parrots, as we're talking
36:01
about, they have names they call each other by. And
36:03
I honestly think even
36:05
just a campaign that let the world
36:07
know that animals have names, like that
36:10
would already start to shift human culture
36:12
and how we relate. I
36:14
think probably for a long time
36:16
I assumed that cognitive
36:19
capabilities tracked with vocal range.
36:22
But we all know that's not true. Parrots,
36:24
they can say incredible things. I
36:27
don't think their thinking capacity
36:29
is anywhere near what a dolphin is,
36:31
for example. How do
36:33
you weigh those two sets of factors? I'll
36:36
push back a little bit. There was a
36:38
Nature publication maybe
36:40
three, four years ago now where they're
36:43
looking at ravens and crows and
36:46
their cognitive capabilities compared to, say,
36:48
the great apes. And they're
36:50
on par. This is the general thing
36:52
we find, which is as human beings, our
36:54
ability to understand is limited by our ability
36:56
to perceive. And generally speaking,
36:59
we just haven't been perceiving
37:02
enough. It seems like
37:04
this is long overdue because I've looked for
37:06
years at these supposed
37:09
intelligence rankings of animals and
37:11
said, well, this is just a function of the tasks that
37:13
we've given. Yes, exactly. And the way that we know how
37:16
to score them. And it's really
37:18
easy to discover that a
37:20
pigeon is dumb if you don't give
37:22
it a navigation task. Yes. And
37:24
then all of a sudden you do and you realize, wow, it's
37:26
a lot smarter than us when it comes to finding its way
37:29
around the world. And I wonder how
37:31
many species we've underestimated that way. One
37:33
of my favorite examples of this comes from
37:35
the mirror test is when you take
37:37
an animal and you paint a dot on them where they
37:40
can't see and they're unaware of it, they look in the
37:42
mirror and then they start trying
37:44
to get that dot off of them or investigate
37:46
it. It's a test
37:48
of self-awareness. They
37:50
have to look into a mirror and say,
37:53
oh, that image
37:56
in the mirror, that is me. That's a big step
38:00
to take. It means there's an interiority and a sense
38:02
of self. It was thought for the longest time that
38:04
elephants couldn't pass the mirror test, but
38:07
then it turned out
38:09
that it's just because scientists were using small mirrors.
38:12
No. Right? It's
38:14
just like if you measure the thing wrong, all
38:16
it needed was a bigger mirror, then
38:19
suddenly what looked unintelligent becomes very
38:21
intelligent. You were really careful
38:23
to stress that we should just listen or
38:26
that listening is the primary goal. It's at
38:28
the center. It's the primary goal exactly. As
38:30
soon as we're capable of deciphering and understanding,
38:32
someone is going to want to communicate. What's
38:36
your answer to the question of should we
38:38
open Pandora's box? Because I feel like the
38:40
standard Silicon Valley response to this is not
38:42
satisfying. It's, well, somebody else is going to
38:44
do it if we don't and we're more
38:46
ethical than they are. So we need to
38:49
do it first. Which to me is just
38:51
dripping with narcissism and arrogance. Yeah. It's the,
38:53
like, well, I want to do it. So
38:55
I'm going to find the belief that
38:57
lets me do the thing I want to do. Exactly.
39:01
So why do you want to open the box
39:03
despite that risk? We're going to uncover a whole
39:06
bunch of new responsibilities about what does it mean
39:08
to be able
39:10
to communicate with the other
39:12
non-human cultures of Earth. And
39:15
of course, if it confers any kind
39:17
of power, it's going to start a race and that race will
39:19
end in tragedy. So I think
39:21
to be a sort of
39:23
humane technologist or responsible technologist, really
39:25
just to be a technologist,
39:27
means to pre-think through all the
39:29
ways you're going to start some
39:31
kind of race. What are the
39:33
ways that your technology is going
39:35
to be abused or cause harm?
39:38
We might create like a whale QAnon or something.
39:40
We don't know. So
39:42
we need to be really careful about
39:45
going out and starting to just speak
39:47
in the same way. You could imagine
39:49
factory farms using it. You could imagine
39:51
poachers using it to attract animals.
39:53
You could imagine ecotourism
39:55
using it to attract animals. So there is
39:58
no such thing as a technology that doesn't
40:00
have externalities and doesn't have
40:03
bad actor abuses. So
40:06
what do we do? So that means we
40:08
need to race ahead and start thinking about
40:10
what are the international norms
40:13
and treaties and laws and other
40:15
things that can bind those races.
40:17
I think we're going to need
40:20
whatever the equivalent of a Geneva
40:22
Convention for cross-species communication is. And
40:25
to give another example, when we started Earth
40:28
Species, we were doing everything open source.
40:31
We're like, it's good to get these
40:33
models out to as many of the
40:35
scientists as possible because as we
40:37
build the tools to decode animal communication
40:40
and translate animal language, we're also building
40:42
the tools that it turns out all
40:44
biologists need just to do their work and
40:46
their conservation work. And we've realized, actually,
40:48
that was a naive
40:50
value, that we can't just
40:53
open source everything. We're going to have to go through
40:55
a gated release. So as we build these models, we're
40:57
just not going to ship them to everyone. There's going
40:59
to have to be some kind of application process. And
41:01
then we're going to have to start thinking through, and
41:03
this is not just for us, but the wider space,
41:06
what is the right way so
41:08
that we as one entity
41:11
can't sort of abuse our centralized power? How
41:13
do we find these processes that we've been
41:15
talking about that make it a trustworthy process
41:18
for who gets access to the models? How
41:21
do you think about the problem of privacy
41:23
violations? In
41:26
general or for animals? For
41:28
animals. I'm thinking that you're trying
41:30
not to disrupt or disturb whales by
41:32
listening in. Yeah. But they
41:34
also didn't give you permission to listen in. In the
41:36
process, if we learn how to ask
41:39
whether we're violating consent, then we
41:41
can actually just ask and find out. If
41:43
we think about whales, for example, what year do
41:45
you think it'll be when we
41:47
can understand everything that they're saying, or
41:49
maybe not everything, but where we
41:52
can decipher a significant
41:54
chunk of their communication? And of
41:56
course, we're talking about science here, so I just want to
41:58
call out that any prediction of where
42:00
we're going to be is uncertain, but we are this
42:03
year heading into our first non-wild
42:06
two-way AI to animal communication experiment.
42:09
And we're seeing, can we essentially
42:11
pass the Turing test for a
42:14
specific kind of songbird, a zebra
42:16
finch? Can you swap one zebra
42:18
finch out for the AI zebra
42:20
finch and see if the actual animal can tell
42:23
the difference?
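One way to picture the analysis behind such a playback test, as a toy sketch (the protocol is Earth Species Project's; the 10% threshold and the simple response-rate comparison are illustrative assumptions, and a real study would use a proper statistical test):

```typescript
interface Trial {
  stimulus: "real" | "ai"; // was the partner's call recorded or generated?
  responded: boolean;      // did the live zebra finch call back?
}

function responseRate(trials: Trial[], kind: "real" | "ai"): number {
  const subset = trials.filter((t) => t.stimulus === kind);
  return subset.filter((t) => t.responded).length / Math.max(subset.length, 1);
}

// If the bird answers AI calls about as often as real ones, it cannot
// tell the difference, i.e. the AI "passes" this narrow Turing test.
function passesPlaybackTest(trials: Trial[]): boolean {
  return Math.abs(responseRate(trials, "real") - responseRate(trials, "ai")) < 0.1;
}
```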
42:25
What sound does an elephant make, or two elephants make when they come together? And
42:27
that means something about greeting, affiliation, but maybe it means
42:29
they miss you, or maybe it means I'm glad to
42:31
see you. Maybe it means their
42:33
name, but you can see, okay, well, it happens when one
42:35
elephant is running really quick and flapping
42:37
its ears, and we know that that has
42:40
emotional connotation to it. And
42:42
so you can see that as we start to pass
42:44
the Turing test, we get towards decoding pretty quickly. This
42:47
is a real frame shift. We started out, and
42:49
I was under the impression that the goal
42:51
is to learn something that can benefit humanity.
42:55
And also, it would be really nice if it got
42:58
us to be kinder to animals.
43:01
It did not occur to me that you're in a position
43:03
to actually help the very animals
43:05
that you're communicating with. So the idea, for
43:07
example, that you could put a warning device
43:09
on every major ship that
43:11
would signal to any underwater creature, get out
43:13
of the way. That
43:16
could be very meaningful. You could potentially
43:18
do the same thing in a rainforest with
43:20
birds, right? Is one
43:22
of your aspirations to be able
43:24
to use some of what you learned to actually
43:27
save species from extinction? Yes,
43:30
absolutely. And I just want to paint a
43:32
picture in everyone's head for what might a
43:34
translation look like? Because are we
43:36
just talking about a Google Translate, and you say whatever you
43:38
want, it comes out. And it probably won't look
43:41
like that. I think there are parts of
43:44
the experience that we share with animals.
43:46
We know that whales will carry their
43:49
dead children for up to three weeks.
43:51
Pilot whales do this. And
43:53
it looks like grief is a shared
43:55
experience. But then there are huge portions
43:58
of their experience that we might never be
44:00
able to directly translate. Like
44:03
Sperm whales spend 80% of their
44:05
lives a kilometer deep in
44:07
complete darkness and seeing in
44:09
3D sound. That's
44:12
not anything in the human experience.
44:15
What might those translations be? I think those
44:17
translations are likely to be much more poetic.
44:19
It might be a snatch of music with
44:22
a specific kind of color. It's
44:24
some kind of multimodal translation.
44:27
We won't know what it means exactly, but
44:29
we will get a sense over time
44:31
and maybe it'll be our children who grow
44:34
up immersed in these odd translations from other
44:36
beings and other cultures that they're like, oh,
44:38
I get it. I have a sense for
44:40
what that thing is. Things to look
44:43
forward to and brace ourselves
44:45
for. Yeah, exactly. Awesome. Thanks,
44:47
Aza. Thank you so much, Adam. This
44:49
was super fun. To be continued. Agreed.
44:54
As we're waiting for digital platforms to
44:56
evolve, I have one thought about a
44:58
small step we can each take to
45:00
reduce polarization and misinformation. People
45:03
often say, I'm entitled to my opinion.
45:06
I want to rethink that. Yes, you're
45:08
entitled to your opinion in your head. But
45:10
if you decide to share that opinion, it's
45:13
your responsibility to change your mind when
45:15
you come across better logic or better
45:17
evidence. Rethinking
45:22
is hosted by me, Adam Grant. This
45:24
show is part of the TED Audio
45:27
Collective. And this episode was produced and
45:29
mixed by Cosmic Standard. Our
45:31
producers are Hannah Kingsley-Ma and Aja Simpson.
45:33
Our editor is Alejandro Salazar. Our
45:36
fact-checker is Paul Durbin, original music by
45:38
Hansdale Hsu and Allison Leyton-Brown. Our
45:41
team includes Eliza Smith, Jacob
45:43
Winik, Samiah Adams, Michelle Quint,
45:45
BanBan Cheng, Julia Dickerson, and
45:47
Whitney Pennington Rodgers. Humpback
45:56
whales, their songs go viral. And
45:59
for whatever reason, the Australian humpbacks are
46:01
like the K-pop singers and their songs
46:04
will spread, and we don't know why, over
46:06
the entire world within a couple
46:08
of seasons sometimes and then everyone
46:11
is singing like the Australian pop
46:13
songs. Support
46:15
for the show comes from Brooks Running.
46:17
I'm so excited because I have been
46:19
a runner, gosh, my entire adult life
46:22
and for as long as I can remember, I
46:24
have run with Brooks Running
46:27
shoes. Now I'm running with a
46:29
pair of Ghost 16s from Brooks,
46:32
incredibly lightweight shoes that have
46:34
really soft cushioning, it feels
46:37
just right when I'm hitting my running
46:39
trail that's just out behind my house.
46:41
You now can take your daily run
46:43
in the better than ever Ghost 16,
46:46
you can visit brooksrunning.com to learn more.
46:52
PRX.
Podchaser is the ultimate destination for podcast data, search, and discovery. Learn More