Episode Transcript
0:00
Support for this show comes from SoFi. Traditionally, access to IPOs has not been aimed at individual retail investors. With SoFi Invest, you can get in on the IPO action at IPO prices. Get started by seeing what IPOs are available now on sofi.com slash IPO. Investing in IPOs comes with risks, including risk of loss. Please visit sofi.com slash IPO risk. Offered by SoFi Securities LLC, member FINRA, SIPC.
0:35
Before we get to the show, we've got an exciting announcement. We're doing a live taping of our game show, Unexplainable or Not, on September 21st at the Greene Space in New York. We can't wait to have some crowd noise that our engineer, Cristian, doesn't need to sound design. So if you're in New York or you want to make the trip, we'd love to see you. You can find tickets at vox.com slash unexplainable live, and you can find a link in the show notes for our August 30th episode.
1:05
There are lots of stories about mind reading. Stories about people who can eavesdrop on your thoughts. I can read every mind in this room. Stories about aliens who can communicate telepathically. You can read my mind, I can read yours. Even stories about machines built to make thoughts more transparent. We'll be measuring the tiniest electrical impulses of your brain, and we'll be sending impulses back into the box. But one thing these stories all have in common is that they are just stories. Until pretty recently, most mainstream scientists agreed that reading minds was the stuff of fiction.

1:46
But now... New research shows that tech can help read people's private thoughts. They're training AI to essentially read your mind. In the last few decades, we've been able to extract more and more things from people's minds.

2:03
And last May, a study was published in the journal Nature that got a lot of play in news outlets. In that paper, a group of Texas scientists revealed that they'd been able to translate some of people's thoughts into words on a screen. The thing that you could call mind reading. But do we want machines reading human minds? I'm Byrd Pinkerton, and on this episode of Unexplainable: how much can these researchers actually see inside of our heads? Will they be able to see more someday soon? And what does all this mean for our privacy?
2:54
I reached out to one of the co-authors on this paper to get some answers to these questions: this guy named Alex Huth, who researches how the brain processes language.

3:05
And Alex has a word of caution on the terminology here. A lot of people call this mind reading. We don't use that term in general, because I think it's vague. And what does that mean? He prefers a more descriptive word, which is decoding.
3:21
So basically, when the brain processes language or sounds or emotions, whatever, it generates this huge flurry of activity. And we can capture that activity with a variety of tools. Like electroencephalography, for example, which is EEG; that reads electrical impulses from the brain. Or fMRI machines, which will take pictures of our brain at work as we react to the things that we experience. But then researchers like Alex have to decode the cryptic signals that come from these machines. And in Alex's case, his lab is trying to parse exactly how the brain processes language.

3:59
So for them, decoding means taking the brain responses and then trying to figure out, like, what were the words, what was the story, that elicited these brain responses. So how do you do that? That is what this paper from May was all about.
4:15
Step one in their process of decoding the mind is, and I swear I'm not making this up, listening to lots of podcasts. So we just had people go in the MRI scanner over and over and over and over and listen to stories. That was it. Alex and his fellow researchers took seven people and played them a variety of shows. This is the Moth Radio Hour from PRX. So Moth stories, right? The Moth Radio Hour. And also the Modern Love podcast from the New York Times.

4:45
So we're just listening to tons and tons of these stories, hours and hours and hours. Which sounds kind of fun. Right? It's not that bad. It's a dream experiment, really.
4:54
So that's it for this episode of the Moth Radio Hour. But then things got a little less dreamy, because the researchers actually had to decode all this very fun data, to kind of match up words and phrases from these podcasts to the signals coming from the brain. Which might sound easy, but unfortunately, fMRI has one small problem. Which is that what it measures sucks.
5:23
fMRI measures blood flow in the brain, and the amount of oxygen in that blood. It turns out that when you have a burst of neural activity, if your neurons are active, they call out to nearby capillaries and say, like, hey, I need more energy. So let's say you hear the word unexplainable, for example. A bunch of neurons in different parts of your brain will fire and call for energy, which comes to them via blood. And over the next three-ish seconds, you see this increase in blood flow in that area. And then over the next five seconds, you see a slow decrease.
5:55
But it's not like your brain is only firing one thought at a time and then kind of waiting for blood flow to clear an area, right? It's potentially hearing lots of words, even whole sentences, in that 8 to 10 second period. Like maybe it's hearing: thanks so much for listening to Unexplainable, please leave a review. And all those words could trigger activity in the brain, which leaves researchers like Alex with this very messy, scrambled picture to decode. Because that means that every brain image that we measure is really some mushy combination of stuff that happened over the last 10 seconds. So if every brain image that you see is like a mushy combination of 20, 30 words, like, how the hell can you do any kind of decoding?
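To make that mushiness concrete, here's a minimal simulation sketch of the problem. This is not the researchers' code; the response curve below is a rough stand-in for the canonical hemodynamic response function used in real fMRI analyses. It shows how words arriving every half second smear together into slow, overlapping blood-flow measurements:

```python
import numpy as np

# Rough hemodynamic response function (HRF): blood flow ramps up for
# about 3 seconds after a burst of neural activity, then decays over
# the following several seconds. Real analyses use a canonical
# double-gamma HRF; this simplified curve is just for illustration.
def crude_hrf(t):
    return np.where(t < 0, 0.0, (t / 3.0) * np.exp(1.0 - t / 3.0))

dt = 0.1                                  # simulation step, seconds
t = np.arange(0, 20, dt)

# Suppose the listener hears one word every half second for 10 seconds.
word_times = np.arange(0, 10, 0.5)
neural = np.zeros_like(t)
neural[np.round(word_times / dt).astype(int)] = 1.0  # one spike per word

# The measured BOLD signal is (approximately) the neural spike train
# convolved with the HRF, so each sample mixes ~10 seconds of words.
bold = np.convolve(neural, crude_hrf(t), mode="full")[: len(t)] * dt

# fMRI takes a snapshot every couple of seconds; print a few samples.
for sample_t in np.arange(0, 20, 2.0):
    print(f"t={sample_t:4.1f}s  BOLD={bold[int(round(sample_t / dt))]:.3f}")
```

Every printed snapshot overlaps heavily with its neighbors, which is exactly the "mushy combination of 20, 30 words" problem Alex describes.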
6:42
For a while, the answer was: you could not really do very much decoding. This was a huge roadblock to this research, until around 2017, when we got the first real seeds of something you've almost certainly heard about in the news. This chatbot called ChatGPT. It's a large language model, AI, trained on a large amount of text across the internet. The language model that powers ChatGPT is much more advanced than what Alex's team started using. They were working with something called GPT-1, which is a much more basic model that came out in 2018. But this model did help Alex and his team sort of sort through the mushy, noisy pictures that they were getting from fMRI scans, and sharpen the image a little bit.
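As an illustration of the broad strategy, which public accounts of the paper describe as guess-and-check: a language model proposes plausible word sequences, an "encoding model" predicts the brain response each candidate should produce, and candidates are kept or discarded based on how well that prediction matches the real scan. The function names below are placeholders I've invented for the sketch, not the lab's actual code:

```python
# Hypothetical sketch of language-model-guided decoding via beam search.
# propose_continuations(), predict_bold(), and similarity() stand in for
# the real components (GPT-style word proposals, a trained encoding
# model, and a match score); they are assumptions, not a real API.

def decode(observed_bold, propose_continuations, predict_bold,
           similarity, n_steps=50, beam_width=10):
    beams = [("", 0.0)]  # (candidate transcript so far, cumulative score)
    for _ in range(n_steps):
        candidates = []
        for text, score in beams:
            # The language model narrows the search: rather than testing
            # every possible word, only test plausible continuations.
            for word in propose_continuations(text):
                extended = (text + " " + word).strip()
                # The encoding model maps a word sequence to the brain
                # response it *should* evoke...
                predicted = predict_bold(extended)
                # ...and candidates are scored by how well that predicted
                # response matches what the scanner actually measured.
                candidates.append(
                    (extended, score + similarity(predicted, observed_bold)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # the highest-scoring guess at the heard story
```

The key inversion is that nothing maps brain data to words directly: candidate sentences are generated forward, and the brain data only votes on which candidates survive.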
7:34
It was still hard. Like, even with a language model helping him, it took one of Alex's grad students, this guy named Jerry Tang, years to really perfect this. But finally, after some testing, some retesting, checking their work, they were successful. They could pop someone into an fMRI machine, play them a podcast, scan their brain, and decode the signals coming from their brain back into language on a screen.
8:04
The decoding here was not perfect. Like, for example, here's a sentence from a story that the researchers played for a subject: I don't have my driver's license yet, and I just jumped out right when I needed to. Their decoder interpreted the brain scans and came up with this: She's not even started to learn to drive yet. I had to push her out of the car. Again, the story that was played: She says, well, why don't you come back to my house and I'll give you a ride? And the decoder: I said, we will take her home now. The story: I say, OK. The decoder: And she agreed. So as you can hear, in the decoder's translations, pronouns get mixed up. In other examples that the researchers provide in their paper, some ideas get lost. Others get garbled. But still, overall, the decoder is picking up on the kind of main gist of the story here. And it's not likely that it was just lucky. Like, it does seem to be reading these signals.
8:59
And that would be amazing enough. But the researchers did not stop there. Jerry designed a set of experiments to test: how far can we go? For example, they wanted to see if they could decode the signals coming from someone's brain if the person was just thinking about a story, and not hearing it. So they ended up having people memorize a story. And then, instead of playing them a podcast, they just asked them to think about the story while they were in an fMRI machine.
9:28
And then we tried to decode that data. And? It worked pretty well, which I think was kind of a surprise, the fact that that worked. Because this meant that this tool wasn't just detecting what a person was hearing, but also what they were imagining. Which is also interesting, because it suggests that there's some kind of parallel, potentially, between hearing something and just thinking about it. Like, our brains are doing something similar when we listen to speech and when we think about it.
10:00
And the researchers found other interesting parallels too. Like, they tried this other experiment. Which was just weird, and I still think it's kind of wild that it worked. We had the subjects go in the scanner and watch little videos. Silent videos, with no speech, no language involved. They were actually using Pixar shorts. And again, they collected people's brain activity while they were watching these things, and then popped that activity into their decoder. And it turns out that the decoded things were quite good. For example, one video is about a girl raising a baby dragon, and in the decoding example that they give, there are definitely moments where the decoder is way off. Like, at one point something falls out of the sky in kind of a surprising way, and the decoded description is, quote, my mom brought out a book and she was like, wow, look what I made. Which is not super related. But other moments do sync up pretty well. Like, at one point the girl gets hit by a dragon tail and falls over, and the decoded text is, quote, I see a girl that looks just like me get hit on her back and then she is knocked off. And that was wild.
11:12
And it also potentially says something really interesting about the brain, right? Like, that even as we watch something that doesn't involve language at all, on some level our brains seem to be processing it into language, sort of descriptions of what's on screen. That was, like, exciting and weird. And I don't know that I expected that to work as well as it did.
11:34
Now, this research is part of a longer line of work. Like, other researchers have been able to do stuff that's sort of similar to this by implanting devices into the brain, for example. They've even been able to use fMRI machines to reconstruct images and sounds that brains have been thinking about. But Alex and his lab, they've really taken an impressive step towards decoding part of this sort of messy chaos of free, revealing thought that runs through someone's head. And that's kind of wild. You know, the first response to seeing this was like, oh, this is really exciting. And then the second response was like, oh, this is actually kind of scary too, that this works.
12:13
It's especially unsettling, at least to me, from a privacy perspective. Like, right now, I can think pretty much whatever I want, and nobody can probe those thoughts unless I choose to share them. And to be clear, it's not obvious that this technology is going to change that. There are a lot of barriers in place right now keeping our brains private. Like, these decoders have to be tailored to one individual brain, for example. You can't take the, whatever, many hours of another person sitting in the scanner and use it to predict this person's brain responses, or to decode this person's brain responses.
12:49
So unless you're currently in an fMRI machine having your brain scanned, and you also recently spent many hours in an fMRI machine listening to podcasts, you probably don't need to worry too much that someone is reading your thoughts. And even if you are in an fMRI machine listening to, I don't know, this podcast, you could still keep your thoughts from being read, because Alex and his team tested whether someone had to cooperate in order for the decoder to work. Like, if they actively try to make it not work, does it fail? And it turns out that, yes, it does fail in that situation. Like, if a subject refuses to listen and does math in their head, for example, like takes a number and keeps adding seven to it, the decoder does a really bad job of reading their thoughts as a result. Like, its answers become much more random. Still, barriers like this, like the need for a bespoke decoder for each person's brain, or the ability to block a decoder with one's thoughts... That's definitely not a fundamental limitation, right? That's definitely not something that's, like, never going to change. Maybe it won't, maybe that'll still be necessary, but that doesn't seem like a fundamental thing.
16:06
Support for this show comes from Gold Peak Real Brew Tea. There's a time of day, about an hour before sunset, where the rays feel warm and the breeze feels cool. But that hour of golden bliss is always gone too soon. You might rekindle that feeling with a bottle of Gold Peak. And with high-quality tea leaves, its smooth taste transports you to golden hours, at any hour. Gold Peak Tea. It's got to be gold.
16:39
Support for this show comes from DraftKings. DraftKings Rainmakers Football is back for its second season, and it's better than ever. This week, new customers can claim their first pack of digital player cards for free to get started. Each DraftKings digital card represents an athlete, and you score points based on their real-world performance. Draft them into weekly contests for your shot at a share of $30 million in prizes. Or sell them any time on the DraftKings Marketplace. Rainmakers contests require no fee to join, as long as you have enough cards to complete a lineup. Build your collection for your chance at some big wins. Wondering how to get started? New customers: visit DraftKings.com slash audio today and use promo code UNEXPLAINED to claim a free starter pack. Only at DraftKings.com slash audio with code UNEXPLAINED. Gambling problem? Call 1-800-GAMBLER. Age and eligibility restrictions apply. Rainmakers contests are not available in certain states. One starter pack per customer. Starter pack player cards are ineligible for resale. See terms at DraftKings.com slash Rainmakers.
17:47
The mind of this young researcher is as frantic and busy as a, say, as a city.

17:59
So this is where things stand: researchers like Alex have a decoder that can look at a bunch of brain data and translate it, to tell researchers what a subject is hearing or thinking. It's amazing, but at least for now, it involves a lot of clunky technology, a lot of time, and a lot of cooperation from the person whose mind is being decoded. So most people are probably not going to have machines spitting out all their exact thoughts anytime soon.
18:25
But... Don't let that comfort you. Nita Farahany is still concerned. She is a bioethicist who studies the effects of new technologies, sort of what they mean for all of us legally, ethically, and culturally. And recently, she published a whole book about tools that read the brain. I was somebody who had already been following this stuff for a long time, and as I dove into the research for the book, I mean, I was like: what? Really?
18:50
was like, what? Really? Nita
18:53
is less focused on fMRI research
18:55
trying to get at exact thoughts, and
18:57
instead, most of her book focuses on different
19:00
brain reading tools, these tools that are becoming more
19:03
and more commonplace. Everyday
19:05
wearables, primarily that are reading
19:07
electrical activity in the brain. Basically,
19:10
Basically, when you think, or when your brain sends instructions to your body, your neurons give off a little electrical discharge. And because hundreds of thousands of neurons are firing in your brain at the same time, you can pick up, using brain sensors, the broader signals that are happening. This is electroencephalography, or EEG, the technology we've mentioned before. It's less precise than something like fMRI; it doesn't tell you where in the brain the signals are coming from. But it also doesn't require you to sit in a loud machine for hours. EEG devices can take readings just by being applied to the head. And also, when the brain sends signals out into the body, like, say, into the wrist, sensors can measure the electrical activity of the muscles that happens as a result. And these sensors can be miniaturized and put into earbuds and watches and headphones. Because the level of detail is lower, there isn't a way, at least right now, to use EEG readings to do what Alex can do with an fMRI machine, to decode brain activity into words running through people's heads. But these devices can detect things like alertness, tiredness, focus, or reactions to stimuli.
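As a concrete illustration of how a wearable might turn raw EEG into one of those readings: a common textbook approach to drowsiness detection compares power in the alpha band (8 to 12 Hz, which tends to rise when people get drowsy or close their eyes) to power in the beta band (13 to 30 Hz, associated with alert focus). This is only a sketch of that general idea, not any particular product's algorithm:

```python
import numpy as np
from scipy.signal import welch

def drowsiness_index(eeg, fs=256):
    """Crude alpha/beta power ratio from one EEG channel.

    eeg: 1-D array of voltage samples; fs: sampling rate in Hz.
    A higher ratio is commonly read as reduced alertness. Real systems
    add artifact rejection, multiple channels, and per-person calibration.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)    # power spectral density
    alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()  # drowsy/relaxed band
    beta = psd[(freqs >= 13) & (freqs <= 30)].mean()  # alert-focus band
    return alpha / beta

# Synthetic demo: 30 seconds of noise plus a strong 10 Hz oscillation,
# mimicking the alpha-heavy EEG of a drowsy wearer.
fs = 256
t = np.arange(0, 30, 1 / fs)
drowsy = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(f"drowsiness index: {drowsiness_index(drowsy, fs):.2f}")
```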
20:21
And these readings aren't always very precise. But as Nita dove into her research, she found that these devices are already being used in all kinds of contexts. It would be like, oh, imagine if it was used in this way. And then I would find an example of it being used in that way. And I'm like: what? You know?
20:41
Some of the uses, or potential uses, for these EEG tools are actually kind of promising. They could help people track their sleep better, potentially track cognitive deterioration. Nita says they could maybe help people with epilepsy get alerts about changes in their brain that could mean a seizure, and they could help people measure their own pain more accurately. But they also have a lot of uses that feel a little closer to invasions of privacy.
21:12
So, for example, these wearable EEGs can be used to measure recognition. Like, when your brain sees something, any kind of stimulus, like a house or a face or a goose, say, your brain reacts to the stimulus. And it reacts differently if you recognize it versus if you don't recognize it. It does this super fast, like, even before you're consciously aware of it. And if you recognize that goose or face or house, your brain then fires a signal that says: I know that goose or face or house.
21:44
And because an EEG reader can then detect that signal, a researcher named Dawn Song, along with some collaborators, showed that this can be used in pretty concerning ways. What they did was, as people were playing video games wearing EEG devices, subliminally they flashed up images of numbers, and they were able to go through and figure out recognition of numbers, without the person even knowing that the numbers were being flashed up in the video game. And just by doing this, just by supplying sort of subliminal prompts and then measuring reactions, these researchers were able to get some pretty personal data. Things like your PIN number, even home addresses, through this recognition-based interrogation of the brain.
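The signal being exploited there is an event-related potential, often the P300: a small positive voltage bump roughly 300 milliseconds after a stimulus the brain treats as meaningful. Here's a hedged sketch of the basic logic (the timings, windows, and data layout are illustrative assumptions, not Dawn Song's actual protocol): flash each candidate digit many times, average the EEG around each digit's flashes to suppress noise, and rank digits by the size of the post-300 ms bump.

```python
import numpy as np

def recognition_score(eeg, flash_samples, fs=256):
    """Average EEG epochs time-locked to one digit's flashes, and return
    the mean amplitude in a ~250-450 ms post-stimulus window, where a
    recognition-related P300 bump would appear. Illustrative only."""
    win = int(0.6 * fs)                          # keep 600 ms after each flash
    epochs = []
    for s in flash_samples:                      # sample index of each flash
        segment = eeg[s : s + win]
        if len(segment) == win:
            epochs.append(segment - segment[0])  # crude baseline correction
    avg = np.mean(epochs, axis=0)                # averaging suppresses noise
    lo, hi = int(0.25 * fs), int(0.45 * fs)
    return avg[lo:hi].mean()

# Hypothetical use: flash digits 0-9 subliminally during a game, record
# when each digit appeared, then rank digits by evoked response; the
# digits of a recognized PIN should stand out.
# scores = {d: recognition_score(eeg, flashes[d]) for d in range(10)}
# print(max(scores, key=scores.get))
```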
22:33
That same recognition measurement has also been used in criminal investigations. Police have interrogated criminal suspects to see whether or not they recognize details of crime scenes. This is not a new thing. Like, as early as 1999, a researcher in the U.S. claimed that he could use an EEG lie detector test to see if convicted felons recognized details of crimes. This has been used by the Singapore police force, and by investigators in India, as evidence in criminal trials. And there are lots of arguments that the data that comes from these machines is not good enough or reliable enough to base a criminal conviction on. But whether or not this technology really works, if people believe the results of an EEG lie detector like this, it can have really serious consequences.
23:24
And not just in the court system. Like, an Australian company came up with a hat that monitors EEG signals of employees. There's already a lot of employers worldwide who've required employees to wear devices that monitor their brain activity for whether they're tired or wide awake, like for commercial drivers. It's also big in the mining industry. Hats like this have been worn by workers not just in Australia but across the world. And while that might seem worthwhile if it prevents accidents, some places have started monitoring more than just tiredness.
23:59
Like, there are reports of Chinese companies rolling out hats for their employees, testing for boredom and engagement, even depression or anxiety. The reporting around these suggests that EEG is way too limited to do a great job at reliably detecting those kinds of emotions. But again, these tools don't need to work well to have professional or privacy consequences. There's risks on the side of it being really accurate, and what it reveals. And then there's risks on it not being perfectly accurate, and how people will use or misuse or misinterpret that information.
24:35
I think this workplace stuff is especially startling to me. Because when I first started reading about these EEG devices, I thought: OK, I will simply never purchase a watch that monitors my brainwaves. Problem solved. Yeah, I mean, so most people's first reaction to hearing about this stuff is like, OK, I'm just never going to use one of those. Great, thank you for letting me know. I will avoid it at all costs. But if you have to have one of these for work, that takes away that element of choice. Or similarly, Nita told me about this EEG tool in the works right now that lets you type just by thinking. And if something like that becomes the default way of typing, then maybe having a brain-monitoring tool like this also becomes the default. Like having a cell phone: technically, you can live without one, but it is logistically difficult.
25:29
It just becomes inescapable. And people are generally outraged by the idea that most companies require the commodification of your personal data to use them as free services, whether that's a Google search or that's a Facebook app or a different social media app. And then they seem to forget about it and do it anyway. And so there's all kinds of evidence that people trade their personal privacy for convenience all the time. Right.
26:00
This is why Nita says that we should think seriously about the implications of technologies like these EEG readers right now, as well as the implications of more advanced thought-reading technologies, like the fMRI-based ones that researchers like Alex are working on. It's really exciting to make our brains transparent to ourselves. But once we make our brains transparent to ourselves, we've also made them transparent to other people. At the simplest level, that's terrifying. I think, from my perspective, there is nothing more fundamental than the basic sanctity of our own minds. What we're talking about is a world in which we had assumed that that was inviolate, and it's not.
26:48
All this made me wonder: should we shut all this down? Should we stop trying to find ways to read minds, and just tell researchers like Alex Huth to stop working on stuff like his fMRI brain decoder? For Alex, it's tricky, because this research isn't like working on the nuclear bomb, for example. It's not a tool that is pretty much only good for killing people. I think it's more like, I don't know, computers themselves. We have shown that computers can be used for bad things, right? They can be used to surveil us, or collect data about us as we browse the internet. They're also used in all kinds of ways that are very good. Similarly, if EEG devices are used to monitor brain waves and then detect problems like Alzheimer's or concussions, that would be a win. If the fMRI work in Alex's lab helps us understand the fundamental workings of the brain, how our mind processes language, I think that's good. And other versions of brain-reading tech are being used to help people with paralysis communicate. I think in the same way that, like, something can be big and have implications in a lot of different ways, it kind of matches that mold, rather than, like, the nuclear bomb mold.
28:02
But he does worry. After his paper came out, Alex actually reached out to Nita to ask about the ethical implications of his work. And he was not particularly surprised when she told him that decoding minds could lead to pretty concerning consequences for privacy. Yeah, I mean, I've been reading her books, so I think I kind of knew what page she was on. The thing that did surprise him was when he started asking her about some further experiments his team is considering.
28:31
Right now, for example, Alex says their decoder can pick up the stories someone is hearing, but not the stray, random thoughts they're having about that story. Like, incidental thoughts. It's not clear whether or not it's even possible to pick up those kinds of thoughts. But when Alex was talking to Nita, he asked her: should he try and figure out if it's possible? Like, should he try and probe deeper into people's minds? Are there things that we shouldn't do? Like, is this a thing that we shouldn't do?
29:00
He thought she'd say: Alex, shut it down. Like, stop going deeper. But she didn't. If we don't have the facts, it's very difficult to know what the ethics should be. Her view was, you know, her community, the ethicists, philosophers, lawyers, and so on, they need data. They need information to do what they do. And they need information like: is this possible or not? You know, unless you know what you're dealing with, how do you develop effective countermeasures? How do you develop effective safeguards? So she was like, you should do that. Like, you kind of have a responsibility to do that.
29:44
So now Alex is in kind of an odd position. It's a little weird. It's a little weird, like, feeling maybe we have a responsibility to do these things now, things that are creepier, because, I don't know, so we can see what the limits are, and we can talk to people about that openly, instead of somebody just going and doing it and hiding it away. I don't know.
30:03
I don't know either. But I do understand this argument, that it's important to figure out the unknowns here. Some of this stuff still feels kind of like science fiction to me, and it's hard to know really how far this tech will advance, or how transparent it could make our brains. But I do think there is at least a case here for mapping things out, right? To understand what the limits of this technology might be, so that we can put safeguards in place if we need to.
30:41
Nita Farahany is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. If you want to hear more from her, Vox's Sigal Samuel did a great interview with her on The Gray Area podcast; look for "Your Brain Isn't So Private Anymore." And Sigal also has a great text piece about mind decoding on our site, vox.com. You can find out more about Alex Huth's work by looking up the Huth Lab at the University of Texas at Austin.
31:12
This episode was produced by me, Byrd Pinkerton. It was edited by Brian Resnick and Meredith Hodnot, who also manages our team. We had sound design and mixing from Cristian Ayala, and music from Noam Hassenfeld. Serena Solin checked our facts, and Manding Nguyen's favorite fruit is mango. This podcast and all of Vox is free in part because of gifts from our readers and listeners. You can go to vox.com slash give to give today. And if you have thoughts about our show, or ideas for episodes that we should do in the future, please email us. We are at unexplainable at vox.com. You can also leave us a review. Both would be very much appreciated. Unexplainable is part of the Vox Media Podcast Network, and we will be back next week.