Episode Transcript
0:00
Hey, it's Jason. Before
0:02
we get to the conversation, we wanted to
0:04
mention that from June 11th through EFF's anniversary
0:06
on July 10th, joining or renewing your EFF
0:09
membership is just $20. And
0:11
Craig Newmark Philanthropies will match up to
0:14
$30,000 of donations for your first year
0:16
if you're a new sustaining donor. Many
0:19
thanks to Craig, founder of Craigslist and
0:21
a persistent supporter of digital rights for
0:23
making this possible. And don't
0:25
miss out on limited edition merch featuring
0:27
mysterious creatures who protect digital rights that
0:30
we're calling the Encrypteds. You
0:33
know, like Bigfoot, a longtime privacy
0:35
advocate. Just go
0:37
to eff.org/summer. Enjoy
0:39
the show. Contrary
0:45
to some marketing claims, AI is not
0:48
the solution to all of our problems.
0:50
So I'm just going to talk about
0:52
how AI exists in Kitopia. And
0:55
in particular, the technology is
0:58
available for everyone to understand.
1:01
It is available for everyone to use
1:03
in ways that advance their own values,
1:06
rather than hard-coded to advance
1:09
the values of the people who are providing
1:11
it to you and trying to extract something
1:13
from you. And as opposed
1:15
to embodying the values
1:17
of a powerful organization,
1:20
public or private, that
1:22
wants to exert more power
1:24
over you by virtue of automating its
1:26
decisions. So it can make more decisions
1:29
classifying people, figuring out whom
1:31
to favor, whom to disfavor. I'm
1:33
defining Kitopia a little bit in terms of what
1:35
it's not. But to get back
1:37
to the positive vision, you
1:40
have this intellectual
1:42
commons of research, development
1:46
of data that we haven't really touched
1:48
on privacy yet, but data that is
1:50
sort of sourced in
1:53
a consensual way. And
1:56
essentially, one of the things that
1:58
I would love to have is a little
4:02
We're talking to Kit and Jacob both because this is
4:04
such a big topic that we really need to
4:06
come at it from multiple angles to make sense
4:08
of it and to figure out the answer to
4:10
the really important question, which is how can AI
4:12
actually make the world we live in a better
4:15
place? So while many other people
4:17
have been trying to figure out how to
4:19
cash in on AI, Kit and Jacob have
4:21
been looking at AI from a public interest
4:23
and civil liberties perspective on behalf of the
4:25
EFF and they've also been giving
4:28
a lot of thought to what an ideal
4:30
AI world looks like. AI
4:32
can be more than just another tool that's
4:34
controlled by big tech. It really does have
4:36
the potential to improve lives in a tangible
4:38
way and that's what this discussion is all
4:40
about. So we'll start by
4:42
trying to wade through the hype and really
4:44
nail down what AI actually is and how
4:47
it can affect, and is affecting, our daily lives.
4:52
The confusion is understandable because AI is
4:55
being used as a marketing term quite
4:57
a bit, rather than as a
5:00
scientific concept. And
5:05
the ways that I think about AI,
5:07
particularly in the decision-making context,
5:10
which is one of our top
5:12
priorities in terms of where we
5:14
think that AI is impacting people's
5:17
rights, is first I think about
5:20
what kind of technology are we really
5:22
talking about. Because sometimes
5:25
you have a tool that actually
5:27
no one is calling AI but
5:29
it is nonetheless an example of
5:31
algorithmic decision making. That
5:33
also sounds very fancy. This can
5:35
be a fancy computer program to
5:38
make decisions or it can be
5:40
a buggy Excel spreadsheet that litigators
5:42
discover is actually just omitting
5:44
important factors when it's used to decide
5:47
whether people get healthcare or not in
5:49
a state healthcare system. You're
5:52
not making those up, Kit. These are real
5:54
examples. That's not a hypothetical. Unfortunately,
5:56
it's not a hypothetical and the
5:58
people who litigated that case
6:01
lost some clients because when you're talking about
6:03
not getting healthcare, that can be life or
6:06
death. And machine learning can either be
6:08
a system where humans
6:13
code a reinforcement mechanism. So
6:16
you have sort of random changes
6:18
happening to an algorithm and
6:20
it gets rewarded when it succeeds according
6:23
to your measure of success and
6:25
rejected otherwise. Or it can
6:27
be training on
6:29
vast amounts of data. And
6:32
that's really what we've seen a huge
6:34
surge in over the past few years.
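To make the first flavor concrete, here is a minimal sketch of that random-change-plus-reward loop in Python. Everything in it (the weights, the target, the scoring function) is invented for illustration rather than taken from the episode:

```python
import random

# "Random changes happening to an algorithm": the algorithm here is just a
# list of numeric weights, and "your measure of success" is closeness to a
# target behavior we define. All names and numbers are illustrative.
TARGET = [0.2, -0.5, 0.9]

def score(weights):
    # Higher is better: negative squared distance from the target.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

weights = [0.0, 0.0, 0.0]
for _ in range(5000):
    # Make a small random change...
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    # ...reward it (keep it) if it scores better, reject it otherwise.
    if score(candidate) > score(weights):
        weights = candidate

print(weights)  # ends up close to TARGET
```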
6:37
And that training can either be
6:39
what's called unsupervised where you just
6:41
ask your system that you've created
6:43
to identify what the patterns are
6:46
in a bunch of raw data, maybe
6:48
raw images, or it can be
6:50
supervised in the sense that humans,
6:53
usually low-paid humans,
6:56
are coding their
6:58
views on what's reflected in the
7:00
data. So: "I think that this is
7:02
a picture of a cow," or
7:05
"I think that this picture is adult
7:08
and racy." So some of these
7:10
are more objective than others. And
7:13
then you train your computer
7:15
system to reproduce those kinds
7:17
of classifications when it
7:19
makes new things that people ask for
7:21
with those keywords, or when it's asked
7:23
to classify a new thing that it
7:25
hasn't seen before in its training data.
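The supervised flavor can be sketched the same way: humans label examples, and the system learns to reproduce those classifications on inputs it has never seen. This toy nearest-centroid classifier, with made-up features and labels, shows the idea:

```python
# Each training example pairs some features with a human-assigned label.
# The features and labels are fabricated for illustration.
LABELED = [
    ((0.9, 0.8), "cow"),
    ((0.8, 0.9), "cow"),
    ((0.1, 0.2), "not_cow"),
    ((0.2, 0.1), "not_cow"),
]

def centroid(points):
    # Average position of a set of 2-D feature vectors.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Training": summarize what the human labels say each class looks like.
centroids = {
    label: centroid([f for f, lab in LABELED if lab == label])
    for label in {lab for _, lab in LABELED}
}

def classify(features):
    # Predict the label whose training examples this new input most resembles.
    def dist(label):
        c = centroids[label]
        return sum((features[i] - c[i]) ** 2 for i in range(2))
    return min(centroids, key=dist)

print(classify((0.85, 0.75)))  # "cow": a new thing not in its training data
```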
7:28
So that's really a very
7:31
high-level oversimplification of the technological
7:33
distinctions. And then because
7:35
we're talking about decision making, it's really
7:37
important who is using this tool.
7:40
Is this the government which
7:43
has all of the power of the state
7:45
behind it and which administers a whole lot
7:48
of necessary public benefits that
7:50
is using these tools to decide who is worthy
7:52
and who is not to obtain
7:55
those benefits, or who should
7:57
be investigated, or what neighborhoods should be investigated.
8:00
We'll talk a little bit more about
8:02
the use in law enforcement later on.
8:05
But it's also being used quite a bit
8:07
in the private sector to
8:09
determine who is allowed to get
8:12
housing, whether to employ someone, whether
8:15
to give people mortgages. And
8:18
that's something that impacts
8:20
people's freedoms as well.
8:22
So Jacob, two questions I use to
8:25
distill down AI decision-making are: who
8:27
is the decision-making supposed to be serving? And
8:29
who bears the consequences if it gets it
8:31
wrong? And if we think of those
8:34
two framing questions, I think we get at
8:36
a lot of the issues from a civil liberties
8:38
perspective. That sound right to you? Yeah,
8:41
and talking about who bears the consequences when
8:44
an AI or technological system gets it
8:46
wrong. Sometimes it's the person
8:49
that system is acting upon, the person who's being
8:51
decided whether they get healthcare or not. And
8:54
sometimes it can be the operator. It's
8:56
popular to have kind of a human in the
8:58
loop, like, oh, we have this AI
9:00
decision-making system that's maybe not
9:03
fully baked. So there's a
9:05
human who makes the final call. The AI just
9:07
advises the human. And there's
9:09
a great paper by Madeleine Clare
9:12
Elish describing this as a form
9:14
of "moral crumple zone." So
9:16
you may be familiar in a car, modern
9:19
cars are designed so that in a collision,
9:22
certain parts of the car will collapse
9:24
to absorb the force of the impact.
9:26
So the car is destroyed, but the
9:28
human is preserved. And
9:30
in some human in the
9:32
loop decision-making systems, often involving
9:35
AI, it's kind of the reverse.
9:37
The human becomes the crumple zone for when the
9:39
machine screws up. You were supposed to
9:41
catch the machine's screw-up. It didn't screw
9:43
up in over a thousand iterations. And then the one time
9:45
it did, well, that was your job to catch it. And
9:49
obviously, a
9:51
crumple zone in a car is great. A moral
9:53
crumple zone in a technological system is
9:55
a really bad idea. And it takes away
9:57
responsibility from the deployers of
10:00
that system who ultimately need
10:02
to bear the responsibility when their system harms people.
10:08
So I want to ask you, what would it
10:10
look like if we got it right? I
10:12
mean, I think we do want to have some
10:14
of these technologies available to help people make
10:16
decisions. They can find patterns in giant data sets, probably
10:18
better than humans can most of the time, and
10:21
we'd like to be able to do that.
10:23
So since we're fixing the internet now, I want
10:25
to stop you for a second and ask you
10:27
like, how would we fix the moral crumple zone
10:29
problem? Or what are the things we think
10:31
about to do that? I
10:33
think for the specific problem of holding,
10:37
say, a safety driver or
10:39
a human decision maker
10:41
responsible for when the AI system
10:43
they're supervising screws up, I think
10:45
ultimately what we want is that
10:47
the responsibility can be applied all
10:49
the way up the chain to the folks who decided
10:51
that that system should be in use. They
10:54
need to be responsible for making
10:56
sure it's actually a safe, fair
10:58
system that is reliable and suited
11:00
for purpose. And when
11:03
a system is shown
11:05
to bring harm, for instance, a
11:07
self-driving car that crashes into pedestrians
11:09
and kills them, that needs
11:12
to be pulled out of operation and either
11:14
fixed or discontinued. Yeah, it made
11:16
me think a little bit about kind
11:18
of a change that was made, I think, by Toyota
11:20
years ago where they let the people on the front
11:23
line stop the line. I
11:25
think one thing that comes out of that is
11:27
you need to let the people who are in
11:29
the loop have the power to stop the system.
11:33
And I think all too often we
11:35
don't. We devolve the responsibility down to
11:37
that person who's kind of the last
11:39
clear chance for something, but we don't
11:41
give them any responsibility to raise concerns
11:43
when they see problems, much less the
11:45
people impacted by the decisions. And
11:48
that's also not an accident;
11:50
it's part of the appeal of these
11:52
AI systems. It's true
11:54
that you can't hold a machine
11:56
accountable, really, but that doesn't
11:58
deter all of the potential
12:01
markets for the AI. In fact,
12:03
it's appealing for some regulators, some
12:05
private entities to be able to
12:08
point to the supposed wisdom and
12:10
impartiality of an algorithm,
12:12
which, if you understand where
12:14
it comes from, the fact that it's
12:16
just repeating the patterns or biases that
12:18
are reflected in how you trained it,
12:20
you see it's actually just sort
12:22
of automated discrimination in many
12:25
cases. And that
12:28
can work in several ways. In
12:30
one instance, it's intentionally
12:33
adopted in order
12:35
to avoid the possibility of
12:37
being held liable. We've heard
12:40
from a lot of labor
12:42
rights lawyers that when
12:45
discriminatory decisions are made,
12:48
they're having a lot more trouble proving
12:50
it now because people can point to
12:52
an algorithm as the source
12:55
of the decision. And
12:57
if you were able to get insight
13:00
into how that algorithm was developed, then
13:02
maybe you could make your case. But
13:04
it's a black box. A lot of
13:06
these things that are being used are
13:09
not publicly vetted or understood. And
13:11
it's especially pernicious in the context
13:13
of the government making decisions about
13:15
you, because we have
13:18
centuries of law protecting your
13:21
due process rights to understand
13:23
and challenge the ways that
13:25
the government makes determinations about
13:27
policy and about your specific
13:30
instance. And when
13:32
those decisions and when those
13:35
decision making processes are hidden
13:37
inside an algorithm, then
13:39
the old tools aren't
13:42
always effective at protecting your due
13:44
process and protecting the public participation
13:46
in how rules are made. It
13:51
sounds like in your better future, Kit,
13:54
there's a lot more transparency into these
13:56
algorithms, into this black box that's hiding
13:59
them from us. Is that part of what you see
14:01
as something we need to improve to
14:03
get things right? Absolutely.
14:07
Transparency and openness of AI systems
14:09
is really important to make sure
14:11
that as it develops,
14:13
it develops to the benefit of
14:16
everyone. It's developed in plain sight.
14:18
It's developed in collaboration with communities
14:21
and a wider range of people
14:23
who are interested and
14:25
affected by the outcomes, particularly
14:28
in the government context. I'll
14:30
speak to the private context as well. When
14:33
the government passes a new law,
14:36
that's not done in secret. When
14:39
a regulator adopts a new rule, that's
14:41
also not done in secret. Hopefully.
14:44
Sure, there are exceptions.
14:46
Right, but that's illegal. Yeah,
14:48
that's the idea. Right. We
14:52
want to get away from that also. Yeah,
14:54
if we can live in
14:56
Kitopia for a moment where
14:58
these things are done more
15:00
justly, within the
15:03
framework of government rulemaking, if
15:06
that's occurring in a way that
15:08
affects people, then there is participation.
15:11
There's meaningful participation. There's meaningful accountability.
15:13
In order to meaningfully have public
15:15
participation, you have to have transparency.
15:18
People have to understand what
15:20
the new rule is that's going to come into force.
15:24
Because of a lot of the hype and
15:26
mystification around these technologies, they're
15:28
being adopted under what's called
15:30
a procurement process, which is the process you
15:32
use to buy a printer. It's
15:35
the process you use to buy an appliance,
15:37
not the process you use to make policy.
15:40
But these things embody policy. They are
15:42
the rule. Sometimes when
15:44
the legislature changes the law, the tool
15:47
doesn't get updated, and it just keeps
15:49
implementing the old version. That
15:52
means that the legislature's will is being
15:54
overridden by the designers of the tool.
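As a hypothetical illustration of "the tool is the rule": imagine an eligibility threshold hard-coded at procurement time. If the legislature later raises the limit, the deployed code silently keeps enforcing the old policy. The statute, numbers, and names below are all invented:

```python
# Written into the tool when it was bought; the statute has since moved on.
INCOME_LIMIT = 1_200  # dollars per month

def eligible_for_benefits(monthly_income: float) -> bool:
    # The statute, not this constant, is supposed to be the rule. If the
    # legislature raises the limit to $1,500 and nobody updates the tool,
    # the old version keeps getting implemented.
    return monthly_income <= INCOME_LIMIT

print(eligible_for_benefits(1_400))  # False under the tool, True under the new law
```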
16:00
You mentioned predictive policing, I think, earlier, and
16:02
I wonder if we could talk about that
16:04
for just a second, because
16:06
it's one way where I think we
16:08
at EFF have been thinking a lot
16:10
about how this kind of algorithmic decision-making
16:12
can just obviously go wrong and maybe
16:14
even should never be used in the
16:17
first place. What we've seen is
16:19
that it sort of, you know,
16:21
very clearly reproduces the problems
16:23
with policing, right? But
16:25
how does AI or this
16:28
sort of like predictive nature
16:30
of the algorithmic decision-making for
16:32
policing exacerbate these problems?
16:34
Like, why is it so dangerous, I guess
16:36
is the real question. So
16:38
one of the fundamental features of
16:40
AI is that it
16:43
looks at what you tell it to look at,
16:45
it looks at what data you offer it, and
16:47
then it tries to reproduce the patterns that are
16:49
in it. In
16:52
the case of policing, as
16:54
well as related issues around
16:57
decisions for pretrial release and
17:00
parole determinations, you are
17:02
feeding it data about how
17:05
the police have treated people, because
17:07
that's what you have data about.
17:10
And the police treat people
17:13
in harmful, racist, biased,
17:15
discriminatory, and deadly ways that
17:19
it's really important for us to change,
17:22
not to reify into
17:24
a machine that
17:27
is going to seem impartial
17:29
and seem like it creates a veneer
17:32
of justification for
17:34
those same practices to continue. And
17:37
sometimes this happens because the machine
17:40
is making an ultimate decision, but
17:42
that's not usually what's happening. Usually
17:44
the machine is making a recommendation.
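Here is a toy sketch of that feedback loop, with fabricated data. A system trained on arrest records (not on actual offenses) recommends patrols wherever policing was already concentrated, and the extra patrols then generate still more arrest records in the same places:

```python
from collections import Counter

# Suppose underlying offense rates are equal, but downtown is patrolled
# more, so it accounts for more recorded arrests. The data is invented.
past_arrests = ["downtown"] * 80 + ["suburb"] * 20

counts = Counter(past_arrests)
total = sum(counts.values())

def patrol_recommendation():
    # Reproduce the pattern in the data: recommend patrols in proportion
    # to past arrests, i.e., to how people were policed, not what happened.
    return {area: n / total for area, n in counts.items()}

print(patrol_recommendation())  # {'downtown': 0.8, 'suburb': 0.2}
```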
17:47
And one of the reasons we don't think
17:49
that having a human in the loop is
17:52
really a cure for
17:54
the discriminatory harms is
17:57
that humans are more likely by
32:00
certain laser printers, by
32:05
most laser printers that you can get as
32:10
an anti-counterfeiting measure. This
32:15
is one of our most popular discoveries that
32:20
comes back every few years, if I remember right, because
32:25
people are just gobsmacked that they can't see them, and
32:28
they can't make money. Indeed,
32:33
yeah. The
32:38
other thing people really worry about is that
32:40
AI will make it a lot easier
32:43
to generate disinformation and then
32:45
spread it. And
32:48
of course, if you're generating disinformation, you
32:53
can actually run it through a program. You
32:58
can see what the shades of all the
33:00
different pixels are, and
33:03
you in theory probably know what
33:05
the watermarking system in use
33:07
is. And given that degree
33:09
of flexibility, it seems very, very likely, and
33:13
I think past technology has proven this out, that
33:16
it's not going to be hard to strip out the watermark.
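As a sketch of why, assume a naive watermark hidden in each pixel's least significant bit; real schemes are more elaborate, but the dynamic is the same. Anyone who knows, or guesses, the scheme can erase it without visibly changing the image:

```python
# Toy 8-bit channel values carrying a hypothetical least-significant-bit
# watermark. Zeroing that bit destroys anything encoded there while
# shifting each value by at most 1, which is imperceptible.
def strip_lsb_watermark(pixels):
    return [value & ~1 for value in pixels]

watermarked = [201, 118, 54, 255, 33, 90]
print(strip_lsb_watermark(watermarked))  # [200, 118, 54, 254, 32, 90]
```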
33:20
You end up in a cat-and-mouse game where
33:23
the people who you most want to
33:25
catch, who are doing sophisticated disinformation, say,
33:27
to try to upset an election, are going
33:29
to be able to either strip out
33:31
the watermark or fake it, and
33:34
so you end up where the things that you most
33:36
want to identify are probably going to trick people. Is
33:38
that the way you're thinking about it? Yeah,
33:41
that's pretty much what I'm getting at. I
33:44
wanted to say one more thing on watermarking.
33:47
I'd like to talk about chainsaw dogs.
33:50
Oh, yes. Yeah. There's
33:52
this popular genre of image on Facebook
33:54
right now of a
33:56
man and his chainsaw-carved wooden dog,