Episode Transcript
0:15
Pushkin. I'm
0:19
Maeve Higgins, and this is Solvable:
0:22
Interviews with the world's most innovative
0:24
thinkers who are working to solve the
0:26
world's biggest problems. In
0:29
this episode, Anne Applebaum is in conversation
0:32
with researcher and data analyst Renee
0:34
DiResta about her solvable, which
0:37
is the growing spread of dangerous
0:39
misinformation online, especially
0:42
on social media. For this to be
0:44
most solvable, I think we need increasing
0:46
awareness, increasing cooperation,
0:49
helping algorithms make better decisions, recognizing
0:52
that recommendation engines
0:55
are not functioning as they should, and that we should
0:57
be taking tangible steps to think about
0:59
ways in which algorithmic curation serves
1:02
information to people. In
1:05
late twenty sixteen, Oxford Dictionaries
1:07
selected post-truth as their
1:09
word of the year, defining it as
1:11
relating to or denoting circumstances
1:14
in which objective facts are less
1:16
influential in shaping public
1:19
opinion than appeals to emotion
1:21
and personal belief. It's
1:23
like, I want to believe that nachos
1:25
are the ideal balanced nutritional
1:28
snack that appeals to my emotional
1:30
and personal belief system, because God,
1:32
I love nachos. So
1:34
I'll go and I'll find some vague chitchat
1:36
online that tells me, you know, something like
1:39
melted cheese is totally full of calcium
1:41
that is good for your bones, and it's important
1:44
for you, as an immigrant to the US, to assimilate
1:46
by eating their national dish of nachos.
1:50
So I'll convince myself of
1:52
that, and I'll maybe even eat myself
1:54
into a delicious early grave. The
1:58
age we live in, the digital age,
2:00
affects every narrative we see and absorb,
2:03
and that can be news based, or cultural
2:05
or artistic. We have always
2:07
had an instinct to find information that
2:09
syncs with our perspective, and
2:12
now a host of new platforms
2:14
are only too happy to oblige
2:16
that part of us. Pew reports
2:18
that an analysis of almost
2:21
four hundred million Facebook users' interactions
2:23
with over nine hundred news outlets
2:26
found that people tend to seek information
2:29
that aligns with their views.
2:31
That makes many of us vulnerable to accepting
2:33
and acting on misinformation. Social
2:37
media firms are under pressure to halt the
2:39
spread of fake content on their platforms,
2:42
and we know that the problem has both
2:44
a human and a technical side, and
2:47
so too does any potential solution.
2:50
Renee DiResta is the director of research
2:53
at New Knowledge and a Mozilla
2:55
Fellow in Media, Misinformation and Trust.
2:57
She investigates the spread of malign narratives
3:00
across social networks and helps policymakers
3:03
to understand and respond to the problem.
3:06
Renee has advised Congress and the State Department,
3:09
and she studies some fascinating areas
3:11
of disinformation in contexts
3:14
like pseudoscience, conspiracies,
3:17
terrorism, and state sponsored
3:19
information warfare, all
3:21
that spooky stuff. I'm so glad she is
3:23
scouting ahead and sending us back
3:26
the best ways to deal with this. Let's
3:28
take a listen and I'll speak to you after.
3:31
So, Renee, you're one of the few people
3:34
who identified the problem
3:36
of online anti
3:38
vax disinformation very
3:41
early on. How did you first come into contact with
3:43
the problem? How did you know it was a problem at all? I
3:45
started working on a law in California
3:47
called SB two seventy seven, and it was a law to
3:49
eliminate vaccine opt outs. And I was
3:51
a parent with a new baby, and I
3:53
wanted, as a mom, just to volunteer
3:55
to help get this law passed. So I am
3:58
a data analyst, and I offered to do
4:00
some analysis into things like the
4:03
social media conversation around the law. And
4:05
I was really surprised because the legislators,
4:07
there were a number of legislators from both
4:09
parties who were supporting the law. They were saying that
4:11
their constituents were polling at around eighty five
4:14
percent in favor, but the social media
4:16
conversation was almost one hundred percent negative,
4:18
and that was on Facebook and Twitter. So
4:20
I started working with another data scientist named
4:22
Gilad Lotan to look at
4:25
the conversation on Twitter, to look at the different
4:27
distinct groups, how they were evolving their messages,
4:30
how they were connecting with other activists
4:32
outside of California, how sometimes activists
4:34
outside of California, it turned out, were pretending
4:37
to be Californians, with bunches
4:39
of new accounts that had been created, and we
4:41
were really looking at the idea of what had no
4:43
name then but kind of came to be called manufactured
4:45
consensus, the idea that the conversation
4:48
online was really being driven by a
4:50
relatively small number of people who
4:52
were using things like tools to be always
4:54
on, constantly being in the hashtag,
4:57
Facebook groups and ads to amplify their
4:59
message, and then the way that the algorithm
5:02
was amplifying the message. In addition
5:04
to that, so ways in which I, as
5:06
a parent who had just gotten involved in the conversation,
5:08
had just demonstrated an interest in vaccine
5:10
policy, was all of a sudden getting pushed tons
5:13
of anti vaccine content on Facebook.
5:15
It was recommending groups to me, it was recommending
5:17
pages to me. And the realization
5:20
that what was really not
5:22
a very large number of people was actually
5:25
having an extremely disproportionate
5:27
share
5:29
of voice in the conversation.
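For the data-minded listener, here is a minimal Python sketch of the share-of-voice measurement being described, assuming tweets come in as (author, text) pairs; the account names, counts, and top-three cutoff are hypothetical illustrations, not the actual analysis:

```python
from collections import Counter

def share_of_voice(tweets, top_k):
    """Fraction of all tweets in a hashtag produced by the top_k most
    active accounts. A high value with a small top_k is the
    'manufactured consensus' pattern: a few accounts, always on."""
    counts = Counter(author for author, _text in tweets)
    total = sum(counts.values())
    top = sum(n for _author, n in counts.most_common(top_k))
    return top / total if total else 0.0

# Hypothetical hashtag: three accounts produce 90 of 100 tweets.
tweets = ([("acct_a", "no on SB277")] * 40 + [("acct_b", "...")] * 30
          + [("acct_c", "...")] * 20 + [(f"acct_{i}", "...") for i in range(10)])
print(f"{share_of_voice(tweets, top_k=3):.0%}")  # 90%
```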
5:32
And did you have to create tools in order
5:34
to begin identifying who the people were
5:36
who were doing the pushing? They were actually not very
5:38
quiet about that. There was a page called Tweet for
5:41
Vaccine Freedom, and it was actually you know,
5:43
when out of state activists were asking how can
5:45
we help because the entire anti vaccine movement across
5:47
the entire United States decided to fight
5:50
this battle. They would say like, oh, you should
5:52
just create an account and say you're from California.
5:54
So it was actually really transparent. It wasn't that hard
5:56
to figure out that there were people pretending
5:58
to be from California. There were also Twitter
6:00
accounts that all of a sudden had a vested interest
6:03
in California politics. But if you read their past
6:05
material, which again is also public, it was really
6:07
right out there that that's not where they were actually from. Kind
6:09
of very interesting because it was extremely
6:12
small, local and niche, you know,
6:14
we thought in California. But as the law
6:16
began to get more press coverage and things
6:18
there would actually be like comments section
6:20
battles, you know, the same kinds of stuff that we saw
6:22
later with you know, entities that go and are
6:25
like almost incentivized to leave
6:27
comments on news articles to shape a perception
6:30
about the topic. And actually we on the
6:32
pro-vax side thought, oh boy, I guess we're gonna
6:34
you know, we need to do this too. Are we
6:36
really engaging in this? Okay, they commented
6:38
over here, so you know, we have to go comment over here. They
6:40
have bots that are on twenty four
6:43
seven? Do we need bots that are on twenty four seven?
6:45
It just became this interesting firsthand experience
6:47
of what it was going to be like to try to run
6:49
any kind of influence or policy campaign in the
6:51
future. I found it really troubling, especially when
6:54
the algorithms just began recommending anti
6:56
vaccine content to me constantly. And
6:58
how did the Facebook and Twitter and other algorithms
7:01
work? Were they affected by this campaign?
7:03
Were the search engines affected by it? I don't think
7:06
the search engines as much because it was
7:08
you know, Google is a little bit more sophisticated
7:10
about this stuff than the social platforms. Social
7:13
platforms, the number one signal that they're using is
7:15
popularity, and so you either
7:17
if you have real popularity or if you can feign
7:19
popularity. The number of likes and
7:21
engagements and comments and things is what decides,
7:24
you know, this is how Facebook was deciding
7:26
what gets pushed into your feed. Instagram
7:28
is like that too.
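As a toy illustration of ranking on popularity alone, here is a Python sketch; the Post fields and the weights are invented for the example and are not any platform's real formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post):
    # The score never asks whether the content is true,
    # only how strongly people reacted to it.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts):
    # Feigned popularity ranks exactly like the real thing.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("CDC summary linking to studies", likes=50, comments=5, shares=10),
    Post("Emotional first-person video", likes=400, comments=120, shares=90),
])
print(feed[0].text)  # the emotionally resonant post is served first
```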
7:31
Google has a framework now, it has a proper
7:33
name. It's called Your Money or Your Life, and it
7:35
says that on topics related to
7:38
health issues and financial issues they have to have
7:40
a higher standard of care to make sure that it
7:42
isn't just what's popular that's rising to the top.
7:45
But even with that policy, one of
7:47
the things that we consistently see is anti
7:49
vaccine activists producing content
7:52
at a higher rate and also candidly
7:54
more engaging content, you know, much more emotionally
7:56
resonant versus more authoritative
7:59
medical quote unquote establishment doctors,
8:01
the CDC, the National Institutes of Health,
8:03
their content's not as emotionally resonant. It doesn't
8:05
get as much engagement, and so the search
8:08
engines and the algorithms aren't amplifying
8:10
the more factual, reality
8:13
based content, and instead what we're getting is this conspiratorial
8:15
stuff. Walk me through what it means
8:17
to be emotionally resonant online. Is
8:19
this something that's being done deliberately? Do the people who
8:21
are creating it understand that that's what it is?
8:24
Or is it that the human brain is just tuned
8:26
to conspiracies and prefers them. Some
8:28
of it is platform culture, some of it
8:31
is the way that the algorithm understands
8:33
engagement. So there's the human element which gets
8:36
kind of the initial signal, shows that
8:38
there's a lot of people who are watching this, and the algorithm
8:40
recognizes that a lot of people are watching it and
8:42
then begins the amplification process. But
8:44
the first step is actually the content, of course, and
8:47
in that particular area, it's usually
8:49
a first person, you know, looking directly
8:51
at a camera, speaking about a personal
8:54
experience they've had, recounting a
8:56
narrative or an interesting story. So
8:58
a lot of times with the anti vaccine movement, that's a
9:00
person claiming that their child has autism
9:02
and telling a story, you know, usually a
9:04
very sad story about their child's
9:06
health, and so it is engaging.
9:09
It is much more resonant versus
9:11
seeing kind of an infomercial about how
9:13
vaccines don't cause autism because thousands
9:16
and thousands and thousands of studies have said that they do
9:18
not. I know that you were part of the Senate
9:20
commission that looked through material
9:22
that Facebook handed over to Congress
9:25
which was originally created by the IRA, the Russian
9:28
Internet Research Agency, in order to influence the
9:30
US elections. When you looked over that material,
9:33
did it seem to use those same tactics?
9:35
Can you see a relationship between the way the
9:37
Russian influence campaign worked and the anti
9:39
vax campaigns? The Russian content was distinct,
9:41
in that this was a foreign intelligence
9:44
service of a foreign entity that was trying
9:46
to pretend to be American. So it
9:48
was far more duplicitous than anything that we've
9:50
seen related to domestic
9:53
activists pushing for a cause, really. But
9:55
what was happening there was again they were taking these
9:57
extremely big topics
9:59
things like who is America for? What
10:01
does it mean to be an American? How do we feel about immigration?
10:04
How do we feel about gay rights? How do we feel about
10:06
police brutality? They
10:09
were creating these pages, and each page was
10:11
designed for a very particular type of
10:14
person, So they were really creating these tribes,
10:16
again relying on the sort of first person experience,
10:18
first person concerns and fears,
10:21
and putting out content that was again very
10:23
much focused on achieving an emotional response.
10:26
So for the black community, the content took
10:28
the form of constant references
10:30
to police violence mixed in
10:32
with narratives of pride, and so it was
10:34
really very much designed to evoke
10:37
cultural pride and then also a
10:39
sense of deep harm. And on the right
10:42
leaning pages, it was really concerned about
10:44
what America is and who it's for, and so a
10:46
lot of photos of things like homeless veterans.
10:49
This is a very real problem
10:51
that we have in this country, and they were using the images
10:53
of homeless veterans to say, why are
10:55
we allowing in all of these outsiders when we can't take
10:58
care of our own. This is how propaganda is most
11:00
effective. It's when it has some degree of truth to
11:02
it, and it spins it just enough
11:04
that it doesn't necessarily trigger the
11:07
part of the brain that says, hey, this is false. Instead
11:09
the person relies on
11:11
the emotional reaction to it, and that's how
11:13
they begin to develop
11:15
a sustained relationship with the page and sustained
11:17
engagement with that type of content. You know, as
11:19
I'm listening to you, I'm wondering whether different
11:22
kinds of propagandists understand now that
11:24
they need to tailor messages to particular audiences.
11:26
Is it the case that some of the solutions to
11:29
this are also going to involve thinking
11:31
differently about different audiences or offering
11:33
different kinds of counter messaging or counter
11:36
strategies to different audiences. Yeah. Absolutely,
11:38
And this is something that, you know, a third area
11:40
I worked in was countering violent extremism. Briefly,
11:43
this was ISIS. The idea that we would kick ISIS off
11:45
the platforms was sort of a stretch at the time.
11:47
There were a lot of people who were very concerned
11:50
at the idea that we would delete terrorists' accounts,
11:52
and so a lot of the focus instead was on counter
11:54
messaging. How do we reach these audiences that
11:56
are receptive to ISIS propaganda and present
11:59
counter narratives to them. Who is the authentic
12:01
voice for the counter narrative? It's definitely not
12:03
the United States State Department. So who
12:06
is it and what are the ways in which we can
12:08
come together to think about ways to counter
12:10
message to try to present people
12:13
with an alternate, also emotionally
12:15
resonant narrative instead of just saying it's
12:17
a bad idea to be a terrorist because you're going
12:19
to go to jail or you're going to die. A lot of the tribal
12:22
deep affinity ties is, what is my place
12:25
in society? This is something that comes up with conspiracy
12:27
theorists also, they're looking for
12:29
answers, they're looking for an explanation. What
12:32
you get hooked into oftentimes is what
12:34
is most visible to you, what's most prevalent
12:37
in your space at that moment.
12:39
Now that we're spending so much more of our time online,
12:42
things like ad targeting and
12:45
participation in Facebook groups where
12:47
you're kind of declaring a particular alignment
12:50
mean that bad actors who want
12:52
to target you with certain types of propaganda can find
12:55
you very easily. And can we reapply
12:57
some of that thinking back, for example, to
13:00
the anti vax problem. Can we think
13:02
about counter messaging there? Can we think about
13:05
how to reach people using counter
13:07
emotional stories? Yes, absolutely, that
13:09
is. That's something that groups
13:11
like Voices for Vaccines are trying to work
13:13
on. The group that was formerly called Every Child
13:16
by Two, it now goes by Vaccinate Your Family, is trying
13:18
to do that. We have to get out of statistics
13:20
and get into storytelling. That's
13:23
one of the key takeaways of how the
13:25
information ecosystem has evolved. If
13:27
you look at it even just from a design perspective, one
13:29
of the things I always get at is that the
13:32
subject of the narrative is
13:34
interesting when you're thinking about how to counter message
13:36
to a particular group of people. But when you think
13:38
about this as a problem writ large, a
13:40
lot of it comes down to the algorithms and
13:42
the design. And so memes
13:45
in particular are getting more and more important
13:47
in our lives. And that's because the design
13:49
of the platform itself is privileging
13:51
this large you know, this large square image
13:54
or this piece of video, this short video
13:56
clip. So what can you convey in
13:59
the construct of that design?
14:01
As people are scrolling by, they see your
14:03
message and it immediately sticks. The fact
14:06
that the algorithm will continue to serve up
14:08
types of content that you've engaged with in the past
14:10
means that if you do engage with anti vaccine
14:13
content, you're likely to see more of it. The
14:15
challenge is algorithms
14:18
that don't know what they're pushing because they
14:20
have no actual awareness of what the underlying content
14:22
is. So they treat something that's potentially
14:24
radical, they treat something that's potentially blatantly
14:28
false the exact same way that they would treat something
14:30
that's accurate or uplifting. They don't
14:32
actually know. They just know that this content
14:35
drives engagement, and so they continue to show
14:37
it to people. We see all this disinformation
14:40
online. We you know, we hear about it.
14:42
You know, we can sometimes see it in our Google searches.
14:44
But does it really matter? I mean, for example, in
14:47
the anti vax campaign, has this really
14:49
affected anything, Does it make any difference? Or is
14:51
this just stuff that exists somewhere in the ether and
14:53
if we ignore it, it will go away. Let me give
14:56
you two quick examples on that. First of all, with the anti
14:58
vaccine movement, Yes, it absolutely has an impact.
15:00
It really creates a lot of fear and hesitancy,
15:03
and that translates very directly into
15:05
vaccination rates declining in the communities
15:08
that are seeing it. And so this is
15:10
something that in California. The reason I
15:12
started looking at it was because immunization
15:15
rates in California communities had declined, and when
15:17
I was trying to find a preschool for my son,
15:20
I was actually looking at these rates, and there
15:22
are certain schools in California with thirty percent
15:24
immunization rates, which is terrifying.
15:27
That's like South Sudan. The
15:29
reason that we passed the law in California was because we
15:31
wound up with the Disneyland measles outbreak, where
15:33
two hundred and something people got sick and I believe
15:36
a quarter had to be hospitalized. So
15:38
this was a very real outcome of
15:41
that kind of misinformation becoming so
15:43
pervasive to people, creating
15:45
that very real fear and then leading to an
15:48
outbreak. In the case of Russia,
15:50
a lot of people think about this as just related
15:52
to the election. No, what they were doing was they were also
15:54
creating real world events. So they were
15:57
sponsoring protests, and one
15:59
of the things that they sponsored was an incident in
16:01
Texas where they had two competing
16:03
protests on the same day at the same time. So
16:06
from Saint Petersburg, a troll created
16:08
a Facebook event saying that people
16:10
with Texas Pride had to come and protest outside
16:13
of an Islamic center to defend their way of life.
16:16
They also posted an event calling
16:18
on members of the Islamic Center to come out and
16:20
defend the Islamic faith. So they
16:22
sponsored two protests on the same day
16:24
at the same time, across the street from each other.
16:27
And you can go on YouTube and you can see the video
16:29
footage from that day of people showing
16:31
up with kind of anti Islamic
16:33
material on one side of the street and then people on the
16:35
other side of the street screaming back at them, and police
16:38
getting involved in breaking up altercations.
16:41
So this is an example of very
16:43
real world tension erupting
16:45
as a result of online disinformation.
16:48
When you first started looking at this problem, did
16:51
people believe it was a problem? Opinion
16:53
polls all showed people were in favor of vaccinations.
16:56
You saw something quite different online. How
16:58
did you convince people that this was something they need to take
17:00
seriously. In the California case in
17:02
particular, I sent what
17:04
I was seeing, you know, kind of quantifiable
17:07
evidence to the legislators
17:09
and said, the people who
17:11
are screaming at you online, they're threatening you online,
17:14
you're seeing all of this anger and rage
17:16
in the hashtags, I don't think that
17:18
these are your constituents, where it's pretty
17:21
clear that these are not all even
17:24
Californians. So when you make your decision,
17:27
I would lean into the polling
17:29
numbers and the communications with your actual constituents.
17:31
I don't think that we can treat the online conversation
17:34
as representative of the reality of
17:36
the population of California. So
17:38
in that particular case, it
17:40
was just really kind of appealing directly to
17:42
the legislators with the evidence. The challenge is,
17:45
it really does bump up against things like freedom
17:47
of expression, right? So you have a right to
17:49
have an anti vaccine opinion. Of course you have
17:51
a right to put the content online. The
17:54
challenge was, at the time, the
17:56
recommendation engine, the trending algorithm,
17:59
the ways in which Twitter
18:01
and Facebook were amplifying information
18:04
was very different, far
18:07
more primitive then than it is even now two
18:09
and a half years later, after those of us who
18:11
work on this challenge have kind of been constantly beating
18:13
the drum with example after example
18:15
of how this is manifesting
18:18
in the real world. How do we preserve
18:20
freedom of expression while at the same time recognizing
18:23
that the platform is
18:25
pushing this point of view at people, people
18:27
aren't even, you know. Me in particular, I'm not going
18:30
out there typing in anti vaccine search
18:32
terms. The recommendation engine is just
18:34
pushing it to me because it's seeing that I've expressed
18:36
an interest in vaccines in general.
18:39
As part of working on this law, I suppose
18:41
there's also a question of, okay, you have a right
18:43
to write something, but then do you have a right
18:46
to artificially amplify it using
18:48
bots and search engine optimization?
18:51
So everyone has the right to freedom
18:53
of expression online. The secondary
18:56
piece of that, though, is do you have a right to free
18:58
reach, a right to algorithmic
19:00
amplification? Nobody has that right.
19:02
That is not part of the First Amendment, That is not
19:05
part of our cultural experience
19:07
of what it means to have a right to expression. You
19:09
have never had the right to free
19:12
mass dissemination as well. That's
19:14
the piece where as people begin
19:16
to talk about how the platform should think about
19:18
these things. One of the ways that we can continue to
19:20
maximize freedom of expression is
19:23
to allow people to speak, but also for
19:25
the algorithm to perhaps not
19:27
begin to take that kind of
19:30
sensationalist content and
19:32
proactively broadcast
19:35
it out to massive quantities
19:37
of people because it checks
19:39
the boxes of being sensational and emotional.
19:42
And do you think that it's going to be enough
19:45
to discuss this with the platforms, for people
19:47
like you, who are respected
19:49
on these issues, to talk about it with people at Facebook
19:52
and Google, or is this something that we're
19:54
going to need to regulate or have Congress
19:56
step in on. I don't think you can
19:58
have Congress regulate what
20:00
algorithms amplify. I think that that would probably
20:02
be a little bit too close to Congress
20:05
making decisions on speech. A
20:07
lot of the dissemination that comes about
20:09
through inauthentic amplification
20:12
through things like bots and stuff, can be addressed
20:14
without even knowing what the narrative is actually about.
20:16
So you're not looking for content related
20:18
to a particular topic. You're looking for particular
20:21
dissemination patterns. So you're looking
20:23
at the authenticity of the accounts. Are these
20:25
real accounts? Were they all created yesterday?
20:28
Are they bots? Are they majority automated?
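A content-blind check of the kind she describes might look like this Python sketch; the fields and thresholds are illustrative assumptions, not Twitter's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how long ago the account was created
    posts_per_day: float    # sustained posting rate
    automated_share: float  # estimated fraction of posts sent by tools

def looks_inauthentic(acct):
    """Judge the dissemination pattern, never the narrative: this check
    runs the same whether the account posts about vaccines or baseball."""
    created_yesterday = acct.age_days < 2
    always_on = acct.posts_per_day > 150   # beyond plausible human output
    mostly_automated = acct.automated_share > 0.8
    return created_yesterday or always_on or mostly_automated

print(looks_inauthentic(Account(1, 400.0, 0.95)))  # True
```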
20:30
Twitter does now have a designation
20:32
of what it considers a low quality account, and
20:35
ways in which it surfaces
20:37
top tweets, as opposed to just
20:40
the straight up reverse chronological order where
20:42
you see every single tweet about a particular
20:44
hashtag, giving the user some control.
20:46
So people who do want to go see that kind
20:48
of fire hose of every single tweet
20:51
coming through about a topic can go and do that. But
20:54
the majority of people who just want to get
20:56
the kind of quick takeaways are seeing more
20:59
kind of higher caliber content. And
21:01
that sounds like you do believe algorithms
21:03
could eventually identify
21:06
quality content, that they could encompass
21:08
a notion of better or more comprehensive
21:12
or more fact based. Remember
21:14
the olden days of the Internet
21:16
where you had email spam, right? We did build
21:19
classifiers, we did build tools
21:21
to think about how to ensure that crap
21:24
wasn't flooding people's inboxes, that there
21:27
wasn't this mass cognitive load
21:29
every time you opened your inbox of having to sift
21:31
through all of the garbage to find the communications
21:34
from people that you actually wanted, or find
21:36
the things that were really intended for you.
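For comparison, here is a minimal sketch of the kind of spam classifier she is recalling, using scikit-learn on toy data; a real filter trains on millions of messages plus the user "mark as spam" feedback mentioned below:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labeled examples; real systems learn from vastly more data.
messages = [
    "cheap pills click now limited offer",
    "you have won a prize claim immediately",
    "lunch tomorrow at noon?",
    "draft report attached, comments welcome",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()  # bag-of-words features
classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

print(classifier.predict(vectorizer.transform(["click now to claim your prize"])))
# -> ['spam']
```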
21:38
We need to put some things in place here to
21:41
improve the system, to improve the user
21:43
experience, to improve the outcomes. There
21:45
were things like recognizing that certain
21:47
domains were just not reputable
21:50
domains that most people wanted in their inbox, and
21:52
so some of this was user filtering, you know, feedback.
21:55
You remember you used to kind of mark things as spam much more
21:57
regularly then. It didn't mean that there were never
21:59
false positives. There are still false positives today,
22:01
But it was, how can we create the greatest
22:05
value while at the same time recognizing
22:07
that there are extremely
22:10
coordinated, deliberate groups of people
22:12
working to manipulate and evade that detection,
22:14
in the case of spam, to wind up in your inbox
22:17
and in the case of social algorithmic manipulation
22:19
to wind up in your feed. People who are concerned
22:21
about this problem, people who worry about online
22:23
disinformation, people who worry they're getting bad
22:25
information. Is there anything they
22:28
can do about it? Is there something that ordinary
22:30
people can do to fight back? Stopping
22:32
this spread a lot of the time is something where
22:35
individuals really have a lot of power. It's
22:38
been for a long time, you know, kind of a cultural
22:40
norm where if you see someone sharing something
22:42
a little bit nutty to just kind of ignore it, just
22:44
let it go by. I don't think that
22:46
that's necessarily really helped us. I've tried
22:48
lately, like, commenting gently
22:51
or sending a private message saying hey, I don't
22:53
think this is necessarily the most reputable source.
22:55
Maybe you know, here's a fact check
22:57
on that. There's a lot of evidence that says that interventions
23:00
from people, you know, even
23:03
in the kind of counter radicalization space,
23:05
that really engagement with friends
23:07
and family and people where there's a base of
23:09
trust and an assumption of goodwill, works. People
23:12
are receptive to rethinking maybe
23:15
why they chose to share something. And then when
23:17
you see something that makes you feel highly emotional
23:19
and you go to click the share button or the retweet button
23:21
just because you know, you feel outraged
23:24
and you need to tell the world, that's where I think
23:26
taking the extra second to stop and do the fact
23:28
check, to stop and see is this a reputable
23:31
domain or a reputable account, it really
23:33
makes a difference. So friends, don't let
23:35
friends share disinformation, and
23:38
always check whose account you're
23:40
retweeting or reposting before you do it.
23:42
Yeah, I mean, I've made this mistake a couple of times.
23:44
I remember I once retweeted something
23:46
and a friend of mine pinged me and said, hey,
23:48
I think you should go read the rest of that account's
23:51
tweets. And I went and looked,
23:53
and I'm ninety nine percent sure it was a bot,
23:55
and I was like, oh, I fell
23:57
for it, you know.
24:00
But that's the kind of thing where it's far
24:03
better to tell somebody. I mean, you can just unretweet,
24:05
you just click the button again. And it's more challenging
24:07
if you are a person with a very, very large following,
24:09
and it usually helps to send a follow up or something
24:11
and say, hey, I inadvertently spread some misinformation.
24:14
It's come to my attention that this is not real, or
24:17
here's the actual story.
24:19
It's so wild to hear about these
24:21
disinformation campaigns online right
24:24
now because here in the US there have been
24:26
eight hundred and eighteen measles cases reported
24:29
in this year's outbreak. It's already
24:31
the largest since nineteen ninety four.
24:33
People are in hospital here because
24:36
of misinformation, and New
24:38
York is seeing the fastest spread, particularly
24:41
in Orthodox Jewish communities. The
24:43
thing is that in that specific case,
24:45
the misinformation about vaccines was
24:47
not spread online, but through physical
24:50
handbooks and phone conferences. The
24:52
internet amplifies what we already
24:55
do, so changing algorithms
24:57
and policies and our own behavior online,
25:00
it's all going to take a lot of changing,
25:02
and I'm really grateful to people like
25:04
Renee who work towards that
25:06
every day.
25:11
Solvable is a collaboration between Pushkin
25:14
Industries and the Rockefeller Foundation,
25:16
with production by Chalk and Blade. Pushkin's
25:19
executive producer is Mia Lobel. Engineering
25:22
by Jason Gambrell and the fine folks
25:24
at GSI Studios. Original
25:27
music composed by Pascal Wyse. Special
25:30
thanks to Maggie Taylor, Heather Fain,
25:32
Julia Barton, Carly Migliori, Sherif
25:35
Vincent, Jacob Weisberg, and Malcolm
25:37
Gladwell. You can learn more about solving
25:40
today's biggest problems at Rockefeller
25:42
Foundation dot org slash solvable.
25:45
I'm Maeve Higgins. Now go solve
25:48
it.