Why Casey Left Substack + Elon’s Drug Use + A.I. Antibiotic Discovery

Released Friday, 12th January 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:01

You know, it's like every man of a certain age, all

0:03

he wants to do is just sit on the couch and

0:05

watch the History Channel and read about how the Americans fought

0:08

World War II. And like watching my own dad do this

0:10

growing up, I thought, oh, that's like, that's so nice, you

0:12

know, how nice that that story ended

0:14

so positively. And like, you fast forward to 2024, it's

0:16

like, is World War II over? No, we're still fighting

0:18

it out in the comments section. Yep.

0:26

I'm Kevin Roose, a tech columnist at The

0:28

New York Times. I'm Casey Newton from Platformer.

0:30

And this is Hard Fork. This week, how

0:32

Substack's Nazi problem left me with a hard

0:35

question of my own. Then, The Wall Street

0:37

Journal's Kirsten Grind joins us to talk about

0:39

her reporting on Elon Musk's drug

0:41

use. And finally, how

0:43

AI helped researchers discover a

0:45

new class of antibiotics. It's

0:48

a drug show. So, Casey, on

0:50

this show, we sometimes answer

0:54

hard questions from our

0:56

listeners about ethical dilemmas

1:08

that they are dealing with in their

1:10

lives. And this week,

1:12

you have actually been dealing with your own

1:15

very hard question. It's about

1:17

Substack and Nazis and the

1:19

future of Platformer, your newsletter

1:21

business. And so I want

1:23

to talk about that this week. And I want

1:25

to just make clear that we're not just talking

1:27

about this, because this is a thing that has

1:30

happened to you. But I think it really is

1:32

sort of a microcosm for some of these larger

1:34

debates that we cover on this show about free

1:36

speech and content moderation and the role of internet

1:39

platforms in policing the public square. Well, I want

1:41

to talk about Elon Musk's drug use, Kevin, but

1:43

I'm open to any questions you have about me.

1:47

OK. And I

1:49

think we should just also acknowledge that

1:51

this is going to feel a little

1:53

weird, because even though we are both

1:55

journalists who have covered content moderation by

1:57

big tech platforms and weighed in

2:00

on various controversies involving people like

2:02

Alex Jones or whatever, this

2:04

is an instance in which you are actually

2:07

directly involved in the controversy, not

2:09

only because you are covering it and have become

2:11

sort of part of the news story, but because

2:13

you run a business on Substack and are sort

2:15

of directly financially involved in this story. Yes, that's

2:17

very true. And listeners should just sort of like

2:19

keep that in mind as we are talking about

2:22

this. It's like, yeah, this is a weird case

2:24

where we're actually talking about my business. I think

2:26

if you sort of bracket out all of the

2:28

business implications for me, though, there is still

2:30

a really important story to be told about

2:33

how the modern internet should work and what

2:35

people should be allowed to say there. Yeah,

2:37

and it goes to one of the questions

2:39

that we have asked on this show before,

2:41

which is like, when do you know that

2:43

it is time to leave a platform? And

2:46

how do you draw your own

2:48

personal line for what is and

2:50

isn't acceptable on the internet? Absolutely.

2:53

So let's just talk about the

2:56

nuts and bolts of what has happened. Over

2:58

the last few weeks, Substack has

3:00

been fielding lots of criticism about

3:02

its content moderation policies and

3:05

specifically how it treats

3:07

pro-Nazi content. And we'll

3:09

get into what we mean by pro-Nazi content

3:11

in a minute, but this is sort of

3:14

something that has flared up for them in

3:16

the past and that flared up again recently

3:18

and came to a head just the other

3:20

day when Substack announced that it would take

3:22

down some newsletters that promoted Nazi ideas and

3:25

ideology, but wouldn't make changes

3:27

to its broader content moderation policy,

3:29

which it has described as decentralized

3:32

and hands off. And

3:34

Casey, you as a Substack partner, I guess,

3:36

you publish on Substack and have, since you

3:38

started your newsletter, you have ended up in

3:40

the middle of this story. So I think

3:42

we should start with just like, what is

3:45

your news? What do you have to announce

3:47

today? So I've

3:49

decided this week that Platformer is going

3:51

to move off of Substack. So by

3:53

next week, we will have a new

3:56

website and we'll no longer be part

3:58

of that network. Yeah. Can

4:00

you just tell the story of

4:02

your relationship with Substack, maybe starting

4:04

from when it started, when

4:07

you started Platformer? Yeah. So, you know, Substack

4:09

has been around since 2017. And

4:12

it was actually around the time that Substack

4:15

started that I started to write another newsletter

4:17

for The Verge on another platform because Substack

4:19

didn't exist yet. But in 2020, I left

4:21

to start Platformer, my own email newsletter. And

4:24

Substack was the best tool to do that at the

4:26

time. They made it very, very simple to do so.

4:29

It was very fast. And so since

4:31

October 2020, I've been there. If

4:33

you're not familiar with Substack, the basic idea is that

4:35

while anyone can set up a free newsletter and send

4:38

it out to as many people as you can get

4:40

to subscribe to you, if you want to build a

4:42

business, you just connect your Substack to a Stripe account,

4:44

and then you can sell subscriptions. So, you know, in

4:46

my case, people pay 10 bucks a month or 100

4:48

bucks a year. And in exchange for that, you

4:51

get three newsletters every week. So that's

4:53

how the business works. And

4:55

for some number of people, it

4:58

has been amazing. It's built these

5:00

incredible businesses. And I

5:02

think beyond that, Substack has also created

5:04

a really large cultural footprint, right? It's

5:07

not just journalists like me who are on there.

5:09

There are a lot of artists, you know, like

5:11

the novelist George Saunders is on there. You know,

5:14

some of my favorite cooking writers are on the

5:16

platform. Some of my favorite musicians, like Patti Smith,

5:18

are on the platform. So at

5:20

a time when the media industry is contracting and

5:23

it often feels really scary and bad, Substack

5:25

has been this real bright spot where if you

5:27

go there, chances are you'll find something there that

5:29

is like really cool, that's really well suited to

5:31

your interest. Yeah, I'd say it's been one of

5:33

the biggest changes in the media ecosystem over the

5:36

past few years. It's just this

5:38

sort of transition where a lot of journalists

5:40

and writers and creators of all sorts have

5:42

decided to sort of hang out their own

5:44

shingle, set up a Substack and start charging

5:46

people directly for it rather than sort of

5:49

joining some larger media company. Yeah, that's right.

5:51

And my understanding is that Substack takes like

5:53

a 10% cut of everything

5:55

that you and other Substack creators who

5:57

have paid newsletters charge customers on the

5:59

platform. That's right. So

6:01

I think it's fair to say that

6:03

in addition to musicians and

6:06

cooking writers and journalists, Substack has

6:08

also become home to sort of

6:11

this like alternative media network, these

6:13

people who are sort of dissatisfied

6:15

or disgruntled with sort of mainstream

6:18

media, people like Glenn Greenwald and

6:20

Matt Taibbi and Bari Weiss were

6:23

sort of like these dissenters

6:25

from sort of media orthodoxy and sort

6:27

of more right wing folks have set

6:30

up on Substack and for them it

6:32

sort of seemed like this was the way

6:34

to sort of avoid censorship, right? If you

6:37

were on Substack rather than working at a

6:39

big media institution, no one could tell you

6:41

what to publish and not publish. And for

6:43

them that was part of the appeal. Yeah.

6:45

And this was something that the founders of

6:47

Substack really touted when people would ask them

6:49

about it. You know, they very much leaned

6:52

into the idea that there was too much

6:54

orthodoxy in the mainstream media and that Substack

6:56

would be a place where people could come

6:58

and say just about anything and that Substack

7:00

was always going to take a really laissez-faire

7:02

approach to moderating that stuff. Yeah. And it

7:05

always seemed like that was

7:07

a premonition to me. Like when Substack's

7:09

executives in the early days started coming

7:11

out and saying like, we're not going

7:13

to ban basically anything because my understanding

7:16

is like they have a content moderation

7:18

policy, but it's very, as you said,

7:20

laissez-faire. It's very permissive when it comes

7:22

to what they will and won't host

7:25

on their platform. And for

7:27

me, that was always, it reminded me of

7:29

the so-called Nazi bar problem, which Mike Masnick,

7:31

the tech blogger, has written a lot about.

7:33

And it's basically this sort of perennial thorny

7:36

issue that online platforms face

7:38

when they're tasked with dealing

7:40

with Nazis or people

7:43

with other hateful speech. And

7:45

the Nazi bar problem is sort of this

7:48

maybe apocryphal story about like

7:50

a bar owner who, you know, sees a

7:52

guy come in wearing like Nazi regalia and

7:54

just says like, no, you got to get

7:56

out. You're out. No questions

7:58

asked. And someone else at the bar is

8:00

like, why'd you do that? The guy was just trying to

8:02

have a drink. Why would you kick him out? And he

8:04

basically explains, look, it starts with one

8:07

Nazi. And then that Nazi gets

8:09

allowed to have a drink at the bar and

8:11

then he brings his friends back. And

8:14

pretty soon you're a Nazi bar and they

8:16

are so entrenched and established that it becomes

8:18

very hard to get rid of them. And

8:21

the point is not that you shouldn't be allowed to run one.

8:23

Like, this is America, you can have a Nazi bar.

8:25

It's just, you shouldn't be confused

8:27

about what you are if you're letting in

8:30

Nazis. Yeah. And you know, I appreciate that

8:32

analogy. I think it has its limits in

8:34

this case, because on the internet, there just

8:36

are Nazis in most places that will show

8:38

up on any platform. And I think just

8:41

because there are three or four of them

8:43

doesn't mean that you're running a Nazi bar,

8:45

it just means that you have a place

8:47

on the internet, you know, at the same

8:50

time, this did eventually snowball for reasons I'm

8:52

sure we'll get into. Yeah, so let's talk

8:54

about that. So when did the permissive content

8:56

moderation policies of Substack become an issue

8:58

for you? Well, so in November, a journalist

9:01

named Jonathan M. Katz, who was also my

9:03

college classmate, hi, Jonathan, he wrote a story

9:05

for The Atlantic saying Substack has a

9:07

Nazi problem. And he went through, he said

9:09

he'd identified 16 cases where

9:11

he felt like there were Nazis on

9:13

the platform and suggested Substack ought to

9:15

do something about it. Substack was pretty

9:18

quiet during that period. But then a

9:20

group of 247 Substack writers sent this

9:22

open letter asking Substack, are you planning

9:24

to do anything about this? Does this content

9:27

violate your policies? And then

9:29

on December 21, Hamish McKenzie,

9:31

one of the co-founders of

9:33

Substack, wrote a blog post in which

9:35

he said that at Substack, they

9:37

don't like Nazis. But given that they

9:40

exist, Substack did not believe that

9:42

censorship was the best approach. And it

9:44

did not believe that demonetizing them, you

9:46

know, preventing them from selling subscriptions was

9:48

the best approach. And so Substack

9:50

said, you know, if anyone could be

9:53

found to be directly inciting violence, they

9:55

would be removed, but nothing else would

9:57

be removed. I read that

9:59

as a statement essentially declaring that

10:01

Nazis were welcome to sell subscriptions on Substack,

10:03

and that's when I thought, okay, I actually

10:05

have a problem now. Now, I want to

10:08

talk more about your decision, but first, can

10:10

we clarify what we mean by Nazi content?

10:12

Yeah. Because that word, I think, gets thrown

10:14

around a lot and has been

10:16

used to mean lots of different things. So

10:19

what was the content that Jonathan M.

10:21

Katz identified that you took issue with?

10:23

Well, so this was the exact question

10:25

that I had, because the Atlantic article

10:27

doesn't actually link to any of the

10:29

Nazi blogs for good reasons. You don't

10:31

want to give undue amplification to extremist

10:33

material. But I thought, well, if I'm

10:35

going to have to make a decision about my business,

10:37

then the first thing I need to do as a

10:39

reporter is to examine the problem myself, right? So

10:42

I reached out to a number of journalists and

10:44

researchers and asked them to share with me what

10:46

they viewed as the worst of the worst

10:48

content on Substack, and I wound up with

10:50

about 40 different publications that had been flagged

10:52

to me. And together with my colleagues at

10:55

Platformer, Zoë Schiffer and Lindsey Choo, we spent

10:57

a few days just going through those. And

11:00

what I was looking for when I was

11:02

planning to try to flag some things for

11:04

Substack were just literal 1930s Nazis. I

11:07

was looking for people who were

11:09

praising Hitler, who were using Nazi

11:11

iconography like the swastika, who were

11:13

talking about the virtues of German

11:15

national socialism. And I

11:17

decided to myself that if I find any of

11:19

that to send to Substack, that's all I'm going

11:21

to send them, because they made it very clear

11:23

that they're not going to do anything about right

11:25

wing extremism generally. But I do want to know

11:27

what they have to say about the literal Nazis.

11:29

Right. The sort of clear cut,

11:32

like you are wearing a swastika or you are

11:34

declaring your affinity for Adolf

11:36

Hitler. Like, that's sort of the bar

11:38

that you were looking for. Yeah, that's right. And

11:40

what did you find? Yeah. So

11:43

of all of those, we found six

11:45

things that had not yet been removed by the time

11:48

that we submitted them, that we thought sort of

11:50

clearly met the definition of pro-Nazi content. We

11:53

submitted this to Substack and were just waiting.

11:55

And at this point, there was a little

11:57

bit of drama, Kevin, which is I

11:59

had never intended platform

14:00

in the United States that says we are a welcome

14:02

place for Nazis to monetize. Like this was an extremely

14:04

unusual position for anyone to take. So I was just

14:06

hoping that Substack would come along and say what all

14:08

the other platforms say. And they

14:10

kind of didn't. They said that essentially,

14:13

thank you for raising these

14:15

things. We removed five of

14:17

the six. And if

14:19

other people flag similar content to us,

14:21

we will review it on a case

14:23

by case basis. That's what they said,

14:26

which was a very long way

14:28

of saying we're not changing our policies. And

14:30

I think can be fairly interpreted to

14:33

say there will be some content

14:36

that is widely viewed as praising Nazis

14:38

that Substack for whatever reason is not

14:40

going to remove. They did not offer

14:42

me any kind of additional clarity on

14:44

that point. And so that was kind

14:46

of a disappointing thing. And so I write this news

14:48

story that sort of like they've taken down some material.

14:51

I didn't want to get into like exactly how many

14:53

blogs. This sort of came back

14:55

to bite me, because eventually the whole thing

14:57

did get out. And so now

14:59

there are sort of like two sides who are

15:01

mad at me, you have this sort of, like

15:04

the free speech brigade, that's like, you're

15:06

making a mountain out of a molehill. Oh, you found

15:09

five Nazi blogs. What's the big deal, right? And

15:11

then on the sort of more liberal side that

15:13

wants to see stronger action, they say wait, like

15:15

after this whole thing, Substack's only taking down five

15:17

Nazi blogs, they didn't even take down the sixth

15:19

Nazi blog, they're not even saying they're going to

15:21

change their policy. And so

15:23

the situation just became like even

15:26

more polarized. And again, like, this

15:29

whole thing for me did not start as like,

15:31

I'm going to have the last word about the

15:33

quality of content on Substack, it started in the

15:35

spirit of inquiry of like, well, if I

15:37

find some Nazi blogs, will this company take

15:39

them down? Because to me, that's the first

15:41

step to decide whether to do anything else.

15:43

So I just want to say that because

15:45

it's really unfortunate to me that there's been

15:47

so much focus on the specific number of

15:49

Nazi blogs I reported, when again, we found

15:51

dozens of blogs with some like really disturbing

15:53

material that it's now clear will just be

15:55

up there forever. Yeah, I was

15:57

looking through some of these. Well,

18:01

you know, if it's if they're using like

18:03

open source software and they find some sort

18:05

of web provider that will accept them I

18:08

think the answer is like basically yes, right if we're

18:10

gonna have an internet that is open to all then

18:12

yes Nazis should be able to send out an email

18:14

newsletter. Okay, that's like that's just something that's gonna happen

18:18

I think where it starts to get more

18:20

complicated is when you

18:22

have built recommendation algorithms

18:24

and other tools that surface this

18:26

content to other people who were not

18:28

looking for it, and help these

18:31

folks build audiences. And it might be helpful

18:33

to talk about how Substack has evolved

18:35

over the past couple of years. Right, because

18:38

when it started it was just kind

18:40

of dumb email infrastructure. You sign up

18:42

for your account you start sending out

18:44

your emails, and the emails

18:47

never come anywhere near what I'm doing at Platformer,

18:49

right? It's your own thing. You're just using their

18:51

infrastructure. At that point, I probably don't make a

18:53

fuss about it because again, this is just kind

18:55

of the cost of doing business on the internet.

18:58

But then Substack starts to do a few things, like

19:00

they'll start sending you a personalized digest based on

19:02

the stuff that you're reading, that says you might

19:04

want to read these other publications, right? They

19:07

build this social network called Notes, where anyone

19:09

could publish anything to. It looks a lot

19:11

like Twitter. And so if you're a Nazi

19:13

and you want to get some attention, you can just start putting

19:15

stuff right in that feed and if I don't

19:17

block it I might see it and so

19:19

now there's a chance that my posts on

19:21

Platformer are showing right up next to Nazi

19:23

posts. Well, that doesn't feel good. But also

19:25

because these things are getting all this algorithmic

19:27

amplification, it means that their audiences can grow.

19:29

It means they can make a

19:31

lot more money than they might otherwise

19:33

and all of a sudden the platform

19:35

is in the position of being a sort

19:38

of unwitting assistant, fundraiser, and growth hacker

19:40

to people who I believe are very

19:42

dangerous. So to me, that

19:44

was the kind of threshold that this crossed,

19:46

where I thought I have to have an

19:48

opinion about this, right? So this is a

19:50

more nuanced argument than I think some people

19:52

like to portray it as which is like

19:54

There's this free speech brigade that wants like,

19:56

you know, every social platform and website

19:58

to be open to any kind of speech,

20:01

no matter how offensive or potentially

20:03

harmful. And then there are these

20:05

kind of internet hall monitors who

20:07

walk around websites like flagging stuff

20:10

that they find objectionable and saying,

20:12

like, you have to take this

20:14

down. Like, what you're saying is

20:16

there actually are some layers of

20:18

the internet that maybe shouldn't be

20:20

censored, but that when you start

20:22

moving into more recommending content, using

20:24

algorithms to filter and rank content

20:26

for people, showing content to people

20:28

who maybe didn't go looking for

20:30

it, that's where you start to take on

20:32

more responsibility for moderating what's on your service.

20:34

Is that what I'm hearing you say? That's

20:37

right, Kevin. And this is a story we

20:39

know so well. What was your last podcast

20:41

about? Rabbit Hole? It's about this exact same

20:44

phenomenon on YouTube, right? It's not about like

20:46

Nazis and direct monetization, but it is about

20:48

people who are discovering stuff via an algorithm

20:50

on YouTube that is potentially drawing them into

20:53

an ideology that could radicalize them and lead

20:55

to some kind of harm. Right? And when

20:57

I think about the past decade on the

20:59

internet, I think about some of

21:01

the harmful characters who have appeared,

21:03

folks like Alex Jones, folks like

21:06

the QAnon movement. When these

21:08

started, these were just individual posts

21:10

on web pages, posts here on

21:12

a social network, but they were

21:14

able to harness the power of those

21:16

recommendation algorithms to grow large audiences.

21:18

And in the case of Alex

21:20

Jones, really enact real harm against real

21:23

people in ways that platforms just

21:25

kind of took too long to

21:27

catch up to. So as

21:29

I sat with this problem, I just

21:31

kept thinking, I know what

21:33

happens next. I know what happens when

21:36

you build this plumbing into

21:38

your little infrastructure company, you

21:40

help things grow that might

21:42

not otherwise grow. And

21:45

if you're someone like me and you don't want

21:47

that to happen, you know, my only real alternative

21:49

is to just sit back and wait for it

21:51

to happen and then say, well, now that it's

21:53

happened, I can leave, you know, and that doesn't

21:55

feel like a very satisfying solution. Yeah, I mean,

21:57

it's been very bizarre to watch this because as

21:59

you eliminating

24:00

that kind of thinking was

24:02

essentially like their number one priority

24:05

was just like making sure that

24:07

didn't happen. So they really want

24:09

to be a home for the

24:11

absolute maximum amount of speech. And

24:13

that's how it was communicated to me. Now, you know,

24:15

my criticism of that would be, while

24:17

I do believe that the founders are sincere,

24:19

it is also true that that is the

24:21

absolute cheapest way to run a platform. When

24:24

you say almost anything goes, that means you

24:26

don't have to hire content moderators. It means

24:28

you don't have to hire policy people. It

24:30

means you don't have to do these tedious

24:32

ongoing reviews of what is on the platform

24:34

and whether your policies should change. Companies like

24:37

Meta spend conservatively hundreds

24:39

of millions of dollars on this stuff, right?

24:41

And Substack is, you know, relatively small. And

24:43

I can imagine why it wouldn't want to

24:45

do that. But at the end of the

24:47

day, I will give the founders credit for

24:49

the fact that when we had these discussions,

24:51

they wanted to talk about them on the

24:53

principles. Yeah. And did

24:55

Substack actually give you a comment

24:58

about their decision to take

25:00

down these five Nazi blogs?

25:02

Yes. So I

25:05

will read most of the statement that Substack

25:07

sent to me. They said, if and when

25:09

we become aware of other content that violates

25:11

our guidelines, we will take appropriate action. Relatedly,

25:13

we've heard your feedback about Substack's content moderation

25:16

approach, and we understand your concerns and those

25:18

of some other writers on the platform. We

25:21

sincerely regret how this controversy has affected writers

25:23

on Substack. We appreciate the input from everyone.

25:25

Writers are the backbone of Substack. And

25:27

we take this feedback very seriously. We're actively

25:29

working on more reporting tools that can be

25:31

used to flag content that potentially violates our

25:33

guidelines. And we will continue working on tools

25:35

for user moderation so Substack users can set

25:37

and refine the terms of their own experience

25:39

on the platform. Okay. So

25:41

that's Substack's position. And we've now heard

25:43

your position. I want to just raise

25:45

a few potential objections to this. If

25:47

people are thinking, well, Casey seems to

25:49

be doing this rashly or blowing things

25:51

out of proportion. I want

25:54

to just give you the chance to respond. So

25:56

I'll put on my free speech warrior hat and

25:58

just run through some of them. So, Objection 1

26:01

would be the sort of Substack

26:03

argument, right? That censoring content on

26:05

the internet, it doesn't make extremists

26:07

go away. We know this because

26:09

platforms have been, you know, demonetizing

26:11

and deplatforming extremist content for years

26:13

now, and these people have not

26:16

gone away. In fact, some

26:18

people would argue that this kind of censorship

26:20

actually makes extremist views worse.

26:22

It gives them a cause

26:24

that they can claim they are being

26:26

martyred for and sort of rallies support

26:28

that way, the way that people like Alex

26:30

Jones have for years. So what's your response to that? Sure.

26:33

So that may be true, but getting rid

26:35

of extremism was not the job that I

26:37

gave Substack. I'm not asking Substack to make racism

26:39

or extremism go away. I'm asking them for

26:41

a place where I can run my business

26:44

and not have my posts appear next to

26:46

Nazis, right? Because that's not good for

26:48

my business. You know, dozens of people have

26:50

canceled their paid subscriptions to Platformer, and many of

26:52

them said, hey, I really like what you do,

26:54

but I can't justify giving you money when I

26:57

know it's going to build Nazi monetization infrastructure. So

26:59

it is absolutely the case that there is

27:01

a demand side for extremism, and that has

27:03

to be solved at a society level, but

27:05

platforms can also do their part to not

27:07

help those movements grow and make money. Right.

27:10

I wonder what you would say to this other objection that

27:12

Substack has raised. They pointed out

27:14

that the number of subscribers that

27:16

these pro-Nazi Substacks had

27:19

was tiny. It was like, you

27:21

know, these were not big, thriving

27:23

publications. These were publications with a

27:25

handful of subscribers. None of

27:27

them had made any money. And so basically,

27:30

this is a tempest in a teapot. Yeah.

27:32

And here is a case where I think everyone is just

27:34

going to have to make up for themselves how big they

27:37

think the problem has to be before they take action. I

27:39

think some people will decide it's going to need to be

27:41

a lot worse than this before it rises to the

27:43

level of my attention. For me, I

27:45

just thought I have seen this movie before. I

27:47

know where this is going. And

27:49

if this is eventually going to lead

27:52

me to have to leave the platform, I would rather just

27:54

do it now and move on with my life. Got it. Then

27:56

there's the objection that I actually raised with you when you

27:58

told me last week that you were considering moving

28:00

off Substack, I think my response was,

28:03

well, aren't there Nazis on every platform?

28:05

Like you're on Instagram, you're on threads,

28:07

you're on Facebook, you used to be

28:09

on X, you use YouTube. All these

28:12

platforms, if you looked hard enough, would

28:14

have some number of Nazis on

28:16

them using them to spread their message and

28:18

potentially even to make money. So

28:21

basically, there are no pure platforms.

28:24

Absolutely true. And I think what I would

28:26

say is that the platforms that I'm on

28:29

and I'm spending time on, while it's true

28:31

that there are bad things on there, for

28:33

the most part, these platforms at least have

28:35

policies against it. When stuff is flagged to

28:37

them, they do remove it. And

28:40

they don't have to be dragged kicking and

28:42

screaming into doing that. They don't ask their

28:44

own user base to be volunteer content moderators

28:46

for them all the time. So

28:48

to me, that was kind of the bare minimum that

28:50

I was looking for is like, well, is there at

28:52

least an affirmative policy that Nazis are banned here? And

28:54

then maybe we can figure something out. I

28:56

would also say that like, it's different when you're

28:59

running a business on the platform, right? Because

29:02

I am not just having to act on my

29:04

own principles here. I have employees who have opinions

29:06

and while we were sort of aligned on this, we

29:08

had to talk it through together. And

29:10

I have customers who are very

29:12

principled. And I should say, because I write a

29:15

lot about content moderation, a lot of my customers

29:17

work in trust and safety and content moderation. They

29:19

have heard the arguments that Substack is making before,

29:21

like potentially at their own platforms when their own

29:23

platforms were younger and more naive. And this

29:25

stuff just doesn't fly with them. Okay? They

29:28

just do not accept the arguments that are

29:30

being given to them. So I am in

29:32

the unusual position of having a very savvy

29:34

audience that is very sensitive to this subject.

29:36

And that just made me have to take

29:39

it more seriously than like, can I have

29:41

an Instagram account? I

29:43

want to bring up this last objection

29:45

that I've heard, which was made by

29:47

among other people, Ben Thompson, who writes

29:49

the Stratechery newsletter. It seemed

29:51

like he basically had two qualms with what

29:53

you've done here. One of them was sort

29:55

of this, you know, slippery slope argument about

29:58

once you start arguing that platforms should take

30:00

down Nazi content, if they do that,

30:03

then you sort of start asking them

30:05

to take down other sorts of objectionable

30:07

content, maybe stuff that's like opposed to

30:09

vaccines or questioning the origins of the

30:11

coronavirus pandemic, things that are

30:14

just sort of controversial and not literal

30:16

Nazis and that there's sort of a

30:18

slippery slope effect there. But he also

30:20

raised the objection that you were essentially

30:23

aiming your guns at the wrong

30:25

part of the stack, as it

30:27

were. And in particular,

30:29

he singled out this line in a story that

30:31

you wrote where you said that you were going to

30:33

be contacting Stripe about this Substack

30:36

content moderation issue. Now Stripe is

30:38

a payments processing company. And so

30:40

when people sign up to subscribe

30:42

to a Substack publication, Stripe is

30:44

the company that actually takes their

30:46

credit card information and charges that

30:48

credit card. And they also have

30:50

content moderation guidelines for the types

30:52

of payments that they will process.

30:55

And so basically, Ben Thompson said

30:57

by going to Stripe, you were

30:59

essentially escalating this beyond a

31:01

level of reasonable disagreement. Yeah, I think it

31:03

is a fair criticism. I do think that

31:06

if Substack had said, like, yes, we

31:08

are going to affirmatively say that Nazis are

31:10

banned on this platform, and we will proactively

31:12

remove them, then absolutely, the next week, there

31:14

would be calls to do sort of like

31:16

the next level up. Fortunately, for me, it

31:18

never got that far, because they never made

31:21

the affirmative argument that Nazis were banned. And

31:23

I could just sort of, you know, walk

31:25

away and not not have to wonder about

31:27

that anymore. The thing about the slippery slope

31:29

argument, Kevin, is that it presupposes that

31:31

if we just drew one hard line,

31:33

we could stop talking about the boundaries

31:35

of speech forever. That's not how society

31:37

works. We are constantly renegotiating the boundaries

31:40

for speech, of social norms, of mores.

31:42

These change all the time. That's what

31:44

society is. It is an ongoing conversation

31:46

about how to be. So the idea

31:48

that you could just write one rule

31:50

and keep it forever is a libertarian

31:52

fantasy. Now, on the Stripe side of

31:54

it all, I will admit that was

31:56

me being a little edgy. But

32:00

like, here's the thing. I

32:02

also approached Stripe in a spirit of journalistic

32:04

inquiry. And the inquiry was this. Stripe has

32:06

a policy that says that you're not allowed

32:08

to use their services to like fundraise for

32:11

violent causes. Nazism was one of the

32:13

most famous violent causes of all time. And

32:15

so I thought it was worth sending them an

32:17

email to say, hey, one

32:19

of your customers is saying that Nazis are

32:21

free to set up shop and monetize here.

32:24

Is that consistent with your policies? I sent

32:26

that email. I did not get a response.

32:28

So if Stripe had said, yes, that's fine,

32:30

I would not have led a parade down Main Street like

32:32

calling for the end of Stripe. But I did think it

32:34

was worth sending an email just to ask if it was

32:36

true. Yeah. And I imagine that some

32:38

people will hear about your decision to leave Substack and

32:40

say, well, what more does Casey

32:42

want? Like they took down the Nazi blogs that

32:44

he flagged to them. What's the

32:46

problem here? Sure. So what I was looking

32:49

for was a couple of things. You know, one was

32:51

just to say, like, you know, Nazis

32:53

are not allowed. We will proactively

32:55

monitor for this content. Here's how

32:57

we're going to define what we

32:59

view as Nazis. Right. And

33:01

then I also wanted them to look at

33:03

that recommendations infrastructure, because again, that's really the difference here.

33:05

You know, we will be on a new platform next

33:07

week at Platformer, and there will probably

33:10

be Nazis who are using that infrastructure to send

33:12

emails. The difference is going to be it is

33:14

not attached to a social network that was built

33:16

by our provider. Right.

33:18

There will not be these digest

33:21

emails like recommending the Nazi blogs

33:23

along with mine. Right. So

33:25

once they get their hands around that, they

33:27

would need to come in and they would say that certain

33:30

publications are eligible for promotion and

33:32

recommendation and others are not. YouTube

33:35

did this. Meta has done this. Again, a lot of

33:37

this is just very standard stuff that happens at every

33:39

other platform that I write about. It is Substack that

33:41

is the outlier here. So that's what I wanted to

33:43

see. And this became very clear to me over the

33:45

past couple of weeks that nothing like that is coming.

33:48

Yeah. Casey, I want to say

33:50

something sincere to you, which

33:52

I know is terrifying. I'm

33:55

really proud of you for

33:57

making this stand. People can disagree

34:00

about the finer points of online content moderation

34:02

and what should and shouldn't be allowed. But

34:05

at the end of the day, this is a judgment call

34:07

that you made. And it

34:09

just comes down to who do you want to do

34:11

business with? What kinds of businesses

34:13

do you want to enrich with

34:15

your labor? And

34:17

given that we're in this sort

34:20

of age of rising anti-Semitism and

34:23

increased polarization of all kinds, like, do you

34:25

really want to be giving

34:27

10% of your revenue to a company that

34:31

will not say, like, we don't want to

34:33

do business with literal actual 1930s Nazis? And

34:39

I just, I think about decisions a

34:41

lot through the framework of like, when

34:43

I'm old, and I'm explaining to my

34:45

grandkids decisions that I made earlier in

34:47

my life, like, will I be proud

34:49

of having made the decision that I

34:51

made? Or will I be ashamed of

34:53

the decision that I made and wish

34:55

that I could redo it? And

34:58

like, whatever happens with Platformer on your

35:00

new provider, I just think that this

35:02

is going to be a decision that you feel good

35:04

about. And so I'm proud to

35:06

be your friend and your co-host. And I think we

35:08

should also just declare once and for all that

35:11

the Hard Fork podcast is anti-Nazi. This is

35:13

a Nazi-free zone. And there is really

35:15

no wiggle room here. Okay. Stop

35:18

sending us your pitches, Nazis, because you're not coming here.

35:21

Woo! When

35:25

we come back, Elon Musk is officially

35:27

on drugs. We'll talk to the reporter

35:29

who nailed that story down right

35:32

after the break. Hey,

35:54

it's Anna Martin from the New York Times, and

35:56

I'm here to tell you about something for New

35:58

York Times news subscribers. And honestly, if you're

36:01

a podcast fan, you're going to want this. It's

36:03

an app called New York

36:05

Times Audio where you can get the

36:08

latest dispatch. It's 10 a.m. in Kyiv.

36:10

It's a really loud night. Perfect your

36:12

technique. A splash of soy sauce and then

36:14

a lot of red pepper flakes. Or contemplate

36:16

the future. A computer program is passing the

36:19

bar exam. And we are over here pretending

36:21

not to be amazed by that. It has

36:23

exclusive shows from The New York Times.

36:25

It's The Headlines. Storytelling from Serial Productions

36:28

and This American Life. Fact to a

36:30

fiasco involving a village, a marauding Visigoth, and

36:32

some oil. Sports from The Athletic. And

36:34

those big moments, he puts the team

36:37

on his back. And narrated articles from

36:39

The Times and beyond. In recent years,

36:41

the unexpected sounds of ice have periodically

36:43

gone viral. New York Times Audio. Download

36:46

it now at nytimes.com/audio.

36:49

Casey,

36:55

let's talk about drugs. Let's talk about drugs.

36:58

Now, we have gotten at least

37:00

one note from a listener who says, you

37:02

guys seem to talk about drugs a lot

37:04

on the show. You make a lot of

37:06

jokes about magic mushrooms in particular. What's going

37:08

on with that? I mean, I can

37:11

only say that it is relevant to understanding

37:13

Silicon Valley. This will be my answer. It's

37:15

true. We are tech reporters. We cover an

37:17

industry and a culture out here in San

37:19

Francisco, in the Bay Area. And, you know,

37:22

drugs are part of that culture. So we

37:24

are going to talk about drugs in this segment.

37:26

If you are a parent who doesn't want your kid

37:28

to listen to something like that, you know,

37:30

this one might not be for you. Yeah. So

37:34

this week, the Wall Street Journal reported

37:36

that Elon Musk, the world's wealthiest person,

37:38

has, quote, used LSD,

37:41

cocaine, ecstasy and psychedelic mushrooms

37:43

often at private parties around

37:45

the world where attendees sign non-disclosure

37:47

agreements or give up their

37:49

phones to enter. This was

37:51

a very juicy story that ran

37:53

the other day. And

37:56

it included some actual on the record

37:58

details about Elon Musk's

38:00

drug use over the years, the journal reported

38:02

that in 2018, Elon Musk took multiple

38:06

tabs of acid at a party he

38:08

hosted in Los Angeles. The

38:11

story also reported that Elon Musk had

38:13

taken magic mushrooms at an event in

38:15

Mexico and took ketamine

38:17

recreationally in 2021 with his

38:19

brother in Miami at

38:21

a house party. And

38:24

that, you know, this has not gone

38:26

down well with members of his orbit

38:28

who are increasingly worried about his erratic

38:30

and potentially drug-fueled behavior. That's right. And what

38:33

I would say is what made this story

38:35

interesting to us, Kevin, is not that Elon

38:37

Musk has been spotted doing drugs a handful

38:40

of times over the years. It is rather

38:42

that people close to him seem both

38:44

very concerned about his behavior in some cases,

38:46

and were willing to talk about it on

38:48

the record with the Wall Street Journal, which

38:51

I just think suggests that this has

38:53

become a serious issue and given Musk's power

38:55

and influence in the world is really worth

38:57

trying to understand. Yeah. So when

38:59

we talk about drugs in this conversation, we

39:02

are not really talking about all the

39:04

classes of illegal drugs. We're talking specifically

39:06

about the ones that are

39:08

popular with people in the

39:10

tech industry, things like psychedelics,

39:12

things like MDMA, things like

39:14

ketamine. These are

39:16

the drugs that Elon Musk has

39:18

been spotted using, according to the

39:21

journal, and that I would say

39:23

are some of the most popular drugs out here

39:25

in the Bay Area among people who work in

39:27

the tech industry. Yeah, I think that's right. So

39:29

today to talk about Elon Musk's drug use and

39:31

this kind of wider phenomenon of drug use

39:33

in Silicon Valley, we've invited

39:35

Kirsten Grind. Kirsten is an

39:37

enterprise reporter at the Wall Street Journal. She

39:39

reports on tech companies and their executives, and

39:42

she was one of the people, along with

39:44

Emily Glazer, her colleague, who broke this story

39:46

about Elon Musk's drug use. She's

39:49

written a lot about this topic over the years,

39:51

and I'm really excited to talk to her about

39:53

her reporting. Kirsten

40:03

Grind, welcome to Hard Fork. Thanks so much

40:05

for having me. Hey, Kirsten. So

40:07

you've been reporting on drug

40:10

use in Silicon Valley for a

40:13

while now, and I want to ask you about

40:15

some of the stories you've reported on. But

40:18

I want to zoom in on this

40:20

recent story about Elon Musk and his

40:22

drug use, which I would say

40:25

has been something that a lot of reporters have been

40:27

kind of gossiping about, you know,

40:29

off the record at the bar for many

40:31

months now. Yes. I want

40:33

to say, Kirsten, I was so jealous of this

40:35

story because I have heard so many whispers about

40:37

this stuff. And I have tried to get some

40:40

of this stuff on the record and absolutely failed.

40:42

So when I saw your story come out,

40:44

it was appointment television for me. I

40:46

dropped everything and like inhaled it all

40:48

in one go. Oh, thank you, guys.

40:50

That's so kind. So very

40:52

impressive story. What was the start

40:54

of this? When did you get interested in this

40:56

particular story? Yeah. So at the

40:58

journal, I'm an enterprise reporter. So what

41:00

that basically means for nonjournalists is

41:03

I jump around from topic to

41:05

topic. And so as you pointed

41:07

out, for some reason, the last

41:09

few years, I've kind of been

41:11

on this whole billionaire tech drug

41:13

scene beat, I guess,

41:15

which is really funny because I'm

41:17

the most boring person ever, actually.

41:20

But OK, so now I'm an expert in like

41:22

ketamine and cocaine and all of this. And

41:26

we had done some stories on Elon. We

41:29

reported last year on his Texas plans and

41:31

some of this other stuff. But

41:34

we had been hearing same as

41:36

you guys about this for a

41:38

long time. And so

41:40

we just wanted to definitively know, is

41:42

this man using drugs? I

41:46

think it's a very important question for someone

41:49

of his stature and his

41:51

power and his companies

41:54

with billions of dollars in government contracts.

41:56

So we just kind of went down

41:58

that road. We'll get into

42:00

all the implications of it, but maybe just to start

42:02

with, can you give us an overview of Elon Musk's

42:04

drug use? So he

42:07

is a pretty heavy, I

42:09

would say recreational drug user.

42:13

We have him on, you know, having

42:15

tried a bunch of different drugs. Ketamine,

42:17

we had reported last year. That's sort

42:19

of the popular, by the way, as

42:21

you guys probably know, Silicon Valley drug

42:23

of the moment. Cocaine,

42:26

ecstasy, LSD. And

42:30

a lot of it, one of the

42:32

reasons why it's been kind of kept

42:34

under wraps in a way is because

42:36

a lot of this is happening at

42:38

these private, high-end parties where you often

42:40

have to sign an NDA. A

42:43

lot of the people I spoke to, the

42:45

parties were in different countries. It's

42:47

not just like going out here

42:49

in the valley. And

42:52

so it's been pretty, you

42:54

know, we have examples going back

42:56

years of this. And

42:58

I would talk about my time at these parties,

43:01

but I did unfortunately sign the NDA. So I

43:03

will have to kind of pass on that. But,

43:05

you know, knowing what I

43:07

know about, you know, how these kinds of

43:09

stories come together and the standards at the

43:11

Wall Street Journal as well as the New

43:13

York Times, like it's hard to get a

43:16

story like this into print because you can't

43:18

just go on one sort of anonymous source

43:20

or two anonymous sources. So you actually have

43:22

talked to multiple people who have firsthand accounts

43:24

of witnessing Elon Musk doing these drugs. Is

43:26

that correct? Oh my gosh. We

43:29

had to have people who have witnessed

43:31

his drug use. And I

43:34

cannot even begin to go into the

43:36

rigor of our process for getting a

43:39

story like this into the newspaper to

43:42

give you an idea. Like it was much easier

43:44

writing my last book than to

43:47

publish this one story. Yeah.

43:49

Yeah. I mean, for good

43:52

reason, right? Like we don't just, as

43:54

you guys know, like these newspapers don't

43:56

just willy-nilly publish something like this. I

43:58

mean, I think Elon's followers would

44:00

like to think that, but no, we

44:02

spend a lot of time making sure

44:05

everything's right, we have the sourcing, we

44:07

have it all lined up for sure.

44:09

Right, and so we actually asked months

44:12

ago, Walter Isaacson, who's Elon Musk's biographer,

44:14

about Elon's alleged drug use, and

44:16

what he responded to us was that, you

44:18

know, he knew that Elon Musk

44:20

had been taking ketamine for

44:23

depression, and ketamine, like some of

44:25

these other drugs, is commonly used

44:27

for sort of mental health treatment,

44:29

and so he seemed to think that this was

44:31

all above board, but what you reported was not

44:34

that, so just walk us through some of the

44:36

specifics around the drug use that you have reported

44:38

on with Elon Musk. So it's a lot of

44:40

partying, right? One thing that's

44:43

interesting with Elon, but with a

44:45

lot of these guys, they're using

44:47

psychedelics at parties, but

44:49

also for quote-unquote treatment, right? But

44:51

they're treating themselves, so that's kind

44:54

of the problem, right? So even

44:56

at parties, I think,

44:58

and I'm not saying this specific to Elon

45:00

necessarily, but I think in their heads they're

45:02

saying, oh, if I take

45:04

mushrooms, that's actually healthier than having like

45:07

five shots, or doing a line

45:09

of cocaine, and so with

45:11

Elon, you

45:14

know, he's used a bunch of different drugs,

45:16

but this ketamine is one that a lot

45:18

of people are using at the moment. And

45:21

I think that, you know, what

45:23

you said really speaks to the

45:25

cultural change around drug use in

45:27

Silicon Valley, and of course there

45:29

has basically always been drugs

45:32

in Silicon Valley. LSD is a huge

45:34

part of the story of Steve Jobs

45:36

and Apple, and yet

45:38

at the same time, you know, like Kevin

45:40

and I are around the same age. We

45:42

grew up in a D.A.R.E. America, you know,

45:45

a D.A.R.E. to resist drugs, just say no,

45:47

sort of all of that, and

45:49

it sort of seemed like the only accepted

45:52

drug to do was like alcohol, right? But

45:54

you fast forward to today, you can order

45:56

ketamine off Instagram in a sort of mental

45:58

health context, you can walk down Post

46:00

Street and buy mushrooms from

46:02

a, quote, church. Right? So

46:05

the vibe here is just very different than

46:07

I think. And if you have not spent

46:10

time in San Francisco recently, it

46:12

might shock you just how common some of this

46:14

stuff is. Absolutely. You know, I've obviously spent a

46:16

lot of time thinking about this and it really

46:18

goes back to that, I think,

46:21

whole Silicon Valley mentality where it's sort

46:23

of like, I can disrupt,

46:26

you know, myself. I can take charge of

46:28

my own healthcare. And so I think

46:30

in their heads, they're thinking,

46:32

ketamine can be used legitimately

46:35

for mental health treatments. And

46:37

some of these other drugs can be

46:39

used in a good way too, but

46:41

I'm gonna do it myself. Like, nevermind

46:43

like that doctor that's administering it. Right?

46:46

So that's where they're at. Right. Yeah.

46:49

I'll confess that when I first saw some of

46:51

the headlines that you and other Journal reporters were

46:53

pointing out about sort of drug use in Silicon

46:55

Valley and about Elon Musk, actually, my

46:57

first thought was sort of like, why do I

47:00

care about this? Like, you know,

47:02

these are adults, they're making decisions

47:04

about their own, you know, substance

47:06

use. Some of these drugs,

47:08

as you mentioned, like do have sort of

47:10

demonstrated effects for mental health and are, you

47:12

know, maybe legalized for use

47:14

in the coming years. And

47:17

we live in Silicon Valley where drugs

47:19

have been around forever. So why is

47:21

this such a problem for Elon Musk

47:23

in particular? A hundred percent. And you

47:25

can imagine we had like many conversations

47:27

about this too, right? The

47:29

reason it's very important for Elon

47:31

in particular isn't just because he's

47:34

the world's richest person or the

47:36

world's most powerful person or because

47:38

he runs Twitter or whatever, X,

47:40

sorry. It's

47:42

because in particular, he's running

47:44

six companies, one of them,

47:47

the publicly traded Tesla, where

47:49

he's supposed to be reporting

47:51

to investors, but especially SpaceX,

47:54

which has billions of dollars

47:56

in government contracts. And those

47:58

government contracts aren't like, yeah,

48:00

if you do a little cocaine on the

48:02

weekend, it's all good. Those are

48:04

like you cannot do illegal

48:07

substances ever. Like,

48:09

we're not talking about lines at your

48:11

desk. It's like you cannot go to

48:13

Burning Man and do Ecstasy

48:15

or whatever you're doing there, right?

48:18

They're extremely strict and you

48:20

know, as you guys I'm sure well

48:22

know, when all he did was smoke

48:24

a little marijuana five years ago on

48:26

Joe Rogan, taxpayers footed

48:28

the bill for a 5-million-dollar

48:30

NASA review of his drug

48:32

use. And that was just like, I think,

48:35

one puff. And you're just like, how

48:37

did it cost the taxpayers 5 million dollars to just

48:39

watch one episode of the Joe Rogan thing? Yeah,

48:44

so they had to do a

48:46

whole drug review of SpaceX employees.

48:49

SpaceX employees were subjected to

48:52

random drug tests for some period of

48:54

time. There's not a lot we

48:57

know about like what went into

48:59

that review. But Elon talked about

49:01

this after on some podcasts,

49:03

about how he had not apparently

49:05

realized the effect this would have on

49:08

SpaceX. So they had to do this

49:10

whole review, and taxpayers

49:12

basically footed the bill. Wow.

49:15

Congrats, taxpayers. So

49:17

I think that's an important point about the difference between

49:19

sort of Elon Musk doing this and any sort of

49:22

other, you know, private citizen who

49:24

does not have government contracts or a

49:26

security clearance. But you

49:28

also reported that his drug use has caused

49:30

concern among the board members of his company.

49:33

So tell us about that. That's right. So

49:35

that's the second important point. This is not

49:37

the Journal like judging Elon Musk. This

49:39

is us saying, listen, it has

49:41

gotten to the point where even leaders

49:44

at his two largest companies, including

49:46

some directors, the directors who aren't

49:48

the ones doing the drugs along

49:51

with him, are also

49:53

concerned about this right? And so

49:55

that's really the whole point

49:57

of the story like they've had

49:59

years of concern, they don't know how

50:01

to handle it. When they're really concerned, they

50:03

kind of go over to Kimball Musk, his

50:05

brother, and are sort of like, hey,

50:08

like, is he getting enough sleep? You

50:10

know, they don't even say drug use

50:12

because that can end up in board

50:14

meeting minutes, right? I thought this was

50:16

so interesting, the way that even those

50:18

who are placed in positions to have

50:21

some measure of authority to serve as

50:23

a check on him, they are terrified

50:25

of just saying what is plain to

50:27

everyone in his orbit, which is just

50:29

that he is on drugs a lot.

50:31

Absolutely. I mean, not to excuse them,

50:33

but you can see this really

50:35

challenging position they're in because first

50:38

of all, we need to say

50:40

Tesla and SpaceX are doing great.

50:42

Tesla especially performing super well. So

50:44

first of all, it's

50:46

like, what, who are we to

50:48

complain about that? Like,

50:51

I think even Elon himself said something

50:53

like this on Twitter after like, if

50:55

I'm using drugs, like I should keep

50:57

doing it. I'm doing a great job.

50:59

That's, that's exactly the position they're

51:02

in. Yeah. And I think there's

51:04

sort of never been a problem with a drug

51:06

user who's sort of in a good run and

51:08

decides to just do more drugs. That's never ended

51:10

badly for anyone who's ever done drugs. Right.

51:14

I wanted to ask about one director

51:16

in particular, Linda Johnson Rice,

51:18

who you report stepped down from Tesla and

51:20

decided not to stand for reelection in

51:23

2019 in part because of the drug

51:25

use. I wonder if you could share

51:27

any more of that story. And also

51:29

I have to say reading that that

51:31

does not seem like somebody who was

51:34

worried that he was doing ketamine every once in a while

51:36

at a party. No, I mean, again, that

51:38

was different. The ketamine issue

51:40

is a lot more recent. I would say

51:42

it's been in recent months that people are

51:44

much more worried about ketamine, and that

51:46

kind of tracks as well with like the

51:49

ketamine popularity growing generally. We

51:51

should say that ketamine is

51:53

legal. It's legal, but

51:55

it's like a gray area legal. And

51:57

I also want to be clear that

52:00

most people are doing this through dealers

52:02

or you know randomly through Instagram. Yeah

52:04

an online pill mill type of thing

52:06

Yeah, but back to your question I

52:09

mean there's not a ton more I

52:11

can share about what's in the story

52:13

But I would say for a Tesla

52:15

director to step down before their three-year

52:17

term, right, at two years,

52:20

that's really saying something. Yeah, and this

52:22

is a woman who's very well respected

52:24

right, in the industry, as being

52:27

on many boards and corporations,

52:30

etc. Yeah, this doesn't sound like somebody who just heard

52:32

that Elon had done mushrooms a couple times at a

52:34

party and said, I'm out of here. Yeah. Yeah, and

52:36

one of the things that is often said about Elon

52:39

Musk's drug use by people who are sort of gossiping

52:41

about it is that it's changing his behavior, that

52:43

it's part of the reason and the

52:45

explanation for why he's been so erratic in

52:48

the past few years and has made all these

52:50

controversial decisions about X, and just

52:52

sort of the personality that he's adopted,

52:55

that this can also be traced to

52:57

his drug use. And I

52:59

wonder what you think of that And

53:01

if there are any specific examples of

53:03

behavior that you've reported that has been

53:05

specifically linked to drug use. So I

53:08

have a lot in my head, you

53:10

know, and also from just knowing the

53:13

drug use situation, instances where

53:15

I've seen him where I think...

53:18

You know, maybe that doesn't matter, though.

53:21

Like in this story, one

53:23

point we really try to bring up is

53:25

this exact thing that you mentioned.

53:27

He's acting erratically. He's acting strangely.

53:30

Is that just Elon the genius,

53:32

the guy who said he

53:34

is autistic or

53:36

is he actually on something? And so

53:39

this is one reason we brought up

53:42

this example from 2017 where he's speaking

53:44

at SpaceX. And hilariously,

53:46

SpaceX has since released that video

53:49

and I would encourage anyone

53:51

to go look at it because in our

53:53

reporting, the executives were all

53:56

worried after that that he was

53:58

on drugs. Now, we don't know

54:00

if he was, and we say that in the

54:02

story. We do not know, right? But

54:05

they're like, is that drugs or

54:07

is that his erratic behavior? And this

54:09

is something that everyone around him

54:11

has struggled with for years. Yeah, this

54:13

is often a question I ask after

54:15

Casey says something stupid on the podcast.

54:20

Or is it the drugs? What exactly is in this tea? Now

54:24

Elon Musk and his camp have responded to

54:26

this story. His lawyer, Alex Spyro,

54:28

told you that parts of this story were

54:30

false, although he didn't specify what exactly was

54:33

false. He also said that

54:35

Elon Musk is, quote, regularly and randomly

54:37

drug tested at SpaceX and has never

54:39

failed a test. So I'm curious what

54:41

you make of that statement and what you know about

54:44

these drug tests, like what are they testing for? How

54:46

often do they have to take them? And

54:49

if it's true that he's never failed a drug test,

54:51

how do you square that with what's in your story?

54:54

So I would first of all say, as

54:56

you guys probably know as journalists, that's not

54:58

necessarily a denial. That's what we call a

55:00

non-denial denial. Okay. And

55:04

I think Matt Levine even pointed that

55:06

out, like in a hilarious way. But

55:09

a note about these drug tests. I wish

55:11

I could tell you more about them. They

55:13

are apparently extremely secretive. So we do

55:16

not know how often

55:18

he's tested, when, or even what

55:21

drugs are being

55:23

tested for. Generally, I've

55:25

learned that psychedelics aren't

55:27

usually in a test. I want

55:29

to be clear. I don't know if they're testing

55:31

Elon for psychedelics. That's

55:33

the point. We kind of don't know. Then I

55:36

think Elon came out after and said, I

55:38

was tested for three years. So

55:40

I don't know if that means he's

55:42

not been tested the last couple years,

55:44

three years since the Joe Rogan incident

55:47

in 2018. So there's

55:49

just a lot we don't know about these

55:51

drug tests. So reporters have

55:53

been trying to nail down this story about Elon Musk

55:55

and his drug use for years. You

55:58

were actually able to get people on the

56:00

record talking about it who have firsthand

56:02

encounters with his drug use. Why

56:05

do you think people are willing to open

56:07

up now? I'm so glad you asked that

56:09

question. It was, I have, through

56:11

this whole thing, often thought about

56:13

people's motivations because a lot of

56:15

the times people talk to reporters

56:17

because they're exposing something bad or,

56:20

you know, they're unhappy with how something's

56:22

going. But in this case, you're asking

56:24

people to describe the drug use of someone

56:26

who, a lot of the time, they admire,

56:29

and they want to be in

56:31

that crowd that's getting into that NDA

56:33

party and all of this. So I

56:36

would say that, you know, without going

56:38

too much into it, a lot of

56:40

the motivation here, well, some of the

56:42

motivation at least, is from people who

56:44

have concern, right? It's

56:47

not just people who, you

56:49

know, saw him one time at a party.

56:52

I mean, definitely I've talked to some of

56:54

those, but there's also just

56:56

a general concern out there. Not people who are

56:58

necessarily trying to get him in trouble. No, not

57:00

at all. People who are trying to maybe get

57:02

him help. That's right. And not

57:04

even just with Elon, but in this reporting in

57:07

general, I found that people

57:09

who are willing to talk about someone

57:11

else's drug use, especially someone in a

57:13

position of power, are doing it because

57:16

they're worried. Yeah, right. And

57:18

so just to take kind of the devil's

57:20

advocate position here. And

57:22

argue that the drugs are good? No. But,

57:25

you know, I've heard and I've seen since your

57:27

reporting came out, some people just saying like, well,

57:30

the proof is in the pudding, right? His

57:32

companies are doing great. Like he

57:34

has the best rockets. He has the best

57:36

selling car in the world. His behavior is

57:38

unimpeachable, a model of integrity and kindness. But

57:41

you know what I'm saying? Like,

57:43

if these drugs were really hurting him, wouldn't

57:45

it be showing up in the performance of

57:48

his companies? And if it's

57:50

not showing up in the performance of his companies, why

57:52

is it any of our business what he's doing in his

57:54

free time? Well, let's take SpaceX

57:56

out of this for a second, because it's

57:59

just a full violation of his SpaceX

58:02

contract. So let's just maybe look at

58:04

Tesla. I think it's

58:06

a great question because Tesla is performing

58:08

really well, right? And so I think

58:11

for directors or other

58:13

executives to reach that level

58:15

of kind of concern about

58:18

his behavior, that's what to look at

58:20

there. You know, they're not

58:22

bringing it up just because they think

58:24

he's had a bad day or something

58:27

like that. Also, like Tesla is

58:29

in part kind of a meme stock. Like,

58:31

yes, the car company itself is performing well

58:33

in the world. Yes. And part of that

58:35

is just because there's a huge fandom around

58:37

Elon Musk, who thinks he's a cool dude

58:39

and likes to see him do stuff. So

58:41

the fact that Elon Musk is on drugs

58:43

all the time, I could see how that

58:45

would make the stock price of Tesla go

58:47

up because it means that Tesla stockholders are

58:49

gonna say, Cool, bro. Yeah, I also wonder

58:51

what you think of the Matt Levine point

58:53

that he made in his newsletter this week,

58:55

which is that Elon Musk is

58:57

in some ways too big to fail a

58:59

drug test. Yeah, that was a great line.

59:02

But also like, you know, and he basically

59:04

says, Look, if you're NASA, or

59:06

you're in the Defense Department,

59:08

and you find out that Elon Musk has

59:10

done drugs, maybe he did drugs in front

59:12

of you, what are you gonna do? Like,

59:14

are you gonna, you know,

59:16

put your payload into orbit with someone else's

59:18

inferior rockets? And I thought that was

59:20

a really interesting point. Like, even if it

59:22

is true that he's doing all these drugs, and they're

59:24

getting, you know, in the way of his performance, and

59:27

directors of his companies are growing concerned about it,

59:29

like, what are we

59:31

supposed to do about it? Well, that

59:34

that is the thing. I mean, SpaceX

59:36

is so intertwined with the US government.

59:38

I mean, they are the space program,

59:40

right? So I mean, I don't have

59:42

any inside knowledge, but who

59:45

knows what they're going to

59:47

do, or if they can do anything.

59:50

And even on his boards, like

59:52

as we've reported, they've just kind of tiptoed

59:54

around it. So he could be

59:57

too big to fail a drug test. And we just do this all

59:59

the time, right? I mean, and this is the

1:00:01

troublesome thing about having somebody who is this rich

1:00:03

and powerful and it seems like there just is

1:00:05

no check on his power. Think about how many

1:00:07

times in the past he has done something, he

1:00:10

has broken some law, he's violated some SEC regulation,

1:00:12

and it just seems like everyone throws up their

1:00:14

hands and says, well, what are you going to

1:00:16

do? Like, we don't have any legal system to

1:00:19

take it on. He's a genius. Yeah. Yeah. He's

1:00:21

a genius. And also we have no legal protections that

1:00:24

would actually check him. Yeah. Yeah.

1:00:27

And that's what your story draws out a little

1:00:29

bit, because the use of drugs and particularly

1:00:31

of psychedelics is sort of this hidden force

1:00:33

in Silicon Valley that many

1:00:35

people in positions of authority in

1:00:37

the tech industry specifically are fans

1:00:40

of these drugs for legitimate mental

1:00:42

health issues and productivity, but also

1:00:44

for partying and that there's a

1:00:47

sense in which the

1:00:49

drugs are sort of a hidden mover in the

1:00:51

tech industry today. And I wonder what your thoughts

1:00:53

are on that, having spent so much time reporting

1:00:55

on this. Yeah, I have spent a lot of time

1:00:57

on it. First of all,

1:00:59

I want to say I actually

1:01:02

totally agree with the research behind it.

1:01:04

I've interviewed a lot of doctors

1:01:06

and like legitimate medical professionals who

1:01:08

are working to make ketamine, you

1:01:10

know, ecstasy, psilocybin, all of those

1:01:13

legal and helpful for post-traumatic stress

1:01:15

disorder, depression, all of this. So

1:01:17

that is definitely happening and is legit.

1:01:21

I do think that a lot

1:01:23

more people are using psychedelics,

1:01:25

you know, a lot of

1:01:27

tech executives who we probably know

1:01:29

than we realize, and that it's

1:01:32

way more common. Just no

1:01:34

one still wants to talk about it because

1:01:36

it's illegal, you know, but a

1:01:38

lot of these people are funding some of

1:01:40

these organizations where they're

1:01:43

trying to push for legality

1:01:45

and research medical cures

1:01:47

in part, I think, because it

1:01:50

could help them if done in the

1:01:52

right way. And right now they're doing

1:01:54

it illegally. I will say after your story

1:01:56

came out, I want to put this to you in the interest of

1:01:58

fairness. A friend of mine who works in

1:02:00

the tech industry texted me and

1:02:03

said, why is the Wall Street

1:02:05

Journal talking

1:02:07

about this like it's the end of

1:02:09

the world? Why are we getting this

1:02:11

story that's sort of talking about how

1:02:13

illegal all of these drugs are? And

1:02:16

this is just what people do in

1:02:18

society and they're only

1:02:20

making a big deal out of this because it's Elon Musk.

1:02:22

What do you say to that? I have

1:02:24

heard that from about 10,000 of Elon's fans as

1:02:29

well over the last few days. So I've

1:02:31

definitely heard that. I mean, I

1:02:33

just have to keep going back to the

1:02:35

fact that he is pretty

1:02:38

much the most powerful person in

1:02:40

this country and all his businesses

1:02:42

are integrated with our infrastructure. He

1:02:44

has billions of dollars in government

1:02:47

contracts. And again, like

1:02:49

even if he's holding it together now,

1:02:52

I'm not saying anything's gonna happen, but

1:02:54

it's something we need to know about the

1:02:56

health of one

1:02:58

of our most powerful people in this country. And

1:03:00

I would just say as a gossipy person who

1:03:02

loves mess, thank you so much for reporting this

1:03:04

story. And I hope you do so much more.

1:03:06

Don't worry if other people think it's important or

1:03:09

not because I'm living for it, Kirsten. Okay, thank

1:03:11

you, Casey. Well, yesterday I

1:03:13

was accused of eating live babies by

1:03:15

one of Elon's followers. Go on. Oh,

1:03:18

it was, I almost wanna read

1:03:20

you guys this. It was

1:03:22

a new low. It was

1:03:25

like Kirsten Grind eats live

1:03:27

babies. Well, it sounds like that

1:03:29

person might've been doing some recreational drugs before

1:03:32

they sent that message. And are

1:03:34

you denying on the record that you

1:03:36

eat live babies? I am denying that

1:03:38

on the record, you guys. Just

1:03:41

have to check, in the interest of being thorough.

1:03:44

Does that set the record straight? Definitely. I mean,

1:03:46

as you guys know, like covering

1:03:48

Elon Musk comes with hearing

1:03:51

from his many

1:03:53

thousands of fans. Yeah, yeah.

1:03:55

Millions, probably. Yeah. Yeah,

1:03:57

well, Kirsten Grind, thank you so much for coming on. Thank

1:04:00

you guys so much for having me. When

1:04:06

we come back, we're going to talk about drugs

1:04:08

again. Surprise! But

1:04:12

this time we're talking about the

1:04:14

other kind of drugs, the prescription

1:04:16

ones that AI is helping researchers

1:04:19

discover to treat serious illnesses. So

1:04:29

Casey, as we were sort of planning out

1:04:32

some of our goals for the podcast

1:04:34

this year, one of the topics that

1:04:36

I really wanted to spend more time

1:04:38

talking about is AI and your

1:04:41

Coke is like perched at a

1:04:43

very precarious angle. That's amazing that

1:04:45

that didn't spill. I

1:04:48

know. It was like your Coke can was

1:04:50

literally like leaned against your laptop at a

1:04:52

45 degree angle in a

1:04:55

way that suggested that you were trying to

1:04:57

play some kind of daredevil game whereby it

1:04:59

was going to spill on you. That was

1:05:01

like the old story, Footprints: like Jesus

1:05:03

was carrying me right then. Like I didn't

1:05:05

know it, but he was carrying me and

1:05:07

that's why it didn't spill. Thank you Jesus.

1:05:09

Okay, so Casey,

1:05:12

one of the stories that I have been

1:05:14

sort of devoting more time to trying to

1:05:16

follow recently is what's happening with

1:05:18

AI in the field of medicine. Yes,

1:05:20

because this is a story that I

1:05:22

think everyone who is optimistic about AI

1:05:24

touts as kind of the highest and

1:05:26

best use of this technology. If you

1:05:29

want AI to go faster, this is

1:05:31

one of the best reasons that you

1:05:33

could want it to go faster:

1:05:35

we could discover more drugs more quickly.

1:05:37

Yeah, so this kind of thing is

1:05:39

what a lot of people in tech

1:05:41

and biotech are very excited about. They

1:05:43

say AI is going to be

1:05:45

radically transformative. It's going to help

1:05:47

us, you know, discover new treatments

1:05:50

for cancer and Alzheimer's

1:05:52

disease and heart disease and all

1:05:54

these deadly and debilitating illnesses

1:05:56

and basically AI is going to

1:05:58

turbocharge this

1:06:00

entire field of medicine. And

1:06:03

so I wanted to start covering this

1:06:05

in more detail in 2024, because

1:06:08

there's just a ton of money

1:06:10

and attention and hype and

1:06:13

real promise in the intersection of AI and

1:06:15

medicine. That's right, Kevin. And not only is

1:06:17

there promise, but we are just now starting

1:06:19

to see the fruits of these labors. And

1:06:21

this has gone beyond the realm of, Oh,

1:06:23

wouldn't it be cool if AI could discover

1:06:25

a drug, we are starting to see the

1:06:27

science that, Oh, my gosh, this stuff actually

1:06:29

works. Yeah, this is something that I really

1:06:31

didn't appreciate until I started looking into this.

1:06:33

There's this big healthcare conference, the JP Morgan

1:06:35

Healthcare Conference, which is sort of a big

1:06:37

deal in that world, is happening

1:06:39

in San Francisco this week. And I've

1:06:42

just been reading some of the stuff

1:06:44

coming out of that conference. And it

1:06:46

is remarkable how much of the discussion

1:06:48

in healthcare and medicine today is about

1:06:51

AI, and particularly this use of AI

1:06:53

to discover new drugs. So

1:06:55

I've just had my kind

1:06:57

of antennas up for interesting

1:06:59

and novel stories related to

1:07:01

AI and drugs of the

1:07:04

medical variety. And one

1:07:06

of these stories popped up last

1:07:08

month, researchers at MIT and Harvard

1:07:10

published a paper in the science

1:07:12

journal Nature. They claim to

1:07:14

have discovered an entire class of drugs

1:07:16

using AI and confirmed that these drugs

1:07:19

were successful at combating a type of

1:07:21

bacteria called MRSA. Yeah. And when I

1:07:23

hear the word MRSA, it's always in

1:07:26

the context of why you never want

1:07:28

to be hospitalized. Because apparently in hospitals,

1:07:30

this is a drug resistant infection that

1:07:32

spreads around and can be very difficult

1:07:35

for our existing medicines to treat. And

1:07:37

so it's the exact sort of thing

1:07:39

that we could use some help from

1:07:41

AI to solve. And as it turns

1:07:43

out, AI is already helping researchers trying

1:07:45

to figure out what kinds of chemicals

1:07:48

could be helpful in combating MRSA. And

1:07:50

this is an area where we already

1:07:53

have some evidence that AI is accelerating

1:07:55

discovery. So to talk about this discovery,

1:07:57

we've invited one of the lead authors

1:07:59

of this Nature study, Felix Wong,

1:08:01

to join us. Felix is a

1:08:03

postdoc in the lab of James

1:08:05

J. Collins at MIT where he

1:08:07

worked on this research alongside a

1:08:09

big team of scientists. He's also

1:08:12

the co-founder of a drug discovery

1:08:14

startup called Integrated Biosciences and we're

1:08:16

going to talk to him today

1:08:18

about how AI helps make this

1:08:20

discovery possible. Felix

1:08:29

Wong, welcome to Hard Fork. Thank you

1:08:31

for having me. Hi Felix. So we

1:08:34

are interviewing you today because something very

1:08:36

exciting happened just before the holiday break

1:08:38

which is that a research team that you

1:08:41

are on announced that you had used AI

1:08:43

to discover a new class of antibiotics that

1:08:45

could be effective against MRSA and

1:08:47

I also read in the coverage of

1:08:50

this research that there hasn't really been

1:08:52

a new class of antibiotics discovered in

1:08:54

60 years. So why is

1:08:56

that? Why is it hard to

1:08:58

discover new antibiotics using conventional methods?

1:09:00

Yeah so there is a bit of

1:09:03

hype to that statement. So there have

1:09:05

been new antibiotics as well as a

1:09:07

few new classes of antibiotics discovered in

1:09:09

the past 60 years but certainly not

1:09:12

a lot and in fact most of

1:09:14

the clinically used antibiotics that we use

1:09:16

today were discovered in the 1960s and

1:09:19

we kind of discovered those antibiotics

1:09:21

just by looking at soil bacteria. Turns

1:09:24

out that the bacteria growing in soil wage

1:09:26

warfare on each other and you can just

1:09:28

kind of take their weapons and use

1:09:30

them as antibiotics. Once this

1:09:32

pipeline really dried up there's just been a

1:09:34

dearth of new drug candidates coming out again

1:09:36

because we've already exhausted kind of this natural

1:09:38

source of antibiotics. Yeah we were really good

1:09:40

at drugs in the 60s but after that

1:09:42

it really seems like America lost its way.

1:09:44

So help me understand here because I hear

1:09:47

a lot about you know the use of

1:09:49

AI to discover new drugs and

1:09:51

I want to talk about your specific discovery

1:09:53

process but I also just want to like

1:09:55

understand at a very broad level what does

1:09:58

it mean to say that AI can help

1:10:00

us discover new drugs. Right, because it's not just

1:10:02

going to ChatGPT and saying, hey, got an

1:10:04

idea for a new drug? Right, yeah. Yeah, so

1:10:06

of course one can do that. Go to some

1:10:08

LLM and ask for an idea for a new

1:10:10

drug. The question is, is it accurate? And is

1:10:13

it actually worth following up on whatever the LLM

1:10:15

says? In the case of drug

1:10:17

discovery, things are a bit more niche than

1:10:19

LLMs. So it's not like we're training a

1:10:21

general purpose model in order to just write

1:10:24

us poetry or write us emails or whatever.

1:10:27

It's really about training very specialized models

1:10:29

in order to make very specific predictions

1:10:31

as to whether or not a new

1:10:33

chemical might have antibacterial activity. And so

1:10:36

tell us about the nature of that

1:10:38

predictive step. How is it predicting? Yeah,

1:10:40

so as drug discoverers, what we do

1:10:43

is find needles in large haystacks. And

1:10:45

at least in our work, which is

1:10:47

quite typical of these machine learning drug

1:10:50

discovery approaches, the first step is we

1:10:52

need to get training data. And the

1:10:54

best way to do this is empirically.

1:10:56

So in our case, for instance, we

1:10:59

screened 39,000 compounds. So

1:11:01

one by one in a test tube, we

1:11:04

looked at things including, does the compound

1:11:06

affect MRSA? Does the compound become toxic

1:11:08

to human cells, which you don't want

1:11:10

because in that case, bleach

1:11:12

might also be an effective antibiotic, right?

1:11:14

You had 39,000 different test tubes, each with

1:11:16

a little thing in it? That's basically correct.

1:11:19

So the only kind of qualification there is

1:11:21

that everything is stored, for compactness,

1:11:23

in plates. You could probably fit it in

1:11:25

just a stack of plates here in the

1:11:27

corner of this room. So

1:11:30

when we do the Hard Fork novel

1:11:32

pathogen creation process, that will be a

1:11:34

very compact storage facility. We

1:11:36

have a Thanksgiving episode this year when we

1:11:38

create a novel bioweapon. So I

1:11:41

think I can follow the story now here because you

1:11:43

conduct these 39,000 plus tests. And

1:11:46

I'm going to guess that some of these

1:11:48

compounds that you test seem more promising than

1:11:50

others. And so you're able to feed this

1:11:52

into your system. And then it can just

1:11:54

start to make predictions by saying, well, this

1:11:56

was more promising than that one. And so

1:11:58

here are a bunch of compounds that look

1:12:00

like this one that was more promising. And so

1:12:02

let's look into this a little bit more. That's

1:12:04

true with two caveats. So step two is kind

1:12:06

of the model training. And that's where we dump

1:12:08

in all of the data to kind of these

1:12:10

graph neural networks, which are a type of deep

1:12:13

learning model. So the main

1:12:15

thing about deep learning models and one of the

1:12:17

key innovations of our study is really

1:12:20

that up until now, they've been known as black

1:12:22

boxes. We don't know how the heck it's coming

1:12:24

up with its predictions. It also means that if it's

1:12:26

inside of a plane that falls out of a

1:12:28

sky, it will survive. Just

1:12:30

ignore Casey. OK, I'm sorry. Please, please,

1:12:32

just continue with the science. Go ahead.

1:12:34

The concept is similar in the sense

1:12:37

that we wanted to kind of open

1:12:39

up and make sense of what the

1:12:41

model is doing. We don't necessarily have

1:12:43

to reverse engineer the model. But can

1:12:45

we get to a point where at

1:12:47

least we can be like, ah, this

1:12:49

is what the model is looking for.

1:12:51

Can we identify patterns, say, of chemical

1:12:53

substructures and small molecules? And then can

1:12:55

we use this to guide drug discovery?
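
For a concrete picture of the two steps Felix describes here (an empirical screen, then training a predictor on it), below is a minimal sketch in Python. It is an illustration, not the paper's code: the study trained graph neural networks that operate directly on a molecule's atoms and bonds, while this stand-in uses Morgan fingerprints and a random forest, and the file name and column names (mrsa_screen.csv, smiles, inhibits_mrsa) are invented.

```python
# A minimal sketch of the screen-then-train step, assuming a CSV of
# assay results. The study itself trained graph neural networks that
# operate on atoms and bonds; this stand-in uses Morgan fingerprints
# and a random forest. File and column names are made up.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def featurize(smiles: str) -> np.ndarray:
    """Turn a SMILES string into a 2048-bit Morgan fingerprint
    (assumes the SMILES is valid)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Hypothetical screening results: one row per compound, with a SMILES
# string and a 0/1 label for whether it inhibited MRSA in the assay.
screen = pd.read_csv("mrsa_screen.csv")  # columns: smiles, inhibits_mrsa
X = np.stack([featurize(s) for s in screen["smiles"]])
y = screen["inhibits_mrsa"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Held-out performance tells us whether the model generalizes beyond
# the screened compounds before we trust it on new chemistry.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```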

1:12:58

So one of the kind of key

1:13:00

things about this approach, we kind

1:13:02

of developed this additional kind

1:13:04

of module, if you will, to the

1:13:06

AI model. And what that

1:13:09

module does is it employs a

1:13:11

type of search called Monte Carlo

1:13:13

tree search. That's a word salad.

1:13:15

But the main idea for that

1:13:17

is that we use the same

1:13:19

algorithm as AlphaGo. AlphaGo, the

1:13:21

DeepMind algorithm that was able to

1:13:23

beat the best human Go players. Go,

1:13:25

the board game. Yeah.
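
The Monte Carlo tree search idea is easier to see in code than in words. In the toy sketch below, each move deletes one atom from a molecule, and the search hunts for a small atom subset that keeps the predicted activity high. This illustrates the general algorithm only, not the paper's implementation; the score function is a made-up stand-in for the trained activity model.

```python
# Toy Monte Carlo tree search over substructures: each move deletes one
# atom, and the search looks for a small atom subset that keeps the
# predicted activity high. Illustrative only, not the paper's code.
import math
import random

# Made-up stand-in for the trained model: activity is high when the
# kept atoms cover a hidden "active core". In the real pipeline this
# would call the predictor trained on the screening data.
ACTIVE_CORE = frozenset({3, 4, 7, 11})

def score(atoms: frozenset) -> float:
    coverage = len(atoms & ACTIVE_CORE) / len(ACTIVE_CORE)
    return coverage / (1 + 0.02 * len(atoms))  # prefer small substructures

class Node:
    def __init__(self, atoms, parent=None):
        self.atoms = atoms        # frozenset of atom indices kept
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0          # sum of rollout rewards

    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(atoms, min_size=4):
    # Randomly delete atoms down to a minimum size, tracking the best
    # substructure seen along the way.
    atoms = set(atoms)
    best, best_atoms = score(frozenset(atoms)), frozenset(atoms)
    while len(atoms) > min_size:
        atoms.remove(random.choice(sorted(atoms)))
        s = score(frozenset(atoms))
        if s > best:
            best, best_atoms = s, frozenset(atoms)
    return best, best_atoms

def mcts(molecule_atoms, iterations=500):
    root = Node(frozenset(molecule_atoms))
    best_score, best_atoms = score(root.atoms), root.atoms
    for _ in range(iterations):
        node = root
        # 1. Selection: descend by UCB until we reach a leaf.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # 2. Expansion: one child per possible single-atom deletion.
        if node.visits > 0 and len(node.atoms) > 1:
            node.children = [Node(node.atoms - {a}, parent=node)
                             for a in node.atoms]
            node = node.children[0]
        # 3. Simulation.
        reward, atoms = rollout(node.atoms)
        if reward > best_score:
            best_score, best_atoms = reward, atoms
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best_atoms, best_score

print(mcts(range(20)))  # should home in on the hidden active core
```

The same four phases (selection, expansion, simulation, backpropagation) are what AlphaGo runs over board positions; here the "positions" are candidate substructures.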

1:13:27

What was the moment where you're

1:13:29

fiddling around with your 39,000 plates and you say, wait

1:13:31

a minute, how do they beat that board game again? Yeah.

1:13:35

Exactly. So the moment here for

1:13:38

us was when we applied this

1:13:40

Monte Carlo tree search, this AlphaGo

1:13:42

kind of algorithm, to kind of

1:13:45

identifying new chemical substructures that are

1:13:47

predicted to underlie new classes of

1:13:49

antibiotics. We can now actually confidently

1:13:52

say which parts of a chemical

1:13:54

substructure account for its predicted

1:13:56

antibiotic activity. I see. So after you

1:13:58

get the suggestion for, hey, this is

1:14:00

a promising compound. You have a process that

1:14:03

lets you say, okay, why was this thing

1:14:05

promising? Exactly. And this is quite different from

1:14:07

how we've been using AI in the past,

1:14:09

where AI has really just been, at least

1:14:11

in many drug discovery instances: train a model,

1:14:14

apply it to predict some new stuff, and then

1:14:16

you validate some new stuff, great, call it

1:14:18

a day, go home, or maybe

1:14:20

go to the patent office, whatever it might be. In

1:14:23

this case, because we have this

1:14:25

explainable approach to AI, we can

1:14:27

now identify not just single compounds,

1:14:29

but entire classes of compounds, and that's

1:14:31

what's really salient. So instead of finding a

1:14:34

needle in a haystack, which was the old

1:14:36

approach, you're essentially finding little piles of needles

1:14:38

in the haystack. Yeah, we're finding sewing kits.

1:14:40

Right. But are you

1:14:42

saying that the same technology that helped AlphaGo

1:14:45

discover new moves in a board game just

1:14:47

sort of mapped neatly to discovering new chemical

1:14:49

compounds? That's correct. There's something magical about this,

1:14:51

is that, in a sense, the underlying question

1:14:53

is the same. In the case of AlphaGo,

1:14:56

it was kind of looking at the search

1:14:58

space of all possible moves, and

1:15:00

then predicting or anticipating the opponent's

1:15:02

moves. In our case, for

1:15:04

chemical structures, it was looking at

1:15:06

the combinatorial search space of which subset

1:15:09

of a chemical structure actually accounted

1:15:11

for its predicted activity by the

1:15:14

model. That's crazy. That's wild. Yeah,

1:15:16

I mean, and that's like a big reason that

1:15:18

I think people are so optimistic about AI for

1:15:20

drug discovery, is that it turns

1:15:23

out that some of these other problems that

1:15:25

people have been using AI to address, like

1:15:27

playing a board game, or

1:15:29

predicting the next word in a sentence, turn

1:15:31

out to also be very valuable for

1:15:33

other kinds of basic scientific research. Yeah.

1:15:36

And is that a kind of prediction that a

1:15:38

researcher, like a human, could do, but it would

1:15:40

just take them forever? Or is this just fundamentally

1:15:43

like a new kind of ability? This is fundamentally

1:15:45

different. So what a human might do is, because

1:15:47

we do not have any first principles kind of

1:15:49

approach to understanding whether or not this new compound

1:15:51

might work, what a human might do would just

1:15:54

be to brute force screen them and

1:15:56

say, well, maybe I invest a few hundred million

1:15:59

into this project, buy all of these

1:16:01

millions of compounds and then just brute force them

1:16:03

all. But the main idea of

1:16:05

kind of this machine learning approach is that

1:16:07

it can enable us to now start to

1:16:09

generalize beyond our training data set and look

1:16:12

for maybe often subtle patterns

1:16:14

in the arrangements of atoms and bonds

1:16:16

in a chemical structure in a way

1:16:18

that humans just can't do. You could show

1:16:21

me a lot of pictures of

1:16:23

the chemical structures of beta-lactams and quinolones

1:16:25

and other known antibiotics, and I couldn't really

1:16:27

point you to this new class of

1:16:30

antibiotics that we discovered and described. So as

1:16:32

I mentioned, the main prediction step here in step

1:16:34

three isn't of single hits anymore.

1:16:37

It's of entire chemical substructures

1:16:39

that define hundreds, if

1:16:41

not thousands, of different chemical compounds.

1:16:43

Right. You're discovering like a

1:16:45

new class of potential drugs, not just like one or two.

1:16:47

Exactly. Yeah, exactly. So

1:16:49

you get back this list. You

1:16:51

shove all this stuff into this neural network. You get back

1:16:54

this list of a bunch of

1:16:56

compounds that might be helpful against MRSA. I

1:16:59

assume then you have to actually go figure out

1:17:01

whether they actually are helpful against MRSA. Oh,

1:17:03

yeah, exactly. So the first aha moment

1:17:06

for us was to actually get this list

1:17:08

in the first place. We had no guarantees

1:17:10

that anything would actually even give us an

1:17:12

output. So we were quite surprised and elated

1:17:14

really when we actually got something from the

1:17:17

algorithm identifying new structural classes of

1:17:19

putative antibiotics, in this case putative, because

1:17:22

as you mentioned, Kevin, we still have to validate

1:17:24

them. So in the end, what we actually did

1:17:26

was we bought around 280 compounds that had high

1:17:29

predicted antibiotic activity, several

1:17:32

of which were also predicted to underlie a new

1:17:34

class of antibiotics. Right now, is there a company

1:17:36

that'll just make any compound for you and sell

1:17:38

it to you? Yeah, can you just go on

1:17:40

Amazon and buy some compounds? Yeah, in fact, not

1:17:42

Amazon, unfortunately, otherwise, you know. You could

1:17:44

get the free delivery with Prime. Exactly. You could

1:17:47

get free delivery with Prime. You can

1:17:49

do garage experiments as well. But

1:17:51

in our case, there are actually

1:17:53

commercially available compounds from synthesis

1:17:56

suppliers as well as chemical

1:17:58

suppliers, many of which

1:18:00

are well known in the field. Great. So

1:18:02

then you have to test these things. How do you

1:18:04

test these things? Yeah, so as I mentioned, we bought

1:18:07

around 280 compounds that

1:18:09

had high predicted antibiotic activity, low

1:18:11

predicted toxicity to human cells. And

1:18:13

also they were quite structurally distinct

1:18:16

from known antibiotics. And so that's

1:18:19

one of the main takeaways of

1:18:21

our work is that we found two

1:18:23

compounds that share the same predicted sub-structure

1:18:26

that defines the new structural class

1:18:28

of antibiotics. And we

1:18:30

found that these compounds work. But

1:18:33

in the end, one of the main

1:18:35

experiments to do is really, does it work

1:18:37

for treating a mouse model

1:18:39

in vivo? A mouse?

1:18:42

A mouse model. So for

1:18:44

instance, like what we did in our work

1:18:46

was we had two mouse models. One was

1:18:48

where we just scraped off the skin of

1:18:50

mice. And then we infected that skin with

1:18:52

MRSA. And that was a topical model in

1:18:54

which you can just apply a cream on

1:18:57

the wound. The other model was

1:18:59

a systemic model. And this is where things

1:19:01

start to get a bit more interesting because

1:19:03

systemic infections underlie the most deadly bacterial infections,

1:19:06

including those leading to sepsis and

1:19:08

other things. So these two compounds

1:19:11

that you discovered using your neural network, they

1:19:13

actually did cure or treat MRSA

1:19:15

in these mice? That's correct. And that

1:19:17

was our second aha moment. So we

1:19:19

found that administration of one of the

1:19:21

compounds of this structure or class actually

1:19:23

decreased MRSA by over 90% in both

1:19:25

models. Got it. So

1:19:28

the process, I'm just gonna repeat this back one more time, just

1:19:30

to make sure I understand that you acquire the data. Oh, you're

1:19:32

gonna do this at home later? Yeah, I am. Yes,

1:19:35

I need the address of that website that sells

1:19:37

you the novel chemical compounds. So

1:19:40

you get the data, you train the

1:19:42

neural network on that data, you use

1:19:45

this kind of like AlphaGo Monte Carlo

1:19:47

tree search technique to like figure

1:19:50

out what the heck is happening inside

1:19:53

the neural network, why it's giving you back

1:19:55

these predictions. And then you get these suggestions

1:19:57

that say these 10

1:19:59

compounds, or these however many compounds, might be

1:20:02

effective against MRSA and you go and you

1:20:04

rub some cream onto some mice to see

1:20:06

whether it actually works. Is that more or

1:20:08

less what happens? Yeah. I would

1:20:11

also add that we, in addition to rubbing some

1:20:13

cream on mice, we also inject the mouse with

1:20:15

some compound for the systemic model. So yeah. How

1:20:17

are the mice doing? Well, the

1:20:19

mice unfortunately are currently all dead. We have

1:20:22

to sacrifice all of them in order to

1:20:24

extract the bacteria. Okay. So

1:20:28

not a good day for the mice. But potentially they

1:20:30

are going to be... Their sacrifice was not

1:20:32

in vain. Exactly. Because we are

1:20:34

going to have maybe some drugs that actually do treat

1:20:37

this in humans. And that leads me to my next

1:20:39

question, which is, I've been hearing a lot about AI

1:20:41

drug discovery now for what feels like a couple of

1:20:43

years. I know there are a bunch of companies and

1:20:46

labs out there getting funding to use

1:20:48

AI to discover new drugs for certain

1:20:50

common illnesses. I also know that there

1:20:53

have been some companies that have raised

1:20:55

a bunch of money, used AI to

1:20:57

discover some drugs, and then went through

1:20:59

clinical trials and the drugs didn't

1:21:01

work. Or they didn't work as well

1:21:03

as the AI models predicted that they

1:21:05

would. So is there kind of

1:21:07

a step here that you all are taking?

1:21:10

Like the AI model predicts that these compounds

1:21:12

will work against MRSA, but then when you

1:21:14

go to test it in humans, it actually

1:21:16

doesn't work as well as your model predicted

1:21:19

it would. Is there a danger that there's

1:21:21

sort of some missing middle step there? Yeah,

1:21:23

for sure. And so how I like to

1:21:25

think about this is that AI in general

1:21:28

can help with one of two things. It

1:21:31

can help with discovering new compounds

1:21:33

for basic research and also preclinical

1:21:35

development as we do in our work. And

1:21:38

AI can also inform clinical trials and how

1:21:40

you administer them. That I'm kind of

1:21:42

less an expert on, so I won't really comment too

1:21:44

much about that. But at least for the

1:21:46

former, using AI to discover

1:21:48

new compounds, basically it kind

1:21:51

of ends at that. We

1:21:53

really use AI as a tool to

1:21:55

discover new compounds that ultimately must be

1:21:57

tested and still rather traditionally.

1:22:00

So as I mentioned, even for antibiotics, we had to run

1:22:02

a battery of traditional

1:22:04

microbiological assays, experiments, to determine

1:22:06

what the mechanism of action

1:22:09

is. We had to... I

1:22:11

mean, AI did not help us with

1:22:13

dissecting the mouse or anything. So all

1:22:15

of that is quite traditional. But for

1:22:18

sure, I think things are still in

1:22:20

early days, as well as AI itself

1:22:22

might currently be best utilized

1:22:24

for searching large search spaces, as we

1:22:26

kind of mentioned. Right. It sounds like

1:22:28

the main thing that AI brings to

1:22:30

the process of drug discovery is just

1:22:33

being able to kind of shrink the

1:22:35

haystacks, take millions and millions of potential

1:22:37

chemical compounds and give you a list

1:22:39

of the 20 or 30 most

1:22:42

promising ones for treating a given disease.
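
Computationally, shrinking the haystack amounts to batch prediction plus ranking. Below is a hedged sketch that reuses the hypothetical model and featurize from the earlier training example; the file name, batch size, and thresholds here are invented.

```python
# Sketch of the "shrink the haystack" step: score a large candidate
# library with the trained model and keep the most promising compounds.
# Reuses the hypothetical `model` and `featurize` from the training
# sketch; the file name, batch size, and thresholds are invented.
import numpy as np
import pandas as pd

candidates = pd.read_csv("candidate_library.csv")  # column: smiles

scores = []
for start in range(0, len(candidates), 10_000):  # batch to bound memory
    batch = candidates["smiles"].iloc[start:start + 10_000]
    X = np.stack([featurize(s) for s in batch])
    scores.append(model.predict_proba(X)[:, 1])
candidates["p_active"] = np.concatenate(scores)

# Keep high predicted activity. In the study as described, compounds
# were also filtered for low predicted toxicity to human cells and for
# structural distinctness from known antibiotics before ~280 were bought.
shortlist = (candidates[candidates["p_active"] > 0.9]
             .sort_values("p_active", ascending=False)
             .head(300))
print(shortlist[["smiles", "p_active"]])
```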

1:22:44

Exactly. At least personally, that's how

1:22:46

I feel AI has created a

1:22:48

lot of value. It's really for

1:22:50

initial stages of drug discovery, where you want

1:22:52

to shrink the haystack in order to make

1:22:55

things a bit more manageable. But once you

1:22:57

find a needle, I mean, there is no

1:23:00

guarantee that that needle is sharp, that you have

1:23:02

a great needle. And so I think

1:23:04

at least today

1:23:06

we still do not have great tools

1:23:08

to inform that process. Actually,

1:23:11

before you got here, Casey did actually

1:23:13

volunteer to be a human guinea pig

1:23:15

for any AI discovered drugs. Yeah, you

1:23:17

bring one of those MRSA syringes with

1:23:19

you. Unfortunately. Maybe we could expedite this

1:23:22

process and potentially sacrifice you

1:23:24

in the name of science. I mean, this

1:23:26

is really interesting, Kevin, because I think it

1:23:28

speaks to a question that we have had

1:23:30

over the past year or so, which is,

1:23:33

what is the ideal relationship between human beings

1:23:35

and artificial intelligence? Right. And what

1:23:37

Felix is describing for us here is

1:23:39

a system where people are able to use

1:23:42

AI to develop greater understanding, essentially working

1:23:45

like not hand in hand. That's like

1:23:47

too anthropomorphic. But they are using this

1:23:49

as a tool to further their own

1:23:51

research. It's not quite a creative tool,

1:23:54

but it is a tool that enables

1:23:56

human beings to be more creative while

1:23:58

deepening their scientific understanding. And this

1:24:00

was a really exciting thing. And to automate a

1:24:03

manual labor process that would take probably

1:24:05

centuries to do by hand, as

1:24:08

I hear you describe it, it's

1:24:10

basically creating a lab with tens

1:24:13

of thousands of scientists' worth of labor

1:24:15

that you can use to go through

1:24:17

this huge list of compounds and screen

1:24:19

them all very quickly. Yeah, that's one

1:24:21

way to think about it. And

1:24:24

this idea of scale is quite important because,

1:24:26

at least in our paper, we looked at

1:24:28

12 million compounds in a candidate set. But

1:24:31

in principle, drug-like chemical space, which is

1:24:33

all possible really small

1:24:35

molecule compounds, is 10

1:24:38

to the 60 compounds. That's basically

1:24:40

infinity for most practical purposes. So

1:24:43

you need a couple of postdocs to get through all that.

1:24:46

I have a last question, which is how close

1:24:48

are we to an AI that could actually

1:24:51

automate the testing part of this? It

1:24:54

seems sort of brutish and antiquated to

1:24:56

have to get a bunch of mice

1:24:58

and inject them with stuff and then

1:25:00

maybe move up to monkeys or some

1:25:02

other animal and then do it in

1:25:04

humans and have this whole long process.

1:25:07

Is there no way that you could

1:25:09

use AI to accurately simulate how

1:25:12

a mouse would react to a given compound?

1:25:14

Or do we still need this sort of

1:25:16

hands-on in vivo testing? Did I

1:25:18

use in vivo correctly? That was beautiful.

1:25:20

No, that was great. Wow. I'm

1:25:23

not really a scientist, you know. I'm so

1:25:25

happy with myself for remembering that fact from

1:25:27

biology class. Or is it

1:25:29

the case... He didn't remember it from... He just said it 10

1:25:31

minutes ago. That's true. That's true.

1:25:33

I'm sorry. So is it actually possible that

1:25:35

we could use AI in that phase of the

1:25:38

testing too? Yeah, that's a great question. So of

1:25:40

course there's a huge AI for science movement of

1:25:42

which this work is a part. I think parts

1:25:45

of science are still way too complex

1:25:47

for us to accurately model. And at

1:25:49

least personally I believe that includes how

1:25:51

do we simulate a whole mouse in

1:25:53

terms of all the organs, physiology, etc.

1:25:55

So I think we are still a

1:25:57

ways off from that. Perhaps

1:26:00

one of the things that we could also

1:26:02

consider is using AI for robotics. And

1:26:04

so I think that is quite an interesting

1:26:06

field because eventually, if you use AI to

1:26:08

do science, you're going to have to interface

1:26:10

with the physical world. And of

1:26:12

course, that's something that, you know, not a

1:26:15

lot of companies are doing nowadays. So you're

1:26:17

saying it's possible that in a few years

1:26:19

we could have like an army of bacteria

1:26:21

resistant robot mice. It's

1:26:25

possible or I think maybe a

1:26:27

way to look at this

1:26:29

might be, you know, in the short term,

1:26:31

maybe AI could, like, automate mouse farms

1:26:33

and like very high throughput experiments with handling

1:26:35

mice, especially if the robotics are correct. But

1:26:37

that would kind of look quite dystopian and

1:26:40

not quite like, you know, the AI for science

1:26:42

that we have in mind. Yeah, well, I

1:26:44

just got a new idea for a screenplay.

1:26:46

So the rat in two easy quote we

1:26:48

never knew it on. All right. So

1:26:53

thank you so much for coming on. Hard

1:27:28

Fork is produced by Rachel Cohn and

1:27:30

Davis Land. We're edited by Jen Poyant.

1:27:32

This episode was fact checked by Mary

1:27:34

Mathis. Today's show was engineered

1:27:36

by Alyssa Moxley. Original

1:27:38

music by Marion Lozano, Diane

1:27:40

Wong, Pat McCusker and Dan

1:27:42

Powell. Our audience editor

1:27:44

is Nell Gallogly. Video production

1:27:46

by Ryan Manning and Dylan Bergeson. If

1:27:49

you haven't already followed us on YouTube,

1:27:51

check us out: youtube.com/hardfork. Special

1:27:54

thanks to Paula Szuchman, Pui-Wing Tam,

1:27:56

Kate LoPresti and Jeffrey Miranda. As

1:27:59

always, you can... email us at

1:28:01

hardfork at nytimes.com. I

1:28:04

feel like we're wrapping so early. Let's do the show again just for

1:28:06

safety. Oh, let's go

1:28:08

get some sandwiches. I

1:28:12

feel like now that we've done it once, we could like really nail

1:28:14

it on the second go through. Yeah?

1:28:17

Yeah. Okay, all right, let's do it again. Okay.
