Who should be in charge of AI?

Released Friday, 1st December 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

Casey Newton, welcome to Search Engine. CASEY NEWTON Hi! Also,

0:02

is this another week where you're supposed to be on

0:04

vacation? Uh, not really. I mean,

0:06

like, today is a work day for me. I

0:09

am supposed to be off starting tomorrow, but

0:11

I fully expect I'll be making between three

0:13

and seven emergency podcasts in the next week.

0:16

Who invented the emergency podcast? CASEY

0:18

NEWTON You know what? It, like...

0:21

An emergency podcast is, like, a stupid, like, self-aggrandizing

0:23

name, but the point of a podcast is it's,

0:26

like, people that you hang out with, like,

0:28

during these moments in your life. So,

0:30

when something happens in a world that you care about, you

0:32

actually want to hang out with your friends so you talk

0:34

about that stuff with. Yeah, and I actually, I get a

0:36

real thrill when I see an emergency podcast. I do have

0:38

this joke, which is, like, there's certain things that if you

0:40

put them in front of another word, it negates the meaning

0:42

of the word, and podcast is one of them. Like, podcast

0:45

famous, you're not famous. Emergency

0:47

podcast, it's not an emergency. But

0:50

I do get the adrenal thrill of an emergency podcast.

0:52

Do you think podcast cancels out more words than,

0:54

like, most other words? Yes. I

0:56

think if you've had to put podcast in front of

0:58

it, it's not that thing anymore. And podcast successful. Uh...

1:02

-...ha-ha-ha-ha-ha-ha-ha. This

1:05

week on Search Engine, an emergency podcast. Can

1:08

it be an emergency podcast two weeks after the news event? Sound

1:10

off in the comments. But this week, our

1:12

urgent question, who should actually

1:15

be in charge of artificial intelligence? That's

1:17

after some ads. How

1:19

did you actually sleep last night? If

1:22

it wasn't an absolute dream, then you

1:24

need to upgrade to the softest, most

1:27

luxurious sheets from Boll & Branch. They're

1:29

made from toxin-free organic

1:31

cotton and get softer with every single

1:34

wash. Millions of Americans are sleeping better

1:36

in their signature sheets, and right now

1:38

you can take twenty percent off so

1:40

hurry to B-O-L-L-and-

1:43

branch.com and use Code Odyssey for twenty

1:45

percent off today. Exclusions apply. See site

1:47

for details. All

2:15

right, here's a question to make

2:17

pretty much any room uncomfortable. Get

2:19

everybody's attention and ask, who

2:21

do we think should be in charge here?

2:24

Do we all agree that the right person is

2:26

running things right now? Who

2:29

should get to make the final decision in your

2:31

family, in your workplace? Should

2:34

one person really be in charge? Should power be

2:36

shared? Sharing sounds good. Okay.

2:40

With who? How much? According

2:42

to what criteria? Look,

2:45

sometimes we ask the fun questions on this

2:47

show about toxic airplane coffee or the ethics

2:49

of cannibalism, but these questions about

2:51

power, I don't think these are

2:53

the cute ones. These are the

2:55

questions that start revolutions. These

2:58

are the questions that transform places or

3:00

sometimes destroy them. Who

3:03

should be in charge? Our

3:07

country was founded as an answer to that

3:09

question. We're told by the

3:11

third grade that America is a democracy.

3:13

The people are in charge. At

3:16

junior high, they walk that back a little. They

3:18

tell us it's a representative democracy, which is

3:21

a bit different, much less exciting. Just

3:24

because our country is a representative democracy, it

3:26

doesn't mean every institution in our country

3:28

will be one. There's

3:30

this word governance, which is so boring

3:32

your brain can't even save it. But

3:35

ironically, it refers to the most interesting thing in

3:37

the world. Who is in

3:39

charge of you? Most

3:43

American businesses have somewhat funky governance structures,

3:45

which we stole from the British. The

3:48

typical corporate governance structure goes like this. There's

3:51

a boss, CEO, with most of the power.

3:54

But they're accountable to a board above them, a

3:56

small group of people who can depose them, at

3:58

least in theory. And the

4:01

board usually represents the shareholders. Often

4:03

the shareholders even vote to elect

4:05

the board. This structure

4:07

of collective decision-making, of voting,

4:10

of elections, it

4:12

has existed and evolved since way

4:14

before American democracy. The

4:17

corporate board model comes from England in the

4:19

1500s. Back

4:21

then, England was a monarchy, but its

4:23

companies were not. They were like, not

4:26

democracies, but democracy-esque

4:29

organizations existing in a country

4:31

devoted to the rule of the king. They

4:35

represented a different answer to this who

4:37

should be in charge question. We

4:40

took that corporate structure with us when we

4:42

divorced England, and in 1811, corporations

4:45

really took off in America. That

4:47

year, New York State became the first to

4:49

make it legal for people to form a

4:51

corporation without the government's explicit permission. Over

4:54

the next 200 years, corporations have become

4:56

very powerful. And in

4:58

that time, their CEOs have learned and

5:00

taught one another how better to consolidate

5:02

power. CEOs today,

5:05

particularly the CEOs of big

5:07

tech companies, are less likely

5:09

to answer to their boards or to their

5:11

shareholders, if they even have them. These

5:14

days, in America, our country is

5:16

a democracy, and the corporations are

5:18

the exceptions. Not monarchies

5:21

exactly, but little monarchy-esque

5:23

organizations in a country devoted to

5:25

the rule of the people. Who

5:29

should be in charge? In

5:32

America, we know we don't trust kings, but

5:34

we don't always trust the people. So

5:37

for now, the people sort of run the country,

5:39

and the techno-kings mostly run their businesses.

5:43

But the tension about who should hold

5:45

power remains unresolved. It

5:47

crackles. Sometimes it erupts in minor

5:49

revolutions in all sorts of places.

5:52

And exactly two weeks ago, it erupted

5:55

at a technology company. Breaking

5:59

news. incredibly

8:00

world-altering powerful. So

8:03

this never resolved question, who

8:06

should exercise power and how? It

8:08

just got even more complicated. Because

8:10

now we have to decide which people

8:13

or person should be in charge of

8:15

artificial intelligence, a technology designed

8:17

to become smarter than human beings.

8:21

Well, let's take a step back. OpenAI

8:23

is the most

8:25

important company of this generation.

8:28

For the past two weeks, as the story

8:30

has unfolded, I've been talking to Casey Newton,

8:32

who publishes the excellent newsletter, Platformer. When

8:35

we spoke last week, he was reminding

8:37

me exactly how important the story of

8:39

OpenAI is, even before this latest

8:42

chapter. It is not

8:44

super young, it was founded in 2015,

8:47

but with the launch of Chat GPT

8:50

last year, it started

8:52

down a road that very few companies

8:54

get to start down, which is the

8:56

road to becoming a giant consumer platform

8:59

that you had mentioned in the same

9:01

breath as a Google or a Facebook

9:04

or a Microsoft. And

9:07

when you are seeing that in the case of

9:09

Chat GPT, you have a product that is being

9:11

used by 100 million people a week. And

9:14

you have a CEO who has become the

9:16

face of the industry. Sam Altman has become

9:18

essentially the top diplomat of the AI industry

9:21

over the past year. The

9:23

number of reasons that

9:26

you would fire that person with

9:28

no warning is just

9:30

extremely small. And

9:33

the idea that even after he was fired,

9:35

you still would not say with any specificity

9:38

what he did is even smaller. Those

9:40

are just some of the reasons why this has just been such

9:42

a crazy story. And when you saw

9:44

it, how did you get the news? I'm

9:49

happy to tell you that story. My

9:51

parents were in town and they asked if we

9:53

could have lunch. And I

9:55

thought, I'm gonna take them to a really

9:57

nice lunch in San Francisco at an institution.

10:00

called the Zuni Cafe. Zuni

10:03

Cafe is known for a roast chicken

10:05

that is so good but it does

10:08

take an hour to cook. So we

10:10

order a few snacks and my parents being my

10:13

parents said hey why don't we get a couple

10:15

cocktails and a bottle of wine and I said

10:17

guys it's 11 45 a.m.

10:19

but you know what let's

10:21

do it. So bottle of

10:23

wine comes the cocktails come we have

10:25

our snacks and we're waiting for the

10:27

chicken and I think I'm

10:30

gonna use the restroom and I get up

10:32

to use the restroom and look at my

10:34

phone and I see the

10:36

news because 78 people have been texting me saying

10:41

holy motherfucking shit what

10:44

is happening? And

10:47

so I go back to the

10:50

table and explain to my parents

10:52

everything that I have to about OpenAI's drama

10:54

and everything and then I walk outside and I

10:56

get on a Google Meet with my podcast co-host

10:59

because of course we're gonna need to do an

11:01

emergency episode and I just stare at my parents

11:03

through the window and watch the chicken arrive at

11:05

the table and them start to eat it. So

11:07

you never got to eat the chicken? I did

11:10

well eventually the Google Meet ended and I got

11:12

to have some chicken and it was delicious but

11:15

there was a while there where I was quite hungry and

11:17

jealous of them. And so you

11:19

guys the initial thing is just like holy

11:21

crap this was nuts and like

11:23

was your instinct oh there's

11:26

going to be like like the board is

11:28

gonna come forward and say like hey he's

11:31

done something awful like were you waiting for

11:33

a shoe to drop? Absolutely

11:35

because there again

11:38

the number of reasons why the board would

11:40

have fired him is just very small right

11:43

when I saw it my thought was it's

11:45

always either money or sex is

11:48

why a high-profile person loses their

11:50

position right? And the

11:52

board's description didn't really lean one

11:54

way or another in that direction.

11:56

I started to you know people just started

11:58

to speculate, firing theories at me.

12:02

But again, because this was such a

12:04

consequential move, the expectation was always that

12:06

even if the board wouldn't say it

12:09

in their blog post, they would at

12:11

least tell their top business

12:13

partners, they would tell the top executives at

12:15

OpenAI, and then it would just sort of

12:18

filter out to the rest of us what

12:20

actually happened. But days later, that was still

12:22

not the case. Even after the company was

12:24

in open revolt with 95% plus of the

12:27

company threatening to walk out the door if

12:29

the situation wasn't reversed, the board still wouldn't say

12:31

what happened. Have you ever

12:33

seen anything like that before? Um,

12:37

well, I mean, look, CEOs get fired.

12:39

There's actually an argument that CEOs don't

12:41

get fired enough, right? Like we live

12:43

in the Silicon Valley bubble where we

12:46

have a cult of the founder, and

12:48

there is a very strong feeling that

12:50

the founder should almost never be removed

12:52

because the company cannot survive without them.

12:54

And so it's always very dramatic when

12:56

a founder gets removed, right? Like probably

12:58

the biggest founder drama I can remember

13:00

before this one was the removal of

13:02

Travis Kalanick from Uber. The difference

13:04

there was that Uber had been involved in a lot

13:07

of public wrongdoing before

13:09

he was removed. And so there was

13:11

kind of a steady drum beat of

13:13

stories and people calling for him to resign before

13:16

that happened. But even then, his board members

13:19

turned on him. And in Silicon Valley, that

13:21

is a taboo. For someone that you

13:23

appoint to your board and you say, be

13:25

a good steward of my company. The expectation

13:27

is you are never going to remove the

13:29

founder. And in fact, we have other Silicon

13:31

Valley companies where the founders have insulated themselves

13:33

against this by just designing the board differently.

13:35

So Mark Zuckerberg has a board that

13:37

cannot remove him. Evan Spiegel at Snap

13:40

has a board that cannot remove him.

13:42

So again, that's just kind of the

13:44

way things operate here. And

13:46

how does a founder choose their board

13:48

members? So the most

13:50

common way is that, say, you, PJ, run a venture

13:52

capital firm, which I do think you should, we

13:55

need to talk to you about that. So I

13:57

come to you and I want to. I

14:00

want to get some of your money. You say,

14:02

okay, I will buy this percentage of your company

14:04

for this amount, but I want to take a

14:06

seat on your board. And the

14:09

idea is, hey, if I'm going to have a

14:11

lot of money locked up in your company,

14:13

I want to be able to have a

14:15

say in what happens there. I see. And

14:17

normally speaking, normal company, Facebook, whatever, you've

14:20

got a board, they have a little

14:22

bit of a say because it's their money,

14:24

but a powerful founder of a powerful company

14:26

will set it up so that they don't

14:28

have much of a say. Yeah,

14:31

basically, they create a different

14:33

kind of stock, and

14:35

they will control the majority of that

14:37

stock. And that stock has some sort

14:39

of super voting powers. So when the

14:41

board goes to vote on something, their

14:43

votes will never exceed the number of

14:45

votes cast by the founder.
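To make that mechanism concrete, here is a minimal, purely hypothetical sketch of dual-class, super-voting stock. None of these share counts or multipliers come from a real company; they're just illustrative numbers.

```python
# Hypothetical illustration of dual-class (super-voting) stock.
# All numbers are invented for the example; real multipliers and
# share counts vary by company.

FOUNDER_CLASS_B = 20_000_000     # founder's super-voting shares
INVESTOR_CLASS_A = 80_000_000    # everyone else's ordinary shares
VOTES_PER_B = 10                 # e.g. 10 votes per Class B share
VOTES_PER_A = 1                  # 1 vote per Class A share

founder_votes = FOUNDER_CLASS_B * VOTES_PER_B
investor_votes = INVESTOR_CLASS_A * VOTES_PER_A
total_votes = founder_votes + investor_votes

economic_stake = FOUNDER_CLASS_B / (FOUNDER_CLASS_B + INVESTOR_CLASS_A)
voting_power = founder_votes / total_votes

print(f"Founder owns {economic_stake:.0%} of the company")   # 20%
print(f"Founder controls {voting_power:.0%} of the votes")   # ~71%
```

With numbers like these, a founder holding only 20 percent of the equity still controls about 71 percent of the votes, so no coalition of outside shareholders can outvote them, which is roughly the arrangement Casey is describing at companies like Meta and Snap.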

14:50

The OpenAI board was set up very differently, which

14:52

I'm sure we'll talk about. And so it

14:54

made this sort of thing possible, but absolutely

14:56

nobody saw coming. After

15:02

the break, the strange origin story

15:05

of OpenAI. And how it

15:07

led to the events of this month. Hey,

15:41

it's Ryan Reynolds, owner and user of Mint Mobile,

15:43

and I am recording this message on my

15:46

phone. I'm literally on my Mint phone. Why?

15:48

Because fancy recording studios cost money. And if

15:50

we spent money on things like that, we

15:52

couldn't offer you screaming deals. Like if you

15:54

sign up now for three months, you get

15:56

three months free on every one of your

15:58

plans, even unlimited, at mintmobile.com/switch.

16:01

Limited time, new customer offer. Activate within 45 days. Additional

16:04

taxes fees and restrictions apply. Unlimited customers using more

16:06

than 40 gigabytes per month will experience lower speeds.

16:08

Video streams at 480p. See mintmobile.com

16:10

for details. Why is Instacart

16:12

the holiday rescue app? Because you

16:15

can get all your seasonal decor

16:17

delivered instead of having to drive

16:19

to 12 different stores. Candles and

16:22

candy canes delivered. Wreaths and reindeer

16:24

delivered. Lights from Lowe's delivered. And

16:26

since I know you're going to

16:29

ask. Inflatable snowman delivered. So

16:31

this season stay in and get decked

16:33

out. Download Instacart the holiday rescue app

16:35

to get free delivery on your first

16:38

three orders. Offer valid for a limited

16:40

time. $10 minimum per order, additional terms

16:42

apply. Thank

16:59

you all for sticking around this afternoon. We

17:01

had some great conversations and we're hoping to

17:03

have another great one. It's

17:05

the fall of 2015. Just a couple

17:07

months before OpenAI would be willed into

17:09

existence. Elon Musk and

17:11

Sam Altman are on stage together at

17:13

this conference on a panel called What

17:16

Will They Think Of Next? The questions are

17:18

about artificial intelligence, this technology that

17:21


17:23

in 2015 still felt way

17:25

off in the future. And Elon could share

17:27

with us their positive vision of

17:30

AI's impact on our coming life. Sam

17:33

Altman, who at the time is the

17:35

head of Y Combinator. He

17:37

goes first. I think there are, the science fiction

17:39

version is either that we enslave it or it

17:41

enslaves us. But there's this happy symbiotic vision, which

17:44

I don't think is the default case, but what

17:46

we should work towards. I think already. Sam's dressed

17:48

like a typical 2015 startup guy. Blazer,

17:51

colorful sneakers. What I

17:53

noticed is his eyes, which to me always

17:56

look concerned. Like someone whose car just made

17:58

a weird noise at the beginning of a long road

18:00

trip. In 2015,

18:02

Sam Altman has a reputation as

18:05

a highly strategic, deeply ambitious person,

18:08

but also someone a bit outside of

18:10

the typical Silicon Valley founder mold. He's

18:13

made a lot of money, but says he's donated most of

18:15

it. He's very obsessed with universal

18:17

basic income. The

18:19

kind of person who tells the New Yorker that

18:21

one day he went on a day-long hike with

18:23

his friends and during it made

18:26

peace with the idea that intelligence might

18:28

not be a uniquely human trait. He

18:30

tells the magazine, quote, there are certain

18:33

advantages to being a machine. We humans

18:35

are limited by our input output rate. He

18:37

says that to a machine, we

18:40

must seem like slowed down whale

18:42

songs. But

18:44

I don't think there's any human left that

18:47

understands all of how Google search results

18:49

are ranked on that first page. It really

18:51

is. On stage, Sam's pointing out the

18:53

ways in which AI is already here.

18:55

We're already relying on machine learning algorithms we

18:57

don't entirely understand. Google search

18:59

results or the algorithms that run

19:02

dating websites. In this case, the

19:04

computer matches us and then we have babies, and they

19:06

then have babies and so on. In effect, you

19:08

know, you have this like machine learning

19:10

algorithm breeding humans. And so really,

19:12

I mean, you do. And

19:15

so there's this and then, you know, those people like

19:17

work on the algorithms later. And

19:19

so I think the happy vision of the future

19:21

is sort of humans and

19:24

AI in a symbiotic relationship, distributed AI,

19:26

where it sort of empowers a lot

19:28

of different individuals, not this

19:30

single AI that kind of governs everything that

19:32

we all do that's, you know,

19:35

a million times smarter, a million times smarter than any other

19:37

entity. So that's what I guess we should work towards. Elon

19:39

goes next. I agree with

19:41

what Sam said. I mean, we

19:44

are effectively already a human

19:47

machine collective symbiote.

19:50

Like this, like like a like

19:53

a giant cyborg. That's

19:56

actually what society is today. No

19:59

one in the room. cool-headed

22:00

and averting apocalypse instead of like steering

22:02

wildly into it is not a thesis

22:04

that survives modern times. Exactly.

22:08

And so then they think like well, maybe

22:10

we do it as a for-profit company, right?

22:12

Like Sam Altman at the time was running

22:14

Y Combinator, which is the most famous startup

22:16

incubator in the United States. It's responsible for

22:19

Stripe and Dropbox and a number of other

22:21

famous companies. So the obvious thought was well

22:23

why don't we just do it as a

22:25

venture-backed startup? But the more they

22:27

think about it, they think well, gosh, if we're

22:29

again building a super intelligence we don't want

22:31

to put that into the hands of one

22:34

company. We don't want to concentrate power in

22:36

that way because we think this thing could

22:38

be really beneficial. And so we want to

22:40

make sure that everyone reaps the benefits of

22:42

that. So that leaves them with a

22:44

nonprofit and that winds up being the direction to

22:46

go in. And this might be

22:48

jumping ahead but like my

22:51

guess would be like one of the

22:53

reasons as I understand it the technology

22:55

usually moves at like the fastest pace

22:57

it can instead of the most judicious

22:59

pace it can is because if you're

23:02

moving slowly someone else will move more

23:05

quickly, more faster, faster, faster.

23:07

More faster quickly. More faster

23:09

quickly. And so why did

23:11

the responsible company succeed this

23:13

time? Well,

23:17

it had some advantages. One,

23:19

it was probably the first mover

23:21

in this space that was not

23:23

connected to a giant company.

23:26

So Google, for example, already had

23:28

AI efforts underway. Facebook also had

23:30

AI efforts underway. This was

23:32

really the first serious

23:34

AI company. I

23:36

think that because it was a

23:39

startup and because it was a nonprofit,

23:41

it attracted talent that would be less

23:44

inclined to go work for a Google

23:46

or Facebook. Right. There are recruiting advantages

23:48

that come with telling people we do

23:50

not have a profit motive. We are

23:53

a research lab and our intentions are

23:55

good. And so they attracted a lot

23:57

of really smart people. They also had

24:00

the imprimatur of Elon Musk, who

24:02

was one of the co-founders, who

24:04

was a much more reliable operator

24:06

in 2015 than he is today.

24:10

And that served as a powerful recruiting

24:12

signal. And so all those people

24:14

get together and they get to work and they started

24:16

working on a bunch of things and not everything worked

24:18

out. They had a hardware division at one point, like

24:21

they were interested in robotics and it just kind of

24:23

fizzled. But then they started working on

24:25

this GPT thing and things got better for them.

24:29

According to reporting from Semaphore, in early

24:31

2018, Elon Musk makes a bid to

24:33

become OpenAI's president. The board shoots him

24:35

down. Soon after, Elon

24:37

quits OpenAI, publicly citing a conflict

24:39

of interest with Tesla. Semaphore

24:42

also reported that Musk promised to invest

24:44

$1 billion in OpenAI. When

24:47

he left, he said he would keep the promise. He

24:50

didn't. So OpenAI was short

24:52

on money, which was a problem because the

24:54

next year, 2019, the company announced

24:57

their expensive new project, GPT-2,

25:00

a much more primitive ancestor to the

25:02

ChatGPT you've likely used. Training

25:04

even this model was hugely expensive

25:07

and OpenAI realized it would not be able

25:09

to get by on donations alone. One

25:12

thing that we've learned over the past

25:14

year, as all of us have been

25:16

educating ourselves about large language models like

25:18

chat GPT, is that they're incredibly expensive

25:20

to run. I talked to a former

25:22

employee of OpenAI this weekend who described

25:25

the company to me as a money incinerator. They

25:27

don't even make podcasts? They don't even make podcasts.

25:29

That's how expensive they are. They're losing money without

25:31

even making podcasts, PJ. Can

25:33

you imagine? If

25:38

you've ever used chat GPT, you've

25:40

cost OpenAI money. Some

25:44

estimates are around 30 cents for you asking

25:46

chat GPT a question. It

25:48

has 100 million users a week. You can imagine

25:50

how much money they're losing on this thing. Is

25:52

that 30 cents computing power? It's

25:54

computing power. Yes. I

25:57

believe the technical term is an inference cost.

26:00

you type in your question to chat

26:02

GPT and then it sort of

26:04

has this large language model and

26:06

it generates a

26:08

sort of series of predictions as to what

26:10

the best answer to your question will be

26:13

and the cost of the electricity and the

26:15

computing power is about 30 cents. Got

26:17

it. So the technology is super expensive to run.
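For a sense of scale, here is a rough back-of-envelope sketch. The roughly 30 cents of compute per question and the 100 million weekly users are the figures from the conversation above; the number of questions each user asks per week is an assumption made up purely for illustration.

```python
# Back-of-envelope inference-cost estimate. The per-question cost and
# the weekly user count are the rough figures cited in the episode;
# queries per user per week is an illustrative assumption only.

cost_per_query_usd = 0.30        # ~30 cents of compute per question
weekly_users = 100_000_000       # ~100 million users per week
queries_per_user_per_week = 1    # assumption for illustration

weekly_cost = cost_per_query_usd * weekly_users * queries_per_user_per_week
print(f"Estimated weekly compute bill: ${weekly_cost:,.0f}")  # $30,000,000
```

Even at a single question per user per week, these rough figures put the compute bill at tens of millions of dollars a week, and it scales linearly with every additional question, which is why the "money incinerator" description is easy to believe.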

26:19

So even in the early days, they're just burning

26:22

money really quickly. Yes. And

26:24

so they have a problem, which

26:26

is that there is no billionaire,

26:29

there's no philanthropy, there's no foundation, there's no government

26:31

that is going to give them $100 to $200

26:33

billion to try to

26:36

get their project across the finish line. So

26:38

they turn back to the model that they

26:41

had rejected, the for-profit model. But

26:43

instead of just converting a nonprofit

26:45

into a for-profit, which is incredibly

26:48

difficult to do, they take a

26:50

more unusual approach, which

26:52

is that the nonprofit will

26:54

create a for-profit entity, the

26:57

nonprofit board will oversee the

26:59

for-profit entity and the

27:01

for-profit entity will be able to raise

27:03

all those billions of dollars by offering

27:05

investors the usual deal. You

27:08

give us some amount of money

27:10

in exchange for some percentage

27:12

of the company or for our

27:14

revenues or our profits and

27:16

that will enable us to get further

27:18

faster. March 2019,

27:21

OpenAI publishes a blog post announcing

27:23

the change. The

27:25

nonprofit will now have a for-profit company attached

27:27

to it and the CEO will

27:29

be Sam Altman. He will not,

27:31

however, take any kind of ownership stake, an

27:34

almost unheard of move for a Silicon Valley

27:36

founder. The blog post

27:38

lists the names of the nonprofit board members who

27:40

will keep the for-profit in check. Sam

27:42

Altman is on the OpenAI board along

27:45

with some other executives like OpenAI's chief

27:47

scientist Ilya Sutskever. There are some

27:49

Silicon Valley bigwigs, LinkedIn's Reid Hoffman,

27:52

Quora's Adam D'Angelo, but also importantly

27:55

there are some effective altruists like

27:57

Holden Karnofsky and scientist engineer Tasha

27:59

McCauley. If the idea

28:01

is that this board is going to be like part

28:04

of the idea is they are a hedge against AI going in

28:06

the wrong direction and they're going to try to get really

28:09

like skeptical, smart

28:11

people, like how serious are these

28:13

people as artificial intelligence thinkers? I

28:16

mean, I think they do have credibility. You

28:18

know, I don't know who in that year

28:20

would have been considered the very best thinkers

28:23

on that subject. But I would note that

28:25

in the years since Reid Hoffman left the

28:27

board to start his own AI journey with

28:29

a co-founder, it's called Inflection AI, they've been

28:32

doing good work. Holden

28:34

Karnofsky was the CEO of

28:36

Open Philanthropy, which is one

28:38

of the effective altruist organizations.

28:41

They are a funder of funders, so they

28:43

like give money to scientists to research things.

28:45

But Holden was essentially part of the group

28:47

that were some of the very first people

28:49

to worry about existential AI risk. At

28:52

a time when absolutely no one was

28:54

paying attention to this, Holden's organization was

28:56

giving researchers money to study the potential

28:59

implications of AI. So

29:01

there were people on that board who

29:03

were thinking a lot about these issues

29:05

before most other people were. And,

29:08

you know, we can debate whether they had

29:10

enough credibility, but like, certainly they

29:12

were not, you know, just

29:14

a bunch of dumb rubber stamps for Sam Altman. At

29:19

this moment in 2019, OpenAI,

29:21

the nonprofit company controlling a for-profit

29:23

subsidiary, was a little unusual, but

29:26

that unusual state of affairs would only

29:28

become truly absurd a few years later.

29:32

November 2022, OpenAI releases,

29:34

without much fanfare, without a very attractive

29:36

name, a product called

29:38

ChatGPT. Within five

29:41

days, ChatGPT has a million users.

29:43

Two months after launch, it has 100

29:46

million monthly active users. At

29:49

that point in time, it's the fastest

29:51

growing consumer app in history. There's

29:53

a new bot in town and it's

29:55

taking the world by storm. ChatGPT was launched

29:57

by OpenAI on the 30th of November.

30:00

gaining popularity for its ability to

30:02

craft emails, write research papers, and

30:04

answer almost any question in a

30:06

matter of seconds. The CEO, Sam Altman,

30:08

is just 37. OpenAI

30:11

becomes the leading AI company. Sam Altman

30:13

becomes not just the face of OpenAI,

30:16

but for many people, the face

30:18

of AI itself. He's the rock

30:20

and roll star of artificial intelligence.

30:22

He's raised billions of dollars from

30:25

Microsoft, and his early backers have

30:27

included Elon Musk and Reid Hoffman.

30:29

As ChatGPT takes over the internet, Sam goes

30:32

on a world tour. Israel, Jordan,

30:34

the UAE, India, South Korea, Japan,

30:36

Singapore, Indonesia, and the UK. You

30:39

must be rushed off your feet here in

30:41

the middle of an enormous world tour. How

30:44

are you doing? It's been super great,

30:46

and I wasn't sure how much fun I

30:48

was gonna have. By May of this year,

30:50

AI has become important enough, fast enough, that

30:52

Sam, AI's chief diplomat,

30:55

is testifying in front of Congress. Mr.

30:57

Altman, we're gonna begin with you, if

30:59

that's okay. Thank you. Thank

31:02

you, Chairman Blumenthal, Ranking Member Hawley,

31:04

members of the Judiciary Committee. He's

31:07

dressed in a navy suit, but now with

31:09

normal gray shoes. His eyes still look

31:11

worried. They're registering congressional levels

31:13

of worry. OpenAI was founded on

31:16

the belief that artificial intelligence has

31:18

the potential to improve nearly

31:20

every aspect of our lives, but

31:22

also that it creates serious risks we have to

31:24

work together to manage. We're

31:26

here because people love this technology. We

31:29

think it can be a printing press moment. We have to

31:31

work together to make it so. OpenAI

31:33

is an unusual company, and we set it

31:35

up that way because AI is an unusual

31:37

technology. We are governed by

31:39

a nonprofit, and our activities are driven by our

31:41

mission and our charter, which commit

31:43

us to working to ensure the broad distribution of

31:45

the benefits of AI and to maximize

31:48

the safety of AI systems. Sam

31:50

is telling these congressmen, his likely

31:52

future regulators, what every

31:54

tech CEO has told everyone since the

31:57

invention of fire: don't

31:59

worry, we have this under control. But

32:01

what is new here, what you would not

32:03

see with someone like Mark Zuckerberg in Facebook's

32:06

early years, is that Sam's also saying

32:08

he knows that the downside risk of

32:11

the thing he's creating is enormous. Casey

32:14

Newton says that this tension, that AI's

32:16

inventors are also the people who worry

32:18

about its power, that's part

32:20

of what makes this story so unusual. Usually

32:28

the way that it works in Silicon Valley

32:31

is that you have the rah-rah technologist going

32:33

full steam ahead and you know, sort of

32:35

ignoring all the safety warnings. And then you

32:37

have the journalists and the academics and the

32:40

regulator types who are like, hey, slow down,

32:42

that could be bad, think of the implications.

32:44

That's sort of the story we're used to.

32:46

That's the Uber story. That's the Theranos story.

32:49

What's interesting with AI is

32:51

that some of the people who are the

32:53

most worried about it also identify as techno

32:55

optimists. Okay? Like, they're the sort of people

32:57

that are usually like, hey, technology is cool. Let's build

32:59

more of that. Why was that the

33:01

case? Well, they've just looked at the events

33:03

of the past couple of years. They use

33:06

GPT-3 and then they use

33:08

GPT-3.5 and then they use

33:10

GPT-4. Now they're using GPT-4

33:13

turbo. We already basically

33:15

know how to train a next generation

33:17

large language model. There are some research

33:19

questions that need to be solved, but

33:21

we can basically see our way there,

33:23

right? And what happens when

33:27

this thing gets another 80% better, 100% better?

33:30

What happens when the AI can

33:32

start improving itself or can start

33:34

doing its own research about AI,

33:36

right? At that point, this stuff

33:38

starts to advance much, much,

33:40

much, much faster. If

33:42

we can see on the horizon the

33:44

day that AI might teach itself, then

33:47

the question of who's in charge of it right

33:49

now feels pretty important. And

33:51

remember, OpenAI itself had foreseen this

33:54

problem. That's the very reason it had

33:56

created the nonprofit board as a

33:58

safety measure. And the problem for

34:00

Sam Altman in 2023 was that

34:02

while ChatGPT had been taking over the world,

34:05

the composition of his nonprofit board

34:07

had changed. Some of his

34:09

natural allies, business minded folks like Reid

34:11

Hoffman, had left the board, which

34:14

had tipped the balance of power over

34:16

to the academics towards the people associated

34:18

with the effective altruism movement. And

34:21

that's what set in motion the coup, the

34:23

very recent attempt by the board to take

34:25

out Sam. When news

34:28

of Sam's firing first broke, the reasonable

34:30

guess was that he tried to push

34:32

AI forward too fast in a way

34:34

that had alarmed the board's safety minded

34:36

people. In the

34:38

aftermath of all this, it's pretty clear that that's

34:40

not what happened. According to

34:42

the Wall Street Journal, here's how things broke down.

34:47

The departure of some of Sam's allies had left an

34:49

imbalance of power. And afterwards, the

34:51

two sides began to feud. One

34:54

of the effective altruists, an academic

34:56

named Helen Toner, co-authored

34:59

a paper about AI safety, where

35:01

she criticized OpenAI, the company whose

35:03

board she was sitting on. A

35:05

normal enough thing to do in the

35:07

spirit of academia, but an arguably passive

35:10

aggressive violation of the spirit of corporate

35:12

America. Sam Altman

35:14

confronted her about it. Then,

35:17

sometime after that, some of Altman's

35:19

allies got on slack and started

35:21

complaining about how these effective altruist

35:23

safety people were making the company

35:25

look bad in public. The

35:27

company should be more independent of them, they said,

35:31

on slack. The problem

35:33

is that on that slack channel was

35:35

Ilya Sutskever, a member of the board

35:38

and someone who is both a sometime Altman

35:40

ally, but also someone

35:42

who is deeply concerned with

35:44

AI safety. How many companies

35:46

have been destroyed by the

35:49

actually already nuclear technology that

35:51

is slack? Anyway,

35:53

two days later, it's Sutskever who

35:55

delivers the killing blow. Sam

35:58

is in Vegas that Friday at a Formula

36:00

One race, trying to raise more billions for

36:02

OpenAI. He's invited

36:04

at noon to a Google Meet, where

36:06

Sutskever and the other three board members

36:08

tell Altman he's been fired. Afterwards,

36:11

like any laid off tech worker,

36:13

he finds his laptop has been

36:15

remotely deactivated. Over

36:20

the weekend, as the company's employees and

36:22

executives get angrier and angrier about the

36:25

coup, they confront Helen Toner, the academic

36:27

who wrote the spicy paper. They

36:29

tell her that the board's actions might destroy

36:32

the company. According to the

36:34

Wall Street Journal, Helen Toner

36:36

responds, quote, "That would

36:38

actually be consistent with the mission." In

36:41

other words, she's saying, "The board should kill

36:43

the company if the board has decided it's

36:45

the right thing to do." Casey

36:48

told me that in the days after, a

36:50

public consensus quickly congealed against these effective

36:52

altruists, who had knowingly damaged the company, but

36:54

then had been unable to provide evidence that

36:57

they'd done it for any good reason. Part

37:00

of the reason that the EAs have a really bad

37:02

reputation right now is that if

37:05

you have not thought that much about AI, and

37:08

it's very hard for you to imagine that a

37:10

killer AI is anything other than a fantasy from

37:12

the Terminator movies, and you find

37:14

out that out there in San

37:17

Francisco, which is already a kooky town,

37:19

there's

37:21

a bunch of people working for some rich

37:23

person's philanthropy, and all they do is they

37:26

sit around all day and

37:28

they think about the worst case scenarios that could ever come

37:30

out of computers, you would think, it seems like

37:33

kind of weird and culty to me. It's

37:35

like, these are like the goths of Silicon

37:37

Valley. There's something almost religious

37:39

about their belief that this AI god

37:41

is about to come out of the

37:43

machine. So these people kind of get

37:46

dismissed. And so when the OpenAI Sam

37:48

Altman firing goes down, there's a lot

37:51

of discussion of like, well, here go

37:53

the creepy AI kids again, the goths

37:55

of Silicon Valley and their religious belief

37:58

in killer AI, they've all conspired

38:00

to destroy what was a really

38:02

great business. And that becomes, I

38:04

would say, maybe the first big

38:06

narrative to emerge in the aftermath

38:08

of Sam's firing. We all

38:10

know what happens next. On November

38:13

21st, five days after the shocking

38:15

firing of Sam Altman, he

38:17

gets his job back. He is once

38:19

again CEO of OpenAI. And

38:21

while he won't get to keep his seat on the board, he

38:23

seems to have defeated the goths of

38:26

Silicon Valley. There is

38:28

a big party at OpenAI's headquarters.

38:30

Someone pulls the fire alarm because

38:32

there was a fog machine going.

38:35

But by all accounts, everyone had a great time. They

38:37

stay up very late. And what

38:39

about the board? These people that tried and failed

38:41

to do a coup. So

38:44

three of the four members of the

38:46

board are leaving it. That's Tasha McCauley,

38:48

Helen Toner, and Ilya Sutskever. A

38:50

fourth member, one of the people who had voted

38:53

to fire Sam, Adam D'Angelo, who's the CEO of

38:55

Quora. He is staying on the board. And

38:57

then they have brought in Larry Summers, who

38:59

is a well-known former U.S. Treasury Secretary, and

39:02

Bret Taylor, who is the former chair

39:04

of the Twitter board, the former co-CEO

39:07

of Salesforce. So the three

39:09

of them are going to appoint a new

39:11

board of up to nine members. And they're

39:13

also going to conduct an investigation into what

39:15

happened. And my hope is that in that

39:17

investigation, we will get some more details finally

39:20

on why the board actually fired Sam Altman.

39:28

After the break, we get

39:30

to the question at hand. Who

39:33

should actually be in charge of AI? This

39:57

episode is brought to you by Shopify.

40:00

Selling a little or a

40:02

lot? Do

40:04

your thing however you cha-ching with Shopify,

40:06

the global commerce platform that helps you

40:09

sell at every stage of your business.

40:12

Shopify helps you turn browsers into buyers

40:14

with the internet's best converting checkout. 36%

40:17

better on average compared to other leading

40:19

commerce platforms. Get a

40:22

$1 per month trial period at

40:24

shopify.com/offer23. This episode is also brought to you by Klaviyo, the

40:30

platform that powers smarter digital relationships. With

40:32

Klaviyo, you can activate all your customer

40:34

data in real time. Connect seamlessly with

40:36

your customers across all channels. Guide

40:39

your marketing strategy with AI-powered insights,

40:41

recommendations, and automated assistance. Deliver experiences

40:43

that feel individually designed at scale

40:45

and grow your business faster. Power

40:48

smarter digital relationships with Klaviyo. Learn

40:51

more at klaviyo.com. That's k-l-a-v-i-y-o.com.

40:53


41:01

What is the future of

41:04

digital communication? Something

41:08

like once a week in America,

41:10

some institution implodes. And

41:12

it pretty much always goes the same way. A

41:14

confusing private conflict breaks out onto the

41:16

internet. The combatants plead their versions of

41:18

the story to the public. And

41:21

we, reporters, gawkers, people online,

41:24

render a quick, noisy verdict.

41:27

The desire to participate in all this is human

41:29

nature. I am doing it right now.

41:32

You are doing it with me. Neither

41:34

of us chose this system, but we're stuck with

41:36

it. Institutions right

41:38

now are fragile. The internet is powerful. And

41:41

we're all addicted to being entertained. In

41:44

my wiser moments, though, what I try

41:46

to remember is that whoever is actually at fault in

41:48

any of these fights of the week, the

41:51

truth is institutions are supposed

41:53

to have conflict. And

41:55

we all, put together, will disagree. A

41:58

healthy institution is one capable of

42:00

mediating those disagreements. When

42:03

we, the public, watch a private fight break

42:05

out online, it's hard to ever

42:07

really know for sure who was actually right

42:09

or wrong. What we can

42:11

know is that we are watching as the

42:13

institution itself breaks. OpenAI

42:16

was set up from the beginning to be

42:18

an unusual kind of company with an unusual

42:21

governance structure. As unusual

42:23

as it was, I'm not convinced

42:25

from the available evidence that the structure

42:27

was the problem. The

42:30

faction of revolutionaries who took over OpenAI,

42:32

who governed it for a little over

42:34

a weekend, it just seems like

42:36

they didn't know how to be in charge. They

42:39

couldn't articulate what they thought was wrong. They

42:41

couldn't articulate why their revolution would fix it. They

42:44

never even bothered to try to win over the people in the

42:46

building with their mission. They

42:48

thought they saw someone acting like a

42:50

king, and so they acted imperially themselves.

42:54

In the aftermath, what I found myself wondering this

42:56

week was this. This

42:58

new version of OpenAI, could

43:01

it tolerate conflict? Could it

43:03

have, productively, the fights you'd hope would

43:05

take place somewhere as important as this,

43:08

in the rooms we'll never see inside of? Casey

43:11

Newton, who is better at spying into those

43:13

rooms than you or me, he

43:16

says he feels optimistic. I

43:18

think the most important thing about the new board

43:20

is that Adam D'Angelo is on it. This

43:22

is someone who voted to fire Sam Altman

43:24

and who is still there, and who has

43:26

a say on who else comes onto that

43:28

board, who will have a say on who

43:30

gets picked to investigate all of the circumstances.

43:33

To me, that is like, if

43:35

you're somebody who is worried that, oh no, OpenAI is just going

43:37

to go gas to the pedal. If

43:41

you're worried that OpenAI is going to go foot

43:44

to the gas, why can't I figure out this gas to

43:46

the foot pedal? If you're worried

43:48

that OpenAI is going to go gas to

43:50

the foot pedal, don't worry. Because Adam D'Angelo

43:52

is there. That's how I'm feeling about

43:54

it, anyway. Is that how you're feeling about it?

43:56

Are you feeling like, well, I mean, look, let

43:58

me take a step back. I'm taking up too much of your

44:00

podcast, PJ. But let me tell you something. I love when you take

44:03

a step back. Okay, great. Take a step back.

44:05

Okay, great. One of the big narratives

44:07

that came out of this whole drama

44:09

was there was the forces of corporate

44:11

money-making and there were the forces of AI

44:14

safety. And the forces of AI safety kicked

44:16

out Sam Altman and then the forces of

44:18

corporate money-making stepped in to ensure that Sam

44:20

Altman would be put back in his role

44:23

to continue the corporate money-making. And

44:26

it is true that the forces of

44:28

capitalism intervened to restore Sam Altman. That

44:30

part is true. But from

44:33

my own reporting, I truly believe

44:35

that the core conflict was not

44:37

really about AI safety in

44:39

the sense that Sam Altman was behind

44:41

the scenes saying, like, we have to

44:43

go accelerate all these projects while the

44:45

board isn't looking. And that's why he

44:47

got fired. I do not think that

44:50

was what happened. I think the board

44:52

was actually fairly comfortable where things were

44:54

from a safety perspective. I think they

44:56

were just worried about the lying, that

44:58

they say that he was doing. But they have

45:01

not pointed to a single instance of- Perhaps because

45:03

he's such a good liar that you

45:05

can never catch him, but you can sometimes smell

45:07

the sulfurous smell of a lie that

45:09

went undetected and passed by him. They

45:11

do talk about him like a mischievous

45:14

leprechaun or like Rumpelstiltskin

45:16

or something. I

45:18

like, having interviewed Sam, I think, no, that's not my

45:20

impression of him. Maybe it's like a Keyser Söze thing

45:22

where it's like his greatest trick was convincing me that

45:25

he didn't exist. But

45:27

anyways, you were saying that, and this

45:29

fits with my general worldview, which is

45:31

that when institutions explode, it's

45:34

always described as people

45:36

representing one value versus another. And

45:38

sometimes that's true. And often it's actually

45:41

about either things that are more subtle

45:43

or just sort of power. And

45:46

you're saying that from your reporting, your sense

45:48

is not that the board was saying, hey,

45:50

you're steering us into the apocalypse, we have to

45:52

stop you. The board had some hard

45:55

to define problems with his leadership

45:57

style, and they pulled the big... red

46:00

lever that they're really only supposed to pull if he's

46:02

inventing a Death Star. But what you're

46:04

saying is if you were worried about the AI Death

46:06

Star, you don't necessarily have to

46:08

feel like the AI Death Star

46:10

is coming. That's right.

46:12

That's right. There's no reason to

46:14

believe that now that the old board is out of

46:16

the way, OpenAI can just go

46:19

absolutely nuts. I don't think that's what is going to

46:21

happen. And also, by the way, there's going to be

46:23

way more scrutiny on OpenAI as it releases next

46:26

generation models and new features. And

46:28

so I think there's a way in which this was

46:31

very bad for the AI safety community because they were

46:33

made to look like a bunch of Goths who were

46:35

bad at governance. But

46:38

I think it was good in the sense

46:40

that now everyone is talking about AI safety.

46:43

Regulators are very interested in AI

46:45

safety and regulations are being written

46:48

in Europe about AI safety. So

46:50

I actually don't think we have to panic just yet. Got

46:53

it. And then I guess like I've

46:56

began this episode by saying like one way

46:58

that you can think about this as it

47:00

being like a bunch of silly corporate drama.

47:02

And like that is true. And

47:05

at the same time, can I just say I've

47:07

been reading these stories. It's like, oh,

47:09

well, looks like the Silicon Valley tech

47:11

bros have gotten themselves embroiled in a

47:14

little drama. And

47:16

like the only people who can feel that way are

47:18

the people who truly do not care about the future.

47:21

Sorry, you want to convince yourself

47:23

that like there's nothing at stake here

47:25

that like I truly wish my brain

47:27

were as smooth as yours because it

47:29

actually does matter like how people will

47:31

make money in the future. It

47:34

matters if a machine will be able to do

47:36

everyone's job. So count me on the side of

47:38

those who are interested and who do not think

47:40

that this is just like a fun little Netflix

47:42

series for us all. What are you going to

47:45

bet? I'm with you

47:47

and I appreciate you ranting and raving because

47:49

I feel the exact same way. And I'm

47:51

also just like there's this really annoying to

47:53

me thing in technology and it's not just

47:55

civilians. It's like also sometimes journalists who cover

47:58

it where they're like I know. going

48:00

on is the thing that happened last time. So it's

48:02

like, people who are like,

48:04

AI is just NFTs. I'm like, no, those

48:07

are just pieces of technology that

48:09

are described with letters. They're

48:11

very different. Like, the

48:13

future and the present are informed by

48:16

the past, but it's not just a

48:18

movie that you can say you saw

48:20

the end of. Some journalism is just

48:22

people who don't care posturing for other

48:24

people who don't care. And I think

48:26

that is like, we've seen so

48:29

much of that during the OpenAI story. But

48:31

we're right. And we're smart. Good

48:33

for us, we're killing it over here. So

48:36

if we agree, and we do, that

48:38

like, whether or not there were shenanigans

48:40

this week, the shenanigans were inspired

48:43

by a real question. And that real

48:45

question matters. AI is likely

48:48

transformative technology. And the

48:50

idea of how it should be governed is

48:52

really tricky. We're focusing on

48:54

OpenAI because they are the leader in the space. But

48:57

if you zoom out from OpenAI, there's

48:59

a ton of other companies developing artificial

49:01

intelligence. There's a ton of other countries

49:03

where this is happening, you know, it's

49:05

being developed all over the world. And

49:08

I don't know the right answer. If

49:11

this technology has a potential to be as powerful

49:13

as the people developing it fear, I don't

49:16

know what you do around that. And I'm curious what you

49:18

think like if you were king of

49:20

the world, but you were leaving next year,

49:22

and you had to set up a regime

49:24

for artificial intelligence that everyone would actually follow.

49:26

What do you do? Well,

49:29

one, I do think this is a place

49:31

where we want the government to play a

49:33

role, right? Like if a technology is created

49:36

that does have the effect of causing massive

49:39

job losses and introduces novel

49:41

new risks into like, you

49:44

know, bioweapons and cybersecurity

49:46

and all sorts of other

49:48

things. I think you do want the

49:50

government paying attention to that. I think

49:52

that there's a good case to be made that

49:54

the government should like be funding its own large

49:56

language model, it should be doing its own fundamental

49:59

research into how these models

50:01

work and maybe how to build some of

50:03

its own safely because I'm not sure that

50:05

the for-profit model is the

50:07

one that is going to deliver us to the

50:09

best result here. In terms of what

50:11

would government oversight look like,

50:14

some folks I talked to talk about it

50:16

just in terms of capabilities. Like we should identify

50:18

capabilities that's like once a model is able

50:21

to do this then we

50:23

would introduce some brakes on how it

50:25

is distributed, how it is released

50:27

into the world. Maybe there are some safety tests

50:29

we make you go through and in a world

50:32

where the government can and does regulate this, which

50:35

government? Is it the US? Is it the

50:37

UN? Like how do you do it? It

50:39

generally winds up being a mix of Western

50:41

democracies that lay the blueprint. You know

50:44

the US doesn't typically regulate technology very

50:46

much but Europe does and so

50:49

Europe essentially writes the rules for the internet

50:51

that the rest of us live on and

50:53

it basically works out okay because their values

50:55

are basically aligned with American values and so

50:57

like yes we have to click on a

50:59

little cookie pop-up every website that we visit

51:01

because Europe is making us and we hate

51:03

it but it's also fine you know. Yeah

51:05

and so like AI is probably going to

51:07

be the same thing where Europe is going

51:10

to say well AI should basically be like

51:12

this and the US will have hearings

51:14

where they sort of gesture in similar directions and

51:16

then never pass the law and like that will

51:18

be the medium-term future of AI. Where

51:20

I think it will change is if there is

51:22

some sort of incident where like thousands of people

51:24

die and AI plays a direct role, like that

51:27

is when the US will finally get around to

51:29

doing something. Maybe. Maybe.

51:32

It's weird it's

51:34

weird to feel both scared and excited like I'm

51:36

not used to having two feelings at the same

51:38

time. There's this feeling

51:40

that I just call AI vertigo which I mean

51:42

and this is the sort of staring into the

51:45

abyss feeling where you can imagine

51:47

all of the good that could come with

51:49

you know having a universal translator and essentially

51:51

omniscient assistant that is just living in every

51:54

device that you have like that's incredibly powerful

51:56

and good but like yes

51:58

it will also generate both like

52:00

a huge number of new harms and like

52:02

at a huge volume. And

52:05

so your imagination can just run wild and I

52:07

think it's important to let your imagination run wild

52:09

a little bit and it is also possible to

52:11

go too far in that direction and sometimes you

52:13

just need to like you know chill out and

52:15

go play Marvel Snap a little bit. Casey,

52:18

that's exactly what I'm gonna do. Okay, that's like good.

52:24

Thank you. Thank you. Casey

52:29

Newton. You should subscribe to his

52:31

excellent newsletter platformer and to his

52:33

podcast Hard Fork, which he co-hosts with

52:35

Kevin Roose. They've had some wonderful

52:37

episodes on the subject. You should go check them out.

52:45

Also just in general this blow up at

52:47

OpenAI has been an occasion for some wonderful

52:49

tech reporting. People have been all over this

52:52

story explaining a very complicated situation

52:54

very quickly. I'm going to put links to

52:56

some of the pieces that I enjoyed and

52:58

drew from for this story. You can find

53:00

them as always at the newsletter for this

53:02

show. There's a link to that

53:04

newsletter in the show notes. Search

54:19

Engine is a presentation of Odyssey and

54:22

Jigsaw Productions. It was created by

54:24

me, PJ Vogt, and Shruthi Pinnamaneni, and

54:26

is produced by Garrett Graham and Noah John.

54:29

Theme, original composition, and mixing by

54:31

Armin Bazarian. Our executive

54:34

producers are Jenna Weiss-Berman and Leah Reese

54:36

Dennis. Thanks to the team

54:38

at Jigsaw, Alex Gibney, Rich Pirello, and John

54:40

Schmidt. And to the

54:42

team at Odyssey, JD Crowley, Rob

54:45

Mirandi, Craig Cox, Eric Donnelly, Matt

54:47

Casey, Laura Curran, Josephina Francis, Kurt

54:49

Courtney, and Hilary Scheff. Our

54:51

agent is Orin Rosenbaum at UTA. Our

54:54

social media is by the team at Public Opinion

54:56

NYC. Follow and

54:58

listen to Search Engine with PJ Vogt now for

55:01

free on the Odyssey app or wherever you get

55:03

your podcasts. Also, if

55:05

you would like to become a paid subscriber, you

55:07

can head to pjvogt.com. There's a link in the

55:09

show notes. Or another way

55:12

to help the show is to go to Apple

55:14

Podcasts and rate and review us. Highly

55:16

would be nice. All right, that's it for

55:18

this week. Thank you for listening. We'll see you next week.
