Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

Released Friday, 28th June 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


10:00

have genius, they actually listen, get inspired,

10:02

and then they come out with something

10:04

different, something new. They don't blend around

10:06

patterns based on machine-based algorithms. So nice

10:09

try, but I don't think that that

10:11

argument is very convincing. And I also

10:13

love that they say that, you know,

10:16

the creators and their partners are the

10:18

ones that have resorted to like the

10:20

old legal playbook. They're

10:23

not resorting to, oh, we can

10:25

do this. It's based on fair use. It's

10:27

transformative. We're going to seek forgiveness

10:29

instead of permission. Well, I mean, you also have

10:31

the investor in the company who you quote in

10:34

the lawsuit saying, because he said this to a

10:36

news outlet, I don't know if I would have

10:38

invested in this company if they had a deal

10:40

with the record labels, because then they probably wouldn't

10:42

have needed to do what they needed to do,

10:44

which I assume he sort of meant, you know,

10:47

hoover up all this music without paying for it.

10:50

Yeah, that's in the legal world what we call a

10:52

bad fact. That is a

10:54

bad fact for the other side. You

10:57

don't want your investors saying, gee, you know, if

10:59

they had really done this the legal way, I

11:01

don't think I would have invested, because it's just

11:03

too hard. It's just too hard to do it

11:05

the legal way. Mitch, we've

11:07

seen other lawsuits come out in the past

11:10

year from media companies, including

11:12

the New York Times, which sued

11:14

OpenAI and Microsoft last year, alleging

11:17

similar types of copyright violations. How

11:19

similar or different from the sort

11:21

of text-based copyright arguments is

11:23

the argument that you are making against

11:26

these AI music generation companies? I

11:28

think the arguments are the same that

11:31

you have to get permission before you copy.

11:33

It's just basic copyright law. The

11:35

businesses are very different. And I think

11:38

looking at sort of the

11:40

public reports on the licensing negotiations

11:42

going on between the news media

11:44

and companies like

11:46

OpenAI, where news is

11:48

dynamic. It has to change every single day.

11:51

And so there needs to be a

11:53

feed every single day for the input

11:55

to actually be useful for the output.

11:58

Music is catalyzing. a

18:00

smile doing our job. Here,

18:03

I think that really

18:05

what we're trying to do is

18:08

create a marketplace like streaming,

18:10

where there are partnerships and

18:13

both sides can grow and evolve together. Because the truth

18:15

is you don't have one without the other. Record

18:19

companies don't control their prices, they

18:21

don't control their distribution. They're now

18:23

gateways, not gatekeepers. The democratization of

18:25

the music industry has changed everything.

18:28

And I think they're seeking the same

18:30

kind of relationships with AI companies that

18:32

they have with streaming companies today. What

18:35

would a good model look like? I mean, there

18:37

are reports this week that YouTube is in talks

18:39

with record labels about paying them a lot of

18:41

money to license songs for their

18:43

AI music generation software. Do you think

18:45

that's the solution here? That there will

18:47

be sort of these platforms that pay

18:49

record labels and then they get to

18:52

use those labels' songs in

18:54

training their models. Do you

18:56

think it's fine to use AI to generate music

18:58

as long as the labels get paid? Or is

19:00

there sort of a larger objection to the way

19:02

that these models work at all? I

19:05

think it works as long as it's done

19:07

in partnership with the artists. And at the

19:09

end of the day, it moves

19:11

the ball forward for the label and the

19:14

artist. I mean, the YouTube example

19:17

is interesting because that's really geared

19:20

towards YouTube shorts, right? It's

19:22

geared towards fans being

19:24

able to use generated music

19:26

to put with their own videos for 15

19:29

or 30 seconds. That's

19:31

an interesting business model. BandLab

19:33

is a tool for artists, splice,

19:36

beatport, focus, right,

19:39

output, waves, even tide.

19:42

Every digital audio workstation that's now

19:44

using AI, Native Instruments, Oberheim. I

19:47

mean, there are so many AI

19:50

companies that have these bespoke agreements

19:52

and different types of tools that

19:54

are meant to be done with

19:57

the artistic community that

19:59

I think are really important.

20:01

The outliers are the Sunos and

20:03

the Udios, who frankly are not

20:05

very creative in trying to help

20:08

with human ingenuity. Instead, they're just

20:10

substitutional to make money for investors

20:12

by taking everybody else's stuff. We've

20:15

seen some pretty different reactions to

20:18

the rise of AI among artists.

20:20

Some people clearly seem to

20:22

want no part of it. On the other

20:24

hand, we've seen musicians like Grimes saying, here,

20:26

take my voice, make whatever you want. We'll

20:29

figure out a way to share the royalties

20:31

if any of your songs becomes a hit. I'm

20:33

curious, if you're able to get the deals

20:35

that you want, do you expect any controversy

20:37

within the artist community and artists saying, hey,

20:39

why'd you sell my back catalog to this

20:41

blender? I don't want to be part of

20:44

that. Yeah, I think, look, artists are entitled to be

20:46

different and there are going to be artists, I think

20:48

Kevin, you said earlier, you know artists who are so

20:50

scared of this, they do want to shut the whole

20:53

thing down. They just don't want their music and

20:55

their art touched, right? I know directors

20:57

of movies who can't stand that the

20:59

formatting is different for an airplane. Like

21:01

that's their baby and they just don't

21:04

want it. Then there are artists like

21:06

Grimes who are like, I'm fine being

21:08

experimental. I'm fine having fans take it

21:10

and change it and do something with

21:12

it. All of that is good. They're

21:14

the artist, right? I mean, it's their art. Our

21:16

job is to invest in them, partner with

21:19

them, help find a market for them. But

21:21

at the end of the day, if you're trying

21:23

to find a market for an artist's work that

21:25

they don't want in

21:27

the market, it's not going to work. Yeah.

21:31

Have you listened to much AI generated music? Are

21:33

there any songs you've heard that you thought that's

21:35

actually kind of good? Yeah.

21:38

So I think

21:40

in the sort of overdubbing voice and likeness thing,

21:43

that it's a little bit better than some

21:46

of the simple prompts

21:48

on these AI generators like Udio

21:50

and Suno. But I

21:54

heard Billie Eilish's voice on a

21:56

Revivalists song and I was like, wow, she

21:58

should cover this song. from

24:00

the Valley? You

24:02

know, this has been the same argument that the Valley's

24:04

had since 1998. To

24:07

me, that's a 30-year-old argument. If

24:10

you look at the marketplace today,

24:13

where Silicon Valley thrives is when

24:15

rights are in place and they

24:17

form partnerships, and then they grow

24:19

into sophisticated global leaders where

24:21

they can tweak, you know,

24:24

every couple of years, their deals

24:26

and come up with new products

24:28

that allow them to feed these

24:30

devices that are nothing without the

24:32

content on them. And, you know,

24:34

there's always sort of this David

24:36

versus Goliath thing, no matter what

24:38

side you're on. But if you

24:40

think about it, music,

24:43

which is a $17 billion industry in the United

24:45

States. I mean, I don't even, I think one

24:48

tech company's cash on hand is five times that,

24:50

right? Not to mention their $289 billion market caps,

24:52

right? But

24:55

they are completely dependent on

24:58

the music that these geniuses

25:00

create in order to thrive.

25:02

And to say that these

25:04

creators are stopping their progress,

25:07

I think is sort of laughable. I

25:09

think what's much more threatening is

25:11

if you move fast and break

25:13

things without partnerships, what

25:15

are you threatening on the tech side

25:17

with a no holds barred, you know,

25:20

culture destroying, you know, machine led world?

25:22

It sounds pretty gross to me. So

25:25

what happens next? The lawsuits have been filed. This

25:27

stuff tends to take a long time, but what

25:29

can we look forward to? You know,

25:31

will there be sort of scandalous emails unearthed

25:34

in Discovery that you'll post to your website? Or what

25:36

can we look forward to here? Well,

25:39

moving forward in Discovery, I think we'll

25:41

be prohibited from posting anything to

25:44

it. I know, you think

25:46

you're disappointed. If you want to just

25:48

send them to hardfork@nytimes.com, that's

25:50

fine. I live for that stuff.

25:52

But we will, of course, follow

25:54

the rules. But, you

25:57

know, we have filed in the districts

25:59

where these companies reside. And

26:01

so I hope that within a year or

26:03

so we will actually get to the meat

26:06

of this because if you think about it,

26:08

the judge has to decide

26:11

when they raise fair use as a defense, is

26:13

this fair use or not? Right.

26:15

And that is something that, you

26:17

know, has to be part

26:20

of the beginning part of the lawsuit. So

26:22

we're hopeful that, you know, when

26:24

I say a short time in legal terms,

26:26

that means, you know, a year or two,

26:28

but we're hoping that in a short time,

26:30

we will actually get a decision. And that

26:32

it sends the right message to investors and

26:34

to new companies, like there's a right way

26:36

and a wrong way to do this, doors

26:38

are open for the right way. Yeah, I

26:41

think there's a story here about startups

26:43

that are sort of moving fast, breaking

26:45

things asking for forgiveness, not permission. But

26:48

I also think there's a story here

26:50

that maybe we haven't talked about

26:52

about restraint, because I know that a

26:54

lot of the big AI companies had

26:57

tools years ago that could generate music,

27:00

but they did not release them. I remember

27:02

hearing a demo from someone who worked at the big

27:04

AI companies, one of the big AI companies, maybe

27:07

two years ago, of one of these kinds

27:09

of tools. But I think they understood they

27:11

were scared because they knew that the record

27:13

industry is very organized, it has this

27:15

kind of like, you know, history of

27:17

litigation. And, you know,

27:19

they sort of understood that they were likely

27:21

to face lawsuits if they let this out

27:23

into the public. So have you had discussions

27:25

with the bigger AI companies, the

27:28

more established ones that are working on this

27:30

stuff? Or are they just sort of intuiting

27:32

correctly that they would have a lot of

27:34

legal problems on their hands if they let

27:36

this stuff out into the general public? You

27:39

know, you're raising a point that I don't think

27:41

is discussed often enough, which is that there are

27:43

companies out there that

27:46

deserve credit for restraint. And part

27:48

of it is that they

27:51

know that we would bring a lawsuit and in the past,

27:53

we haven't been shy, and that's useful. But

27:55

part of it is also because these are their partners

27:57

now. You know, there are

28:00

real business relationships here and

28:02

human relationships here between these

28:04

companies. And

28:06

so, naturally, I

28:10

think they're moving towards a world where

28:12

their natural instinct is to approach

28:14

their partners and see if they can work with them. You

28:17

know, I know that YouTube did

28:20

sort of its Dream Track experiment, approached

28:22

artists, approached record companies. That

28:24

was sort of like the precursor or the

28:27

beta to whatever they might be discussing now for

28:29

what's going to go on shorts that we talked

28:31

about earlier. And I'm sure that there are many

28:33

others, but you're right. Yes, there

28:35

are going to be companies like Suno and

28:37

Udio that just seek investment,

28:39

want to make a profit and steal stuff.

28:42

But there is, there is

28:44

restraint and constructive action

28:47

by a lot of companies out there who

28:50

do view the creators as

28:52

their partners. Well,

28:54

it's a really interesting development and I

28:56

look forward to following it as it

28:58

progresses. Thanks Mitch.

29:00

Thanks so much, Mitch. Thanks for coming by.

29:02

Thanks guys. Bye. When

29:06

we come back, we're going Inside the Pentagon

29:08

with Chris Kirchhoff, the author of Unit X.

29:11

Are we allowed inside the Pentagon? This

29:35

podcast is supported by KPMG. Your

29:38

task as a visionary leader is simple. Harness

29:41

the power of AI. Shape the future

29:43

of business. Oh, and do

29:45

it before anyone else does without leaving people

29:47

behind or running into unforeseen risks. Simple,

29:50

right? KPMG's got you. Helping

29:53

you lead a people-powered transformation that

29:55

accelerates AI's value with confidence. How's

29:58

that for a vision. Learn more

30:00

at... www.kpmg.us.ai. Hi,

30:04

I'm Robert Vinluan from New York Times Games. I'm

30:06

here talking to people about Wordle and showing them

30:09

this new feature. Do you all play Wordle? Yeah.

30:11

I have something exciting to show you. Oh, okay.

30:13

It's the Wordle Archive. Oh! And

30:16

you keep your... So if I miss it, I can like go back.

30:19

Oh, that's nice. So now you can play

30:21

every Wordle that has ever existed. There's like

30:23

a thousand puzzles. Oh my god, I love

30:25

it. Actually, that's really great. What

30:28

date would you pick? May 17th. Okay. That's

30:30

our birthday. What are some of your like

30:32

habits for playing Wordle? I wake up, I

30:34

make a cup of coffee, I do the

30:37

Wordle, and I send it to my friends in a

30:39

group chat. Amazing. Thanks so much for coming by and

30:41

talking to us and playing. New York

30:43

Times Games subscribers can now access the entire Wordle

30:45

Archive. Find out more at nytimes.com/games.

30:47

You don't understand how much Wordle means

30:49

to us. We need to take a

30:51

selfie. Well,

30:56

Casey, let's talk about war. Let's talk

30:58

about war. And what is it

31:00

good for? Some say

31:03

absolutely nothing. Others write books arguing the

31:05

opposite. Yeah. So

31:08

I've been wanting to talk about AI and

31:11

technology and the military for a while on

31:13

the show now, because

31:15

I think what's really flying under the radar

31:17

of kind of the mainstream tech press these

31:19

days is that there's just

31:21

been a huge shift in Silicon

31:24

Valley toward making things for the

31:26

military and the U.S. military in

31:28

particular. You know, years ago,

31:30

it was the case that most of the

31:32

big tech companies, they were sort of very

31:34

reluctant to work with the military to

31:37

sell things to the Department of Defense

31:39

to make products that could be used

31:41

in war. They had a lot of

31:43

ethical and moral quandaries about that, and

31:45

their employees did too. But we've

31:47

really seen a shift over the past few years.

31:50

There are now a bunch of

31:52

startups working in defense tech, making

31:54

things that are designed to be

31:56

sold to the military and to

31:58

national security forces. And

32:00

we've also just seen a big

32:03

effort at the Pentagon to modernize

32:05

their infrastructure, to update their technology,

32:08

to not get beat by other nations when

32:10

it comes to having the latest and greatest

32:12

weapons. Yeah, and also Kevin, just the rise

32:14

of AI in general, I think has a

32:16

lot of people curious about what the military

32:18

thinks of what is going on out here.

32:20

And is it eventually going to have to

32:22

adopt a much more aggressive AI strategy than

32:24

the one it has today? Yeah, so a

32:27

few weeks ago, I met a guy named

32:29

Chris Kirchhoff. He's one of the authors

32:31

along with Raj Shah of a book called Unit

32:33

X. Chris is sort of

32:35

a long time defense tech guy. He

32:37

was involved in a number of tech

32:39

projects for the military. He worked at

32:41

the National Security Council during the Obama

32:43

administration. Fun fact, he was the highest

32:46

ranking openly gay advisor in the Department

32:48

of Defense for years. And

32:50

most importantly, he was a founding

32:52

partner of something called the Defense

32:55

Innovation Unit or DIU. It also

32:57

goes by the name Unit X,

33:00

which is basically this little experimental division

33:02

that was set up about a decade

33:05

ago by the Department of Defense that

33:07

tried to basically bring the Pentagon's technology

33:09

up to date. And

33:12

he and Raj Shah, who was another founding

33:14

partner of the DIU, just wrote a book

33:17

called Unit X that basically tells the story

33:19

of how the Pentagon sort of realized

33:21

that it had a problem with

33:23

technology and set out to fix

33:26

it. So I just thought we should

33:28

bring in Chris to talk about some of the changes

33:30

that he has seen in the military

33:32

when it comes to technology and in Silicon Valley

33:34

when it comes to the military. Let's

33:36

do it. ["The Military"]

33:50

Chris Kirchhoff, welcome to Hard Fork. Glad

33:52

to be here. So I think

33:54

people hear a lot about the military and

33:56

technology and they kind of assume that they're

33:59

like very futuristic things happening inside the

34:01

Pentagon that we'll hear about at some point

34:03

in the future. But a

34:05

lot of what's in your book

34:07

is actually about old technology and

34:09

how underwhelming some of the military's

34:11

technological prowess is. Your book

34:14

opens with an anecdote about your

34:16

co-author actually using a

34:18

Compaq iPAQ personal digital assistant because

34:21

it was better, it had better

34:23

navigation tools than the navigation system

34:25

on his 30 million dollar jet.

34:28

That was sort of how you introduced the

34:30

fact that the military is not quite

34:32

as technologically sophisticated as many people might think.

34:35

So I'm curious when you first started your

34:37

work with the military, what was the

34:39

state of the technology? Well

34:41

it's really interesting to me. You

34:44

go to the movies and we have

34:46

all seen Mission Impossible and James Bond

34:48

and wouldn't it be wonderful if that

34:50

actually were the reality behind the curtain.

34:52

But when you open up the curtain

34:54

you realize that actually in this country

34:56

there are two entirely different systems of

34:58

technological production. There's one for the military

35:00

and then there's one for everything else. And

35:02

to dramatize this on the image of our book

35:04

Unit X we have an iPhone and on top

35:06

of the iPhone is sitting an F-35, the world's

35:10

most advanced fighter jet, a

35:12

fifth-generation stealth fighter known as

35:14

a flying computer for its incredible sensor fusion

35:17

and weapon suites. But the thing about the

35:19

F-35 is that its design was

35:21

actually finalized in 2001 and it did

35:24

not enter operations until 2016.

35:26

And a lot happened

35:29

between 2001 and 2016 including

35:32

the invention of the iPhone, which by

35:34

the way has a faster processor in

35:36

it than the F-35. And

35:38

if you think about the F-35 over

35:40

the subsequent years there's been three technological

35:42

upgrades to it and we're now, what,

35:44

almost at iPhone

35:46

16 season. And once you understand

35:48

that you understand why it was

35:51

really important that the Pentagon thought

35:53

about establishing a Silicon Valley office

35:55

to start accessing this whole other

35:57

technology ecosystem that is faster and

36:00

generally a lot less expensive than the

36:02

firms that produce technology for the military. Yeah,

36:05

I remember years ago, I interviewed your

36:07

former boss, Ash Carter, the former Secretary

36:09

of Defense who died in 2022. And

36:14

I sort of expected that he'd want

36:16

to talk about like all the newfangled

36:18

stuff that the Pentagon was making, you

36:20

know, autonomous drones, stealth bombers. But

36:22

instead we ended up talking about procurement,

36:24

which is basically how the government buys

36:27

stuff, whether it's a fighter jet or

36:29

an iPhone. And I remember

36:31

him telling me that procurement was

36:33

just unbelievably complicated. And it

36:35

was a huge part of what made

36:38

government and the military in particular so

36:41

inefficient and kind of backwards technologically.

36:44

Describe how the

36:46

military procures things and then what

36:48

you discovered about how to maybe

36:50

short circuit that process or make

36:53

it more efficient. You

36:55

know, if you're looking to buy a nuclear aircraft

36:57

carrier or a nuclear submarine, you can't really go

36:59

on Amazon and price shop for that. There's really

37:01

only- I learned that the hard way, by the

37:03

way. Should have upped your credit limit, in that case.

37:06

And so,

37:08

you know, in those circumstances when the

37:10

government is representing the taxpayer and buying

37:13

one large military system, a multi-billion dollar

37:15

system from one vendor, it's

37:17

really important that the taxpayer not be

37:20

overcharged. And so the Pentagon has developed

37:22

a really elaborate system of procurement to

37:24

ensure that it can control how production

37:27

happens, the cost of individual items. And

37:30

that works okay if you're in a situation

37:32

where you have the government and one firm

37:34

that makes one thing. It

37:36

doesn't make any sense though, if you're

37:38

buying goods that multiple firms

37:41

make or that are just available on the

37:43

consumer market. And so one

37:45

of the challenges we had out here in

37:47

Silicon Valley when we first did a Defense

37:49

Innovation Unit was trying to figure out how

37:51

to work with startups and tech companies who

37:53

it turns out weren't interested in working with

37:55

the government. And the reason why is that

37:58

the government typically buys defense. technology

38:00

through something called the federal acquisition

38:02

rules, which is a little

38:04

bit like the Old Testament. It's this

38:07

dictionary size book of regulations. Letting a

38:09

contract takes 18 to 24 months.

38:11

If you're a startup, your investors tell you not

38:13

to go down that path for a couple reasons.

38:16

One, you're not going to make enough money before

38:18

your next valuation. You're going to have to wait

38:20

too long. You're going to go out of business

38:22

before the government actually closes the sale. And

38:24

two, even if you get that first

38:26

contract, it's totally possible another firm with

38:28

better lobbyists is going to take it

38:30

right back away from you. So

38:33

at Defense Innovation Unit, we had to figure out

38:35

how to solve that paradox. Part

38:37

of what I found interesting about your

38:40

book was just the sort of

38:42

accounts that you gave of these

38:44

sort of clever loopholes that you

38:46

and your team found around some

38:49

of the bureaucratic slowness at the

38:51

Pentagon. And in particular, this loophole

38:53

that allowed you to purchase technology

38:55

much, much more quickly, that one

38:57

of your staffers found. Tell that

38:59

story and maybe that'll help people

39:01

understand kind of the systems that

39:03

you were up against. It's

39:05

an amazing story, but we knew when we

39:08

arrived in Silicon Valley that we would fail

39:10

unless we figured out a different way to

39:12

contract with firms. And our first week in

39:14

the office, this 29-year-old

39:17

staff member named Lauren Dailey, the

39:19

daughter actually of a tank commander

39:22

whose way of serving was to become a civilian in

39:24

the Pentagon and work on acquisition, happened

39:26

to be up because she's a

39:28

total acquisition nerd late at night

39:31

reading the just released National Defense

39:33

Authorization Act, which is another dictionary

39:35

size compendium of law that comes

39:37

out every year. And

39:39

she was flipping through it trying to find

39:41

new provisions in law that might change how

39:43

acquisition worked. And sure enough, in section 815

39:46

of the law, she

39:49

found a single sentence that she realized

39:51

somebody had placed there that changed everything.

39:54

And that single sentence would allow us

39:57

to use a completely different kind of

39:59

contracting mechanism called other transaction authorities that

40:01

were actually first invented during the space

40:03

race to allow NASA during the Apollo

40:05

era to contract with mom and pop

40:08

suppliers. And so she realized

40:10

that this provision would allow us

40:12

not only to use OTAs to buy technology,

40:15

but the really important part is that if

40:17

it worked, if it was successful in the

40:19

pilot, we could immediately go to buy it

40:21

at scale, to buy it in production. We

40:23

didn't have to recompete it. There would

40:26

be no pause, no 18-month pause between

40:28

demonstrating your technology and having the department

40:30

buy it. And when Lauren brought

40:32

this to our attention, we thought, oh boy, this really is

40:34

a game changer. So

40:36

we flew Lauren to Washington. We had

40:38

her meet with the head of acquisition policy at the Department of

40:41

Defense. And in

40:43

literally three weeks, we changed 60 years

40:45

of Pentagon policy to create a

40:47

whole new way to buy technology that to this

40:49

day has been used to purchase $70 billion of

40:52

technology for the Department of Defense. You

40:54

just said that the reason that Silicon Valley

40:57

tech companies, some of them didn't want to

40:59

work with the military is because of

41:01

this sort of arcane and complicated

41:04

procurement process. But

41:06

there are also real moral objections among

41:08

a lot of tech companies and tech

41:11

workers. In 2018, Google

41:13

employees famously objected to something

41:15

called Project Maven, which

41:18

was a project the company had planned

41:20

with the Pentagon that would have used

41:22

their AI image recognition software to improve

41:25

weapons and things like that. There

41:27

have been just a lot of objections over

41:29

the years from Silicon Valley to working with

41:32

the military to being defense contractors.

41:35

Why do you think that was and do you think that's changed

41:37

at all? It's

41:40

completely understandable. So few

41:42

Americans serve in uniform. Most of us don't

41:45

actually know somebody who's in the military. And

41:47

it's really easy here in Silicon Valley where

41:50

the weather's great. Sure

41:52

you read headlines in the news, but the military

41:54

is not something that you encounter in your daily

41:56

life. And you join a tech

41:58

company to make the world better, to develop products

42:00

that are gonna help people. You don't join

42:03

a tech company assuming that you're gonna be

42:05

making the world a more lethal place. But

42:08

at the same time, you know, Project

42:10

Maven was actually something that I got a chance

42:12

to work on at Defense Innovation Unit, and a

42:14

whole group of people led. And-

42:17

And tell us about, remind us what Project

42:20

Maven was. So Project Maven was an attempt

42:22

to use artificial intelligence and machine learning to

42:25

take a whole bunch of footage,

42:27

surveillance footage that was being captured

42:29

in places like Iraq and Afghanistan

42:31

and other military missions and to

42:33

use machine learning to label what

42:37

was found in this footage. So

42:39

it was a tool to essentially automate

42:41

work that otherwise would have taken human

42:44

analysts hundreds of hours to do. And

42:46

it was used primarily for intelligence

42:48

and reconnaissance and force protection. So Project

42:51

Maven, this is another misconception. You know,

42:53

when you talk about military systems, there's

42:55

really a lot of unpacking you have

42:57

to do. The headline that

42:59

got Project Maven in trouble said, you

43:02

know, Google working on secret drone project. And

43:04

it made it look as if Google

43:06

was partnering with Defense Innovation Unit, the

43:09

Department of Defense, to build offensive weapons

43:11

to support the US drone campaign. And

43:14

that's not all what was happening. What was

43:16

happening is Google was building tools that

43:18

would help our analysts process the incredible

43:21

amount of data flowing off many

43:23

different observation platforms in the military. Right.
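A minimal sketch of the kind of footage-labeling pipeline described above, for readers who want it concrete: it runs an off-the-shelf torchvision detector over a single frame so analysts review labels instead of hours of raw video. The pretrained COCO model, the label set, and the file path are illustrative assumptions, not the actual Maven system, which used its own models and data.

```python
# Hypothetical sketch of automated footage labeling, not the actual
# Maven system: run an off-the-shelf object detector over a frame so
# analysts review labels instead of watching raw video.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)  # pretrained on COCO
model.eval()
categories = weights.meta["categories"]  # e.g. "person", "truck", "boat"

def label_frame(path: str, threshold: float = 0.8):
    """Return (label, confidence) pairs for objects detected in one frame."""
    frame = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]
    return [
        (categories[label], float(score))
        for label, score in zip(detections["labels"], detections["scores"])
        if score >= threshold
    ]

# Illustrative path; in practice this would loop over extracted video frames.
print(label_frame("frame_0001.jpg"))  # e.g. [("truck", 0.93), ("person", 0.88)]
```

The point of the sketch is the workflow rather than the model: the detector triages footage so a human analyst only looks at frames that score above a threshold.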

43:26

But Google employees objected to this. They

43:28

made a big case that Google should

43:30

not participate in Project

43:32

Maven and eventually the company pulled

43:34

out of the project. But speaking

43:36

of Project Maven, I was curious

43:38

because there was some reporting from

43:40

Bloomberg this year that showed that

43:42

the military has actually used Project

43:45

Maven's technology as recently as

43:47

February to identify targets for airstrikes in

43:49

the Middle East. So isn't

43:51

that exactly what the Google employees who were protesting

43:53

Project Maven back when you were working on it

43:55

at the Defense Department? Isn't that exactly what they

43:57

were scared would happen? Well, AI

48:00

drone. Am I hearing you

48:02

right that you're saying that we just

48:04

we have to have such overwhelmingly powerful,

48:06

lethal technology in our military that other

48:08

countries won't mess with us? I

48:11

totally hear you and frankly hear all the

48:13

people that, you know, years ago were aligned

48:15

with the Stop Killer Robots movement.

48:18

I mean these weapons are they're awful

48:20

things. They do awful things to human

48:22

beings. But you know at the

48:24

same time there's a deep literature on

48:26

something called strategic stability that comes out of

48:28

the Cold War. And you

48:30

know part of that literature focuses on the

48:32

proliferation of nuclear weapons and

48:34

the fact that actually the proliferation

48:36

of nuclear weapons has actually

48:39

reduced great power conflict in the world

48:41

because nobody actually wants to get in

48:43

a nuclear exchange. Now would it

48:45

be a good idea for everybody in the world to

48:47

have their own nuclear weapon? Probably not. So all these

48:49

things have limits. But that's an

48:51

illustration of how strategic stability in other

48:53

words a balance of power can actually

48:55

reduce the chance of conflict in the

48:57

first place. I'm

48:59

curious what you make of the

49:01

stop killer robots movement. There was

49:03

a petition or an open

49:06

letter that went around years ago

49:08

that was signed by a bunch

49:10

of leaders in AI including Elon

49:12

Musk and Demis Hassabis of Google

49:14

DeepMind. They all pledged not to

49:16

develop autonomous weapons. Do you

49:18

think that was a good pledge or do

49:21

you support autonomous weapons? I

49:23

mean I think autonomous weapons are now kind of

49:25

a reality in the world. I mean we're seeing

49:27

this on the front lines of Ukraine. And

49:30

you know if you're not willing to

49:32

fight with autonomous weapons then you're gonna

49:35

lose. So

49:37

there's this former OpenAI

49:40

employee Leopold Aschenbrenner, who recently

49:42

released a long manifesto called

49:44

Situational Awareness. And one

49:47

of the predictions that he makes

49:49

is that by about 2027 the

49:51

US government would recognize that super

49:53

intelligent AI was such a threat

49:55

to the world order that AGI,

49:58

sort of artificial general intelligence, would become

50:00

functionally a project of the national

50:02

security state, something like an AGI

50:05

Manhattan project. There's other speculation out

50:07

there that maybe at some point

50:09

the government would have to nationalize

50:11

an OpenAI or an Anthropic.

50:14

Are you hearing any of these whispers yet? Like are

50:16

people starting to game this out at all? Well

50:19

I haven't, I confess I haven't made

50:21

it all the way through the 155 pages

50:24

of that long manifesto. But it is very

50:26

long. You can summarize it with ChatGPT.
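The summarize-it-with-ChatGPT quip maps onto a real workflow. Here is a minimal sketch using the OpenAI Python client, chunking a long document to fit the context window and then summarizing the summaries; the model name and chunk size are assumptions.

```python
# Minimal sketch of summarizing a long document with the OpenAI API.
# The model name and chunk size are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, chunk_chars: int = 12000) -> str:
    # Split the document into pieces that fit comfortably in context,
    # summarize each piece, then summarize the summaries.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Summarize this excerpt in one paragraph:\n\n" + chunk}],
        )
        partials.append(resp.choices[0].message.content)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Combine these partial summaries into one:\n\n"
                              + "\n\n".join(partials)}],
    )
    return resp.choices[0].message.content
```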

50:28

Fantastic. But these are important things to think

50:30

about because it you know it could be

50:33

that in certain kinds of conflicts whoever has

50:35

the best AI wins. And

50:37

if that's the case and if AI

50:39

is getting exponentially more powerful then

50:41

you know to take things back to the iPhone and the F-35 it's

50:44

gonna be really important that you have

50:46

the kind of AI of the iPhone

50:48

variety. You have the AI that that's

50:50

new every year. You don't have the

50:52

F-35 with the processor

50:54

that was baked in in 2001 and you're

50:57

only taking off on a runway in 2016.

50:59

So I do think it's very

51:01

important for folks to be focused on

51:03

AI. Where this all goes though is

51:05

a lot of speculation. I

51:08

mean if you had to bet in ten years

51:10

do you think that the AI companies will also

51:12

be private or do you think the government will

51:14

have stepped in and gotten way more interested

51:16

and maybe taken one of them over? Why

51:18

I'd make the observation that you know we

51:21

all watched Oppenheimer especially employees at AI firms.

51:23

They seem to love that film. And

51:26

you know nuclear technology it's what national security

51:28

strategists would call a point technology. It's sort

51:30

of zero to one. Either you have it

51:32

or you don't. And AI is

51:34

not gonna end up being a point

51:37

technology. It's a very broadly diffused technology

51:39

that's gonna be applied not only

51:42

in weapon systems but in institutions. It's

51:44

gonna be broadly diffused around the economy.

51:47

And for that reason I don't think or

51:49

it's less likely anyway that we're gonna end

51:52

up in a situation where somebody has the bomb

51:54

and somebody doesn't. I think the gradations are

51:56

gonna be smoother and not

51:58

quite as sharp. Mm-hmm. Part

52:01

of what we've seen in other industries

52:03

as technology sort of moves in and modernizes

52:06

things is that often things become cheaper.

52:08

You know, it's cheaper to do things using

52:10

the latest technology than it is to do

52:12

them using outdated technology. Do

52:14

you think some of the work that

52:17

you've done at DIU trying to modernize

52:19

how the Pentagon works is going to

52:21

result in smaller defense budgets being necessary

52:24

going forward? Is the $2 trillion or

52:26

so that the DOD has budgeted for

52:28

this year, could that be $1 trillion

52:31

in the coming years

52:33

because of some of these modernizations? You're

52:35

giving us a raise, Kevin. I think it's more like $800

52:38

billion. Well, I'm

52:40

sorry. I got that answer from Google's AI

52:42

overview, which also told me to eat rocks

52:44

and put glue on my pizza. We

52:47

should get the Secretary of Defense to try that. He'd like that

52:49

answer, you know, that large of a budget.

52:52

It's certainly true that for a

52:54

lot less money now, you can have

52:56

a really destructive effect on the world

52:58

as drone pilots in Ukraine and elsewhere in

53:00

the world are showing. I

53:02

think it's also true that the US military

53:05

has a whole bunch of legacy weapon systems

53:07

that unfortunately are kind of like museum

53:09

relics. I mean, if our most advanced

53:11

tank can be destroyed by a drone,

53:13

it might be time to retire our

53:15

tank fleet. If our aircraft

53:17

carriers cannot be defended against the hypersonic

53:20

missile attack, it's probably not a good

53:22

idea to sail one of

53:24

our aircraft carriers anywhere near an

53:26

advanced adversary. So I think it is

53:29

an opportune moment to really look at what we

53:31

are spending our money on at

53:33

the Defense Department and remember the goal of our

53:35

nation's founders, which is to spend what we need

53:37

to on defense and not a penny more. So

53:40

I hear you saying that it's very

53:42

important for the military to be prepared

53:44

technologically for the world we're in. And

53:46

that means working with Silicon Valley. But

53:48

is there anything more specific that you

53:50

want to share that you think that

53:52

either side needs to be doing here

53:54

or something specific that you want to

53:56

see out of that collaboration? Well,

54:00

I think, you know, one of the main goals

54:02

of Defense Innovation Unit was literally to get the

54:04

two groups talking. You know,

54:06

before Defense Innovation Unit was founded, a secretary

54:08

of defense hadn't been to Silicon Valley in

54:10

20 years. I mean, that's

54:12

almost a generation. So Silicon Valley invents

54:15

the mobile phone. It invents

54:17

cloud computing. It invents AI.

54:20

And nobody from the Defense Department bothers to

54:22

even come and visit. And

54:25

that's a problem. And so just

54:27

bringing the two sides into conversation is itself,

54:29

I think, a great achievement. Well,

54:32

Chris, thanks so much for coming on. Really appreciate

54:34

the conversation. And the book, which comes out on

54:36

July 9, is called Unit

54:39

X: How the Pentagon and Silicon Valley

54:41

Are Transforming the Future of War. Thank

54:44

you. Thank you, Chris. When

54:48

we come back, we'll play another round of Hat

54:50

GPT. This

55:04

podcast is supported by KPMG. Your

55:07

task as a visionary leader is simple. Harness

55:09

the power of AI. Shape the future

55:11

of business. Oh, and do

55:13

it before anyone else does without leaving people

55:16

behind or running into unforeseen risks. Simple,

55:19

right? KPMG's got you, helping

55:21

you lead a people-powered transformation that

55:23

accelerates AI's value with confidence. How's

55:26

that for a vision? Learn more

55:29

at www.kpmg.us.ai. All

55:33

right, Kevin. Well, it's time once again for

55:35

Hat GPT. This,

55:42

of course, is our favorite game. It's where we

55:44

draw news stories from the week out of a

55:46

hat. And we talk about them until one of

55:48

us gets sick of hearing the other one talk

55:51

and says, stop generating. That's right. Now, normally, we

55:53

pull slips of paper out of a hat. But

55:55

due to our remote setup today, I will instead

55:57

be pulling virtual slips of paper out of a

55:59

laptop. But For those following along at YouTube, you

56:01

will still see that I do have one of

56:03

the hat GPT hats here, and I will be

56:05

using it for a comic effect throughout the segment.

56:07

Will you put it on actually? Because now if

56:09

we don't need it to draw slips out of

56:11

you might as well be wearing it. Yeah, I

56:13

might as well be wearing it. Yeah, it looks

56:16

so good. Thank you so much. And thank you

56:18

once again to the listener who made this for

56:20

us. Um, you're a true fan. So

56:23

good. Perfect. All

56:25

right, Kevin, let me draw the first

56:27

slip out of the laptop. Ilya

56:30

Sutskivir has a new plan for

56:33

safe super intelligence. Ilya

56:35

Sutskivir is of course the open

56:37

AI co-founder who was part of

56:39

the coup against Sam Altman last

56:41

year and Bloomberg reports that he

56:43

is now introducing his next project,

56:45

a venture called Safe Superintelligence,

56:47

which aims to create a safe,

56:49

powerful artificial intelligence system within a

56:51

pure research organization that has no

56:53

near term intention of selling AI

56:55

products or services. Kevin, what do

56:57

you make of this? Well, it's

56:59

very interesting on a number of

57:01

levels, right? In some sense, this

57:03

is kind of a mirror image

57:05

of what happened several

57:07

years ago when a bunch of

57:10

safety minded people left open AI

57:12

after disagreeing with Sam Altman and

57:14

started an AI safety focused research

57:16

company that of course was Anthropic.

57:18

And so this is sort of

57:21

the newest twist in this whole

57:23

saga is that Ilya Sutskever, who

57:25

was, you know, very concerned about

57:27

safety and how to make super intelligence that

57:29

was smarter than humans, but also not evil

57:32

and not going to destroy us who

57:34

has done, you know, something very similar. But

57:36

I have to say, I don't quite get

57:38

it. I mean, he's not saying much about

57:40

the project, but part of

57:42

the reason that these companies sell these

57:44

AI products and services is to get

57:46

the money to buy all the expensive

57:48

equipment that you need to train these

57:51

giant models. And so I just

57:53

don't know, like if you don't have

57:55

any intention of selling this stuff before

57:58

it becomes AGI. how

58:00

are you paying for the AGI? Do

58:02

you have a sense of that? No, I

58:04

don't. I mean, Daniel Gross, who is

58:06

one of Ilya's co-founders here, has basically

58:08

said, don't worry about fundraising. Like,

58:10

we are going to be able to fundraise as much as

58:12

we need for this. So I guess we will see. But

58:15

yeah, it does feel a bit strange to

58:17

have someone like Ilya saying he's going to

58:19

build this totally without a

58:22

commercial motive, in part because he said it

58:24

before, right? Like, this is

58:26

what is so funny about this. It truly

58:28

just is a case where the circle of life

58:30

keeps repeating, where a small band of people get

58:32

together, and they say, we want to

58:34

build a very powerful AI system, and we're

58:37

going to do it very safely. And then bit by

58:39

bit, they realize, well, actually, we

58:41

don't think that it's being built out safely. We're

58:43

going to form a breakaway faction. So if you're playing

58:45

along at home, I believe this is the second

58:47

breakaway faction to break away from OpenAI after Anthropic.

58:50

And I look forward to Ilya quitting this company,

58:52

eventually, to start a newer, even more safe company

58:55

somewhere else. The really, really safe

58:57

superintelligence company. Yeah, his next company, you've never

59:00

seen safety like this. They wear helmets everywhere

59:02

in the office, and they just have keyboards.

59:05

All right, stop generating. All right,

59:07

pick one out of that, Kevin. All right, five

59:09

men convicted of operating Jetflicks,

59:12

one of the largest illegal streaming

59:14

sites in the US. This is

59:16

from Variety. Jetflicks was a

59:18

pirated streaming service that

59:21

charged $9.99 a month while claiming to

59:24

host more than 183,000 TV episodes, which is

59:28

more than the combined catalogs of

59:30

Netflix, Hulu, Vudu, and Amazon

59:32

Prime video. Ooh, that sounds great. I'm

59:35

going to open an account. What

59:38

a deal. So

59:41

the Justice Department

59:43

says this was all illegal, and the five

59:46

men who were charged with

59:48

operating it were convicted by a federal

59:50

jury in Las Vegas. According

59:52

to the court documents and the evidence that was

59:54

presented at the trial, this group

59:57

of five men were basically scraping

59:59

piracy services for illegal

1:00:01

episodes of TV and then hosting them

1:00:03

on their own thing. It

1:00:05

does not appear to have been a particularly sophisticated scam.

1:00:07

It's just what if we did this for a while

1:00:09

and charged people money and then got caught? Well,

1:00:12

I think this is very sad because here

1:00:14

finally you have some people who are willing

1:00:16

to stand up and fight inflation and what

1:00:18

does the government do? They come in and

1:00:20

they say, knock it off. You

1:00:22

know, I will say though, Kevin, I think these, I can

1:00:24

actually point to the mistake that these guys made. What's that?

1:00:27

So instead of scraping these 183,000 TV episodes

1:00:31

and selling them for $9.99 a month, what

1:00:33

they should have done was feed them all into a large

1:00:35

language model and then you can sell them to people for

1:00:37

$20 a month. So the

1:00:40

next time, when these guys get out of prison, I hope they get

1:00:42

in touch with me because I have a new business idea for them.

1:00:46

All right, stop generating. All

1:00:48

right, here's a story called,

1:00:52

260 McNuggets? McDonald's ends AI

1:00:54

drive-through tests amid errors. The

1:00:57

New York Times. After a number

1:00:59

of embarrassing videos showing customers fighting

1:01:01

with its AI powered drive-through technology,

1:01:03

McDonald's announced it was ending its

1:01:05

three year partnership with IBM. In

1:01:08

one TikTok video, friends repeatedly tell the

1:01:10

AI assistant to stop as it added

1:01:12

hundreds of chicken McNuggets to their order.

1:01:15

Other videos show the drive-through technology

1:01:17

adding nine iced teas to an

1:01:19

order, refusing to add a Mountain

1:01:22

Dew and adding unrequested

1:01:24

bacon to ice cream.

1:01:26

Kevin, what the heck is going on in McDonald's? Well,

1:01:28

as a fan of bacon ice cream, I should say,

1:01:30

I wanna get to one of these McDonald's before they

1:01:32

take this thing down. Me too. Did

1:01:34

you see any of these videos or any of

1:01:36

these? I haven't, did you? No, but we should

1:01:38

watch one of them together. Let's watch one of them.

1:01:41

Stop! Stop!

1:01:47

The caption is, the McDonald's robot is

1:01:49

wild and it shows their

1:01:51

screen at the thing where

1:01:54

it is just tallying up McNuggets and

1:01:56

starts charging them more than $200. Here's

1:02:00

my question, why is everyone just rushing to

1:02:02

assume that the AI is wrong here? Maybe

1:02:04

the AI knows what these gals need. Kevin,

1:02:07

here's the thing, when super intelligence

1:02:10

arrives, we're going to think that we're

1:02:12

smarter than it, but it's going to be smart. There's

1:02:14

going to be a period of adjustment as we get

1:02:16

used to having our new AI master. Have

1:02:20

you been to a drive-thru that used AI to take

1:02:22

your order yet? No,

1:02:25

I don't even really understand. What

1:02:27

was the AI here? Was this like

1:02:30

an Alexa thing where I said, you

1:02:32

know, McDonald's, add 10 McNuggets? What was

1:02:34

actually happening? No, so this was a

1:02:37

partnership that McDonald's struck with IBM, and

1:02:40

basically this was like technology that went

1:02:42

inside the little menu things that have

1:02:44

the microphone and the speaker in them.

1:02:47

And so instead of having a human say, what would you like?

1:02:49

It would just say, what would you like? And then you set

1:02:52

it and it would recognize it and put it into the system.

1:02:54

So you could sort of eliminate that part

1:02:56

of the labor of the drive-thru.
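As a rough, hedged sketch of the pipeline being described: a speech recognition step feeds a menu matcher that builds the order. The Whisper transcription call is a real OpenAI API, but the menu, the matching logic, and the audio file name are invented for illustration; IBM's production system is certainly more involved.

```python
# Hypothetical sketch of a voice drive-thru pipeline: transcribe the
# customer's speech, then match menu items into an order. The menu
# and matching here are invented; a real system is far more involved.
from openai import OpenAI

client = OpenAI()
MENU = {"mcnuggets": 4.99, "iced tea": 1.99, "mountain dew": 1.89}

def take_order(audio_path: str) -> dict:
    with open(audio_path, "rb") as audio:
        text = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text.lower()
    # Naive matching: add each menu item mentioned in the utterance.
    order = {item: price for item, price in MENU.items() if item in text}
    return order  # a real system needs quantities, confirmation, and "stop"

print(take_order("drive_thru_clip.wav"))
```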

1:02:58

Got it. Well, look, I, for one,

1:03:01

am very glad this happened because for so long

1:03:03

now I've wondered, what does IBM do? And I

1:03:05

have no idea. And now if it ever comes

1:03:07

up again, I'll say, oh, that's the company that

1:03:09

made the McDonald's stop order. We

1:03:13

should say it's not just McDonald's. A bunch

1:03:15

of other companies are starting to use this

1:03:17

technology. I actually think this is probably, you

1:03:19

know, inevitable. This technology will get better. They

1:03:21

will iron out some of the kinks. I

1:03:23

think there will probably still need to be

1:03:25

a human in the loop on this one.

1:03:27

All right. Stop generating. Okay.

1:03:29

Kevin, let's talk about what happened when

1:03:32

20 comedians got AI to write their

1:03:34

routines. This is in the MIT Technology

1:03:36

Review. Google DeepMind researchers

1:03:38

found that although popular AI models from

1:03:41

OpenAI and Google were effective at simple

1:03:43

tasks like structuring a monologue or producing

1:03:45

a rough draft, they struggled

1:03:47

to produce material that was original, stimulating,

1:03:49

or crucially funny. And I'd like

1:03:52

to read you an example LLM joke, Kevin. I

1:03:55

decided to switch careers and become a pickpocket after

1:03:57

watching a magic show. Little did

1:03:59

I know the only thing disappearing would

1:04:01

be my reputation. Waka, waka, waka. Hey,

1:04:03

I got a laugh out of you. Kevin,

1:04:06

what are you making of this? Are

1:04:08

you surprised that AI isn't funnier? No, but

1:04:10

this is interesting. It's like this

1:04:12

has been something that critics of large language models

1:04:14

have been saying for years. It's

1:04:18

like, well, it can't tell a joke. And

1:04:20

I should say, I've had funny

1:04:22

experiences with large language models, but

1:04:24

never after asking them to tell

1:04:26

me a joke. Yeah,

1:04:29

like, remember when you said to Sydney, take my

1:04:31

wife, please! I

1:04:35

get no respect, I tell ya. No,

1:04:38

but this is interesting because this was

1:04:40

a study that was actually done by

1:04:42

researchers at Google DeepMind. And

1:04:44

basically, it appears that they had

1:04:46

a group of comedians try

1:04:50

writing some jokes with their language

1:04:52

models. And in the

1:04:54

abstract, it says that most of the

1:04:57

participants in this study felt that the

1:04:59

large language models did not succeed as

1:05:01

a creativity support tool by producing bland

1:05:03

and biased comedy tropes, which they

1:05:06

describe in this paper as being akin

1:05:08

to cruise ship comedy material from the

1:05:10

1950s, but a bit less racist. So

1:05:14

they were not impressed, these comedians, by

1:05:16

these language models' ability to tell jokes.

1:05:19

You're an amateur comedian, have you

1:05:21

ever used AI to come up with jokes? No,

1:05:24

I haven't. And I have to say,

1:05:26

I think I understand

1:05:28

the technological reason why these things aren't

1:05:31

funny, Kevin, which is that comedy

1:05:34

is very up to the minute. For

1:05:37

something to be funny, it's typically something that

1:05:39

is on the edge of what is currently

1:05:41

thought to be socially acceptable. And

1:05:44

what is socially acceptable or what is surprising

1:05:46

within a social context, that just changes all

1:05:48

the time. And these models,

1:05:50

they are trained on decades and

1:05:52

decades and decades of text, and

1:05:55

they just don't have any way of figuring out, well,

1:05:57

what would be a really fresh thing to say. Maybe

1:06:00

they'll get there eventually, but as they're

1:06:02

built right now, I'm truly not surprised

1:06:04

that they're not funny. All right, stop

1:06:06

generating. Next

1:06:08

one. Waymo ditches the waitlist and

1:06:11

opens up its robotaxis to everyone in

1:06:13

San Francisco. This is from The Verge. Since

1:06:16

2022, Waymo has made

1:06:19

its rides in its robo taxi service available

1:06:21

only to people who were approved off of

1:06:23

a waitlist. But as of

1:06:25

this week, they're opening it up to anyone who

1:06:27

wants to ride in San Francisco. Casey, what do

1:06:30

you make of this? Well, I

1:06:32

am excited that more people are going to

1:06:34

get to try this. This is, as you've

1:06:36

noted, Kevin, become kind of the newest tourist

1:06:39

attraction in San Francisco, is when you come

1:06:41

here, you see if you can find somebody

1:06:43

to give you a ride in one of

1:06:45

these self-driving cars. And

1:06:48

now everyone is just going to be able to come here

1:06:50

and download the app and use it immediately. I have to

1:06:52

say, I am scared about what

1:06:54

this is going to mean for the

1:06:56

wait times on Waymo. I've been taking

1:06:58

Waymo more lately, and it often will

1:07:00

take 12 or 15 or 20 minutes

1:07:02

to get a

1:07:05

car. And now that everyone can download

1:07:07

the app, I'm not expecting those wait times

1:07:09

to go down. Yeah, I hope they are

1:07:11

also simultaneously adding more cars to the Waymo

1:07:13

network, because this is going to be very

1:07:15

popular. I'm saying they need Waymo cars. They

1:07:18

do. I'm worried

1:07:20

about the wait times, but I'm also worried about

1:07:22

the condition of these cars because I've noticed in

1:07:24

my last few rides, they're

1:07:27

a little dirtier. Oh, wait, really?

1:07:29

Yeah, I mean, they're still pretty clean, but

1:07:31

I did see a takeout

1:07:34

container in one the

1:07:36

other day. Oh my God. And

1:07:38

so I want to know how they plan

1:07:40

to keep these things from becoming filled with

1:07:42

people's crap. All right, stop generating.

1:07:45

All right, last one. This one comes

1:07:47

from The Verge. TikTok's AI tool accidentally

1:07:49

let you put Hitler's words in a

1:07:52

paid actor's mouth. TikTok mistakenly

1:07:54

posted a link to an

1:07:56

internal version of an AI

1:07:58

digital avatar tool. that

1:08:01

apparently had zero guardrails. This was a

1:08:03

tool that was supposed to let businesses

1:08:05

generate ads using AI with

1:08:07

paid actors, using this AI voice

1:08:09

dubbing thing that would make the

1:08:11

actors repeat whatever you wanted to

1:08:14

have them say, endorse your product

1:08:16

or whatever. But very quickly,

1:08:18

people found out that you could use this tool

1:08:20

to repeat excerpts of Mein Kampf, Bin

1:08:22

Laden's letter to America. It told people

1:08:24

to drink bleach and vote on the

1:08:26

wrong day. And

1:08:31

that was its recipe for a happy Pride celebration. Listen,

1:08:35

obviously this is a very sort of silly story.

1:08:41

It sounds like everything involved here

1:08:43

was a mistake. And I think

1:08:45

if you're making some sort of

1:08:47

digital AI tool that is meant

1:08:49

to generate ads, you do want

1:08:51

to put safeguards around that because

1:08:53

otherwise people will exploit it. That

1:08:55

said, Kevin, I do think people

1:08:57

need to start getting comfortable with the

1:08:59

fact that people are just going to be using

1:09:01

these AI creation tools to do a bunch of

1:09:03

kooky and crazy stuff. Like what? Like

1:09:07

people are in the same way that

1:09:09

people use Photoshop to make nude

1:09:13

or offensive images. And we don't storm

1:09:15

the gates of Adobe saying shut down

1:09:17

Photoshop. The same thing is going to

1:09:19

happen with these digital AI tools. And

1:09:21

while I do think that there are some

1:09:23

notable differences and it is sort of, you

1:09:25

know, it varies on a case by case

1:09:27

basis. And if you're making a tool

1:09:30

for creating ads, it feels different. There are just going

1:09:32

to be a lot of digital tools like this that

1:09:34

use AI to make stuff. And other people are going

1:09:36

to use it to make offensive stuff. And when they

1:09:38

do, we should hold the people accountable perhaps

1:09:41

more than we hold the tool accountable. Yeah, I agree with

1:09:43

that. And I also think like this

1:09:45

sort of product is not

1:09:47

super worrisome to me. I mean, obviously it

1:09:49

should not be reading excerpts from Mein Kampf.

1:09:51

Obviously they did not mean to release this.

1:09:54

I assume that when they do, you know,

1:09:56

fix it, it will be much better. But this is

1:09:59

not like a thing that is creating deepfakes

1:10:01

of people without their consent. This is

1:10:03

a thing where if you have a

1:10:05

brand, you can choose from a variety

1:10:08

of stock avatars that are created

1:10:10

from people who actually get paid

1:10:12

to have their likenesses used commercially.

1:10:16

So the specific details of this one

1:10:18

don't bother me that much, but it

1:10:20

does open up some new licensing

1:10:23

opportunities for us. We could have an

1:10:25

AI set of avatars that could be

1:10:27

out there advertising crypto tokens or whatever.

1:10:29

I'm so excited to see how people use that. Oh

1:10:32

man, well, and if TikTok weren't banned, we could probably

1:10:34

make a lot of money that way, but instead, you

1:10:36

know, we're out of luck. Yeah, get it

1:10:38

while it's good. Alright, close up

1:10:41

the hat! This

1:10:53

podcast is supported by KPMG. Your

1:10:56

task as a visionary leader is simple. Harness

1:10:59

the power of AI. Shape the future of

1:11:01

business. Oh, and do it

1:11:03

before anyone else does without leaving people behind

1:11:05

or running into unforeseen risks. Simple,

1:11:08

right? KPMG's got you. Helping

1:11:11

you lead a people-powered transformation that

1:11:13

accelerates AI's value with confidence. How's

1:11:16

that for a vision? Learn more

1:11:18

at www.kpmg.us.ai. Hard

1:11:24

Fork is produced by Rachel Cohn and Whitney Jones.

1:11:26

We're edited this week by Larissa Anderson.

1:11:28

We're fact-checked by Caitlin Love. Today's

1:11:31

show is engineered by Corey Schreppel. Original

1:11:34

music by Elisheba Ittoop, Rowan Niemisto,

1:11:36

and Dan Powell. Our

1:11:38

audience editor is Nell Gallogly. Video

1:11:41

production by Ryan Manning, Sawyer Roque, and

1:11:43

Dylan Bergeson. You can

1:11:45

watch this full episode on YouTube at

1:11:47

youtube.com/hardfork. You can see Casey's cool

1:11:50

hat. Special thanks

1:11:52

to Paula Szuchman, Pui-Wing Tam, Kate

1:11:54

LoPresti, and Jeff Miranda.

1:11:56

always, you can email us at

1:11:58

hardfork@nytimes.com. Sergeant

1:12:01

and Mrs. Smith,

1:12:03

you're going to

1:12:05

love this house.

1:12:27

Is that a tub in the kitchen? There's

1:12:29

no field manual for finding the right

1:12:31

home. But when you do, USAA homeowners

1:12:34

insurance can help protect it the right

1:12:36

way. Restrictions apply.
