On Creepy Fingers, Deep Fakes, and Fair Use with Getty Images CEO Craig Peters

Released Monday, 1st July 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:01

Support for this show comes from

0:03

Ferragamo. A century ago,

0:05

a shoemaker in southern Italy had

0:07

a vision to marry abstract art

0:09

with high-quality craftsmanship. And that led

0:11

them to create beautiful, timeless footwear.

0:13

Today, the name Ferragamo is synonymous

0:15

with luxurious shoes and accessories, and

0:17

for more than a decade, Ferragamo

0:20

footwear has been made with an

0:22

ethical supply chain and sustainable footprint.

0:24

Their classic ballerinas or contemporary moccasins

0:26

are a perfect blend of fashion

0:28

and comfort, and yes, Kara Swisher

0:30

has a pair of both, and

0:32

I bought them. Discover Ferragamo's timeless

0:35

styles with a modern touch at

0:37

ferragamo.com. Have

0:42

a question or need how-to advice? Just

0:45

ask Meta AI. Whether you need

0:47

to summarize your class notes or want to create

0:49

a recipe with the ingredients you already have in

0:51

your fridge, Meta AI has the

0:53

answers. You can also

0:55

research topics, explore interests, and so

0:58

much more. It's the most

1:00

advanced AI at your fingertips.

1:03

Expand your world with Meta AI.

1:06

Now on Instagram, WhatsApp, Facebook,

1:08

and Messenger. Hi,

1:24

everyone from New York Magazine and the Vox

1:26

Media Podcast Network. This is on with Kara

1:28

Swisher, and I'm Kara Swisher. My

1:30

guest today is Craig Peters, CEO

1:32

of Getty Images. You've

1:34

seen Getty's work, even if you don't know it.

1:36

Getty photographers are on the red carpets at the

1:38

Met Gala or at the White House or at

1:41

the Olympics for sure. Pictures that are then used

1:43

by media companies across the globe. Getty

1:45

also has a huge inventory of stock photos

1:47

and video. It also has one of the

1:50

largest archives of historical images in the world.

1:52

You'd think that would be enough, but

1:54

it's not these days. Craig Peters has been heading

1:56

Getty since 2019 and took the company back public

2:00

in 2022. The same month, as

2:02

bad luck would have it,

2:04

OpenAI launched DALL-E in beta, which

2:06

means questions about how to protect

2:08

copyrighted images got a whole lot

2:10

more urgent both for creators and

2:12

for the public. Questions that have

2:14

not gone away. Fittingly, our expert

2:16

question this week comes from former

2:18

Time, Life, and National Geographic photojournalist

2:20

Rick Smolan. Craig has been

2:22

an outspoken voice on improving standards and protecting

2:25

creators, but his stance on AI has also

2:27

shifted in the past few years. So

2:29

I'm looking forward to talking about how he

2:32

sees the business model for working collaboratively with

2:34

this new technology in the future. And

2:36

at the end of the day, this

2:38

is about transparency and trust and authenticity.

2:41

My biggest worry, of course, is if

2:43

this is a replay of what the

2:45

internet did to media the first go-round.

2:48

We'll talk about that and more, including

2:50

why they can't get fingers right. Those

2:52

fingers are creepy. Hi

3:02

Craig, great to see you again. Great

3:04

to see you as well. Thanks for having me.

3:07

Craig and I had breakfast in New York, I

3:09

don't know, a couple months ago, talking about these

3:11

issues. I'm trying to get my

3:13

arms around AI and all its aspects

3:15

and have interviewed a wide range of people

3:17

and obviously photos

3:19

are of course at the center of

3:21

this in many ways, even though the

3:24

focus is often on text, but photos

3:26

and video are critical. So you've been

3:28

at the forefront of conversations around generative

3:30

AI copyright issues, but Getty has also

3:32

been collaborating and utilizing AI like most

3:34

of media. I want to talk

3:36

about all that and where you see this heading for

3:38

creatives and for the public. But

3:40

first, I think most people know or have seen the

3:42

name Getty Images in a caption, but they don't have

3:44

a good understanding of the business as a whole. So

3:47

why don't you give us a very quick talk

3:50

about what it does. Alright, well first let

3:52

me start. We were founded by Mark Getty

3:55

of the Getty family and

3:57

Jonathan Klein back in 1995. So

4:00

the company's been around for almost 30 years. We

4:03

are a business to business content platform.

4:07

We provide editorial content, so

4:09

coverage of news, sports, entertainment,

4:12

as well as a really deep

4:14

archive, and creative content. So

4:17

this would be materials that would be used

4:19

to promote your brand, your company, your product

4:21

offerings, or your services. We

4:24

provide that in stills, so photography,

4:26

as well as video, as you

4:28

highlighted. We provide that in what

4:30

we call pre-shot, which means it's available

4:33

for download immediately. You can search our platform,

4:36

find content, and then download it and use

4:38

it. In

4:40

some cases, we provide custom creation,

4:43

right? So we might work on an assignment

4:45

to cover a certain event. Just

4:47

for full disclosure, you've done Vox Media events. Correct. And

4:51

you then own the pictures, but Vox gets

4:54

use of them, correct? Correct. We

4:56

represent over 600,000 individual photographers and over

4:59

300 partners. These are partners like

5:01

the BBC or NBC News or

5:03

Bloomberg or Disney or AFP. And

5:07

what we try to do is allow our customers

5:09

really, we try to deliver value

5:11

across four elements. So

5:14

we have editorial content, so when you

5:16

think about creating more efficiently, we'll

5:18

have a team of over 100 individuals in Paris

5:20

for the Olympics. And

5:23

we will allow the entities that are covering

5:25

the Olympics to have

5:27

access to high-quality visuals in real

5:29

time on- Big

5:31

events. Big events or stuff you're contracted for. Like

5:34

you did my last Code Conference, for example.

5:36

Correct. But then there's also,

5:39

again, what we call creative, which is also

5:41

referred to as stock imagery. It's

5:43

intended to be used for commercial

5:45

purposes, to promote a company's brand,

5:47

products, or services. Right. And

5:51

you've been a company since 2019. You took

5:53

it public through a SPAC in July

5:55

2022, just about two years ago. Correct. Coincidentally,

5:58

it was the same month OpenAI made its image generator available

6:00

in beta form. What

6:03

did you think at the time? Did you think

6:05

end times or did it not register with you

6:08

all? No, I think, I mean, first off, generative

6:10

AI is not something that is entirely

6:13

new. Sure. It's something that we

6:15

had been tracking for probably seven years prior

6:17

to the launch of DALL-E. We

6:19

had first had discussions with Nvidia who we now

6:22

have a partnership, which we could talk more about.

6:24

Yeah, I will. But

6:27

we knew those models

6:29

were in development. We

6:31

knew the capabilities were

6:33

progressing. So

6:35

it didn't surprise us in

6:38

any way, shape or form. And

6:41

fundamentally, our business is again, providing

6:43

a level of engagement, whether

6:45

that's, again, through a media outlet

6:48

or through your corporate

6:50

website or your sales and marketing collateral

6:52

in a way that relies on authenticity

6:54

that relies on creativity, that relies on

6:57

core concepts being conveyed. Those are difficult

6:59

to do. Yeah, so here's this thing

7:01

coming, you see it. Obviously it's been

7:03

around and they've been generating

7:05

fake images for a long, long time, absolutely.

7:08

But did you think, well, look,

7:10

there's six fingers, it's kind of weird looking.

7:13

They always have that sort of like a Will

7:15

Smith picture of him eating spaghetti and it doesn't

7:17

look great. Did you think this

7:19

could be a real problem?

7:23

No, I don't think we saw it necessarily as

7:25

a problem. I think we saw some of the

7:27

behaviors around it as a problem. So we

7:30

knew that hairlines and fingers and

7:32

eyes and

7:35

things like that were gonna resolve over time. What

7:38

we saw though was ultimately these platforms

7:43

that were launching these generative models

7:45

were scraping the internet.

7:47

Correct. And taking third party intellectual

7:49

property in order to create

7:51

these services. And fundamentally, we

7:54

believe that creates some real

7:56

problems. Yeah, that's theft. They've

7:58

done it for many years. I would say

8:00

that there's kind of four things that go

8:02

into generative AI. So GPUs,

8:05

processing and power, and computer

8:07

scientists. And those three things,

8:09

these companies are spending billions and billions

8:12

of dollars around, right? But

8:14

the fourth item is something that they call

8:16

data, I happen to call content. And

8:19

it is something that they are taking, and they

8:22

are scraping and

8:24

taking from across the internet. And

8:26

that fundamentally, to me, raised some

8:28

questions about not

8:31

only should they be doing that,

8:33

is that allowed under the law? But

8:37

it also creates real IP risk for

8:39

end users of these platforms, if

8:42

they're using them commercially. Yes, but they're hoping

8:44

to get away with it, just so you're aware. But

8:46

you pushed back pretty hard against AI initially,

8:48

you banned the upload and sale of AI-generated

8:50

images in your database. But it seems you've

8:52

done more since then; you seem

8:54

to be embracing it more. Talk about

8:57

how it's evolved in your thinking.

9:00

Because at its base, they like to take things. They've been doing

9:02

it for a long time. I

9:05

had a lot of scenes in my book where

9:07

Google was just taking things and then reformulating and

9:09

spinning them, like the book fight that they had. They

9:11

use the expression fair use quite a bit. Did your thoughts

9:14

change on it? Or do you think, well, this is going

9:16

to happen again, and I have to embrace it? I

9:18

think we thought we really

9:20

took a two-pronged strategy. And

9:22

I don't think it's changed or it's

9:24

evolved. I think it's been pretty constant.

9:27

But we are not luddites. We believe

9:29

technology can be beneficial to society.

9:31

We believe it needs to have some

9:33

guardrails and be applied appropriately. But we

9:35

wanted to embrace this technology to enhance

9:37

what our customers could do. We

9:42

also wanted to make sure that we

9:44

protected the rights of our

9:47

600,000 photographers and 300

9:50

partners and of our own and try

9:53

to get this to be one

9:55

where it was done more

9:57

broadly beneficial, not just to the...

10:00

the model providers, but to the industries as

10:02

a whole. So

10:05

we pursued a strategy that's really twofold.

10:07

So one was we

10:10

did not allow content into our database that

10:12

was generative AI. We did that for two

10:15

reasons. One was that

10:17

our customers are really looking for

10:19

authentic imagery, high-quality imagery, that imagery

10:22

that can really relate to an

10:24

end audience. We think

10:26

that that is still best done

10:28

through professionals using real

10:30

models, etc. Accepting

10:33

AI imagery that's been created from

10:35

engines that are scraping the Internet

10:37

has potential risk. It has potential

10:39

risk from copyright claims. Correct. It

10:42

also has potential risk from people

10:44

that have had their image scraped

10:46

off of the Internet generally, and

10:49

it can replicate those types of things. So we didn't

10:51

want to bring that content onto our platform and then

10:54

ultimately give it to our customers and have them bear

10:56

that risk. Because one of the things that we try

10:58

to do for our customers is we try to remove

11:00

that risk. Right. Now, I'll get to your end-of-the- end,

11:25

that's creative content, not editorial content. So to

11:28

your point, we haven't trained it to know

11:30

who Taylor Swift is, or Scarlett Johansson is,

11:33

or President Biden. That

11:36

creative content is permissioned, it's released.

11:38

So we have model

11:40

releases from the people portrayed

11:42

within that imagery. We have property releases,

11:45

etc. So

11:47

it is trained off of a

11:49

universal content where we have

11:51

clear rights to it. There's a

11:53

share of revenues back to the content

11:55

provider. So as we sell that service,

11:58

we actually give a revenue share back to

12:00

the content providers whose content was used

12:02

to train. As you said,

12:04

it can't produce deep fakes. It cannot

12:06

produce third party intellectual property. So if

12:08

you search sneakers or type a generative

12:10

prompt of sneakers, it

12:13

will produce sneakers, but it's not going to

12:15

produce Nikes. If you type in laptop, it's

12:17

not going to give you an Apple iMac.

12:21

It is high quality. So

12:23

you can produce models that

12:26

are of very high quality in terms of the

12:28

outputs that it produces. You don't

12:30

have to scrape the entirety of the internet.

12:33

This proved that. And it

12:35

is fully indemnified because, again, we know the

12:37

source of what it was trained on. We

12:40

know the outputs that it can produce,

12:42

and they're controlled. And it

12:44

ultimately gives a tool to corporations

12:48

that they can use within their business day

12:50

to day without taking on intellectual

12:52

property risk. So the idea is to

12:54

make a better version of this, but

12:57

there's so many AI-generating image products out

12:59

there already on the market: DALL-E, Midjourney,

13:01

OpenAI is currently testing Sora. As

13:04

you said, these people have billions of dollars

13:06

put to task here.

13:09

So talk about your sort of unique

13:11

selling point. Obviously, cleaner database, less biased

13:14

than internet-scraped versions. And you also know

13:16

these images are backed by our uncapped

13:18

indemnification. So explain what you mean by

13:20

that. But let's start with you've got

13:23

a lot of competitors with tons of

13:25

money and have a little less care

13:27

than you do about the imagery. This

13:31

model is intended to be utilized

13:33

by businesses that

13:35

understand that intellectual property can carry real

13:37

risks as they use it in their

13:39

business. Those risks can come

13:42

with large penalties. And what we are

13:44

doing is providing a solution that allows

13:46

them to embrace generative AI, but avoid

13:48

those risks and still produce

13:50

a high-quality output. Other tools that are

13:52

out in the marketplace do

13:54

bear risk because they have scraped

13:57

the internet. You can produce

14:00

third-party intellectual property with them, you can

14:02

produce deep fakes with them. That

14:07

ultimately we believe is a constraint

14:09

on the broad-based commercial adoption of

14:11

generative AI. This has been their

14:13

modus operandi for years and years

14:16

to do this to scrape and

14:18

then later clean themselves

14:20

up. I think it's certainly

14:22

break things and move fast. Ultimately,

14:27

we think that this technology

14:29

should not just be thrust into

14:31

the world and

14:33

damn the consequences. We believe there should be

14:35

some thought, we believe there should be some

14:37

level of regulation. We believe that there

14:39

should be clarity on whether it is in

14:41

fact fair use. One of the companies that

14:44

I'm not sure if you mentioned, is

14:48

Stability AI, which launched Stable Diffusion. I

14:50

will. It's a- We knew that they

14:52

had scraped our content from across the

14:54

Internet. And used

14:56

it to train their model. We didn't believe that

14:58

that was covered by fair use in the US

15:00

or fair dealing in the UK. Right.

15:03

So we've litigated on that

15:05

point and it's progressing through

15:07

the courts. I want to talk

15:09

about that in a second. I want to put

15:11

a pin in that. But talk about what

15:13

uncapped indemnification means. It means that

15:16

Getty Images stands behind

15:19

the content that the generative model produces,

15:21

and that you're safe to use it.

15:23

And if there are any issues that

15:25

occur downstream, which there shouldn't

15:27

be, we will still stand behind it.

15:30

We will protect our clients. So AI gets

15:33

stuff wrong all the time though. Are you

15:35

worried about promising indemnification? I guess that's what

15:37

you're promising now. But are you worried about

15:39

the cost? If you do have copyright infringement,

15:41

for instance, using this technology? No, because again,

15:43

we know what it was trained on. We

15:46

know how it was trained. And

15:49

we trained it in a way that it

15:51

couldn't produce those. I think, again, when you

15:53

just scrape the internet and you take in

15:56

imagery of children, when

15:59

you take in... imagery of famous

16:01

people or brands or third-party intellectual

16:04

property like Disney, Star Wars, or

16:06

the Marvel franchise, you can

16:08

produce infringing outputs. You can produce things

16:11

that are gonna run afoul of

16:13

current laws and potential future laws. This

16:16

was built purposefully from

16:19

the ground up to avoid the social

16:21

issues that are being

16:24

questioned as a result of this technology, but

16:26

also to avoid the commercial issues. Right, right,

16:28

which was these companies are gonna have the

16:31

same thing they did with YouTube and they

16:34

eventually settled it out. So I want to

16:36

get to the lawsuits in a second, but

16:38

the Picsart partnership is different. I mean,

16:40

it's a licensing deal. They are using your

16:42

image database to train their AI, and these

16:45

are licensing deals. You're not the only

16:47

one. AP has a licensing deal with OpenAI.

16:49

There's all kinds of people who are striking

16:51

deals like OpenAI

16:53

and Meta did one with one of

16:55

your competitors, Shutterstock. Let's talk

16:57

about those first. Well, I think, you

16:59

know, I think ultimately we believe that

17:01

there are solutions. These services

17:04

can be built that ultimately

17:06

do appropriately compensate for the

17:09

works that they're utilizing as

17:12

a service. And we

17:15

think, you know, the models that we've

17:17

entered into that you'd mentioned, so Nvidia

17:19

and Picsart, those are not payments to

17:22

us as a one-time payment, which is how most

17:24

of these deals work, or a recurring payment.

17:27

They're actually, this is a rev-share deal,

17:30

so as those models produce revenues, they

17:32

will generate revenues that could flow back

17:34

to the content creators. And you

17:37

know, I think some of the deals that are getting

17:39

done today are trying to, in

17:41

essence, whitewash a little bit

17:43

the scraping that has historically been done

17:45

and show that they are willing to

17:48

work with providers. In some cases, it's

17:50

providing access to more current information that's

17:52

difficult to get in real time from

17:54

scraping. Sure. But ultimately it's a small

17:57

drop in the overall bucket. Talk about

18:00

the lawsuit against stability AI, where

18:03

does it stand? And just again,

18:05

you're claiming the company unlawfully copied

18:07

and processed millions of Getty copyright

18:10

protected images, which are

18:12

very easy to track. I mean, more than text,

18:14

much easier. So tell us where that stands

18:17

now, because again, that's an expensive prospect for

18:19

a small company. Well, it's something

18:21

that we invest in. We think

18:23

it's important. We think

18:25

we need to get a settled view

18:27

on whether this is in fact fair

18:29

use or not. We don't buy into

18:31

maybe what some of the providers are

18:33

stating is settled fact, that it is

18:35

in fact fair use. We believe that

18:37

that is up for debate. I

18:40

mean, we believe within the realm of imagery, we

18:42

think there's a strong case to be made that

18:44

it is not fair use. And we think there

18:46

are some precedents out there, most

18:49

notably in the US, Warhol versus

18:52

Goldsmith, which was decided at the Supreme Court,

18:55

which highlights some case law

18:57

that would say that this is going to

18:59

be questionable whether it would qualify for fair

19:01

use. So we launched

19:04

that litigation. It is

19:06

moving forward a little

19:08

bit more at pace in the UK.

19:11

So it is proceeding to trial,

19:13

I expect there will likely be a

19:15

trial end of this year, early

19:18

next. In the US, it's moving a little

19:20

more slowly as things take time

19:22

to move through the court system. And I don't have a magic wand

19:25

to speed that up. Talk about the

19:27

calculation of suing versus settling

19:29

essentially. And the costs, because here you are,

19:31

someone grabs something of yours, you have to

19:33

get hit on the head, you have to

19:35

take them to court for hitting you on

19:37

the head. But still, you've been hit on

19:39

the head, correct? Well, I think,

19:41

again, it's in a world where everyone's taking our imagery anyways.

19:43

Our imagery is all over

19:45

the internet. As you mentioned, it's on The

19:47

Verge. It's on the media

19:52

platforms that you are visiting.

19:56

The one mistake I made with copyrighted imagery, I

19:58

paid a lot of money. It

20:00

was a lot of money and I never did

20:02

it again. I'll tell you that. So our imagery

20:04

is all over the internet and it's being scraped

20:06

and it's being taken and it's being used to

20:08

train these models. We

20:11

wanna get clarity if in fact

20:13

that is something that they have rights to do.

20:16

I believe it's good for the industry

20:18

overall. When I say the industry overall,

20:20

I'm not just talking technology, I'm not

20:22

just talking content owners, but to have

20:24

clear rules to the road. Is

20:27

this or is this not fair use? And then

20:29

you know what you need to do in

20:32

order to work within that settled law. And that's

20:34

what we're trying to get to. But let's get

20:36

back to the issue of content creation that you

20:38

also brought up. You aren't the only one that's

20:40

worried about losing money. Every week we get a

20:42

question from an outside expert. This week our question

20:44

comes from photojournalist, Rick Smolin, the creator of the

20:46

best-selling Day in the Life book series. You've

20:49

seen the movie Tracks. He's a National Geographic

20:52

photographer portrayed by Adam Driver. Let's have a

20:54

listen to his question. Like many

20:56

of my peers, my photographs are sold

20:58

around the globe by Getty Images. And

21:01

we've always considered our relationship with

21:03

Getty as symbiotic. We

21:05

photographers risk life and limb to

21:08

provide Getty with our images and in

21:10

return Getty markets our photographs to buyers.

21:12

But the advent of AI, it's sort of starting

21:15

to feel a little bit like a marriage where

21:17

one partner in the relationship is

21:19

caught having an affair with a much

21:21

younger, more attractive person while assuring

21:24

their spouse, this is nothing more than

21:26

a dalliance. We

21:28

assume that our images are being sampled

21:30

to generate the synthetic images that Getty's

21:33

now selling. And many of us

21:35

are very concerned that our work has become a commodity

21:38

like coal being shoveled into

21:40

Getty's LLM engine. And

21:43

also sort of like the cheated upon older

21:45

spouse, many of us believe it's

21:47

only a matter of time before we're cast

21:49

aside for this much younger and much sexier

21:51

young thing called AI. So

21:53

my question to Craig Peters is whether

21:55

this marriage is over but you

21:57

just haven't admitted it to

22:00

us yet, or if it's not

22:02

over, how does Getty Images

22:04

envision its relationship with actual

22:06

photographers versus prompt jockeys? And

22:09

how is Getty going to compensate us,

22:11

your long-term partners, fairly for our images

22:13

in the future? Thanks. Okay,

22:16

that's a really good question. I hear it a

22:19

lot from photographers. They don't know who to trust.

22:21

So how are you compensating the creators whose work

22:23

is being fed into this tool? Talk about how

22:25

it works, and can people opt out if they

22:28

don't want AI trained on their

22:30

work? Right. Well, first off, let me start

22:33

with, I think we don't train off

22:35

of any of our editorial content. And

22:37

the coverage that he is talking about

22:39

that he does, and that we are

22:41

fortunate enough to represent, would

22:44

be considered editorial coverage. We

22:47

believe that the job that he does is

22:49

incredibly important to the world. The

22:53

images that they produce, the

22:55

events that they cover, the topics that

22:57

they cover are incredibly important. Right behind my desk,

22:59

Kara, there is an image that

23:02

was taken in Afghanistan and it was taken

23:04

by one of our photographers, staff photographers, a

23:06

guy named Chris Hondros, who happened to lose

23:08

his life in

23:10

the pursuit of covering conflict around

23:12

the world. And

23:15

that's incredibly important. And I have it there because

23:17

it reminds me that

23:19

we have a very

23:21

important mission and that the individuals that

23:23

are producing this image are incredibly important

23:25

and they are taking very real risks.

23:27

And we value that and we never

23:29

want to see anything

23:32

that we do undermine that or misrepresent

23:34

it. And we think it's important to

23:36

the world going forward and we think

23:38

it's very important that persists.

23:41

So we do not train off

23:43

of editorial content. This model was

23:45

not trained off of editorial content.

23:47

And we believe that that type

23:49

of work has a level

23:52

of importance today and we'll have a level

23:54

of importance going forward in the future. And

23:56

we think that importance only increases as we

23:58

see more and more deep-fake

24:00

content produced by these generative

24:02

models that are trying

24:04

to undermine true knowledge and true truth.

24:06

So the compensation system stays the same?

24:09

So the compensation, so again, we have

24:11

creative imagery, this would be, you could

24:13

also refer to it as stock imagery,

24:16

where we have permissions from the

24:18

individuals that are contained within those

24:20

images. We have the right to

24:22

represent that copyright broadly, and

24:24

that is what's training these models. These

24:27

models, when they produce a dollar's worth of

24:30

revenue, we give a share of

24:32

that revenue back to the creators whose content

24:34

was used to train that model. That

24:37

is the same share that they would get if

24:40

they licensed an image, so

24:42

one of their images off of our platform. So

24:45

on average, Kara, we give

24:48

royalties out around 28% of our revenue. We

24:52

invest a lot in our platform and sales and

24:54

marketing and content creation,

24:56

etc. But on average, we're

24:58

sending 28 cents back for every image that

25:01

we license. And you can think the exact

25:03

same thing will happen when

25:06

we license a

25:08

package of, you know, someone subscribes to

25:10

our generative service.
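As a rough illustration of the revenue-share arithmetic Peters describes, here is a minimal sketch, assuming a flat pool at the roughly 28% average royalty rate he cites; the function, the contributor names, and the split are hypothetical, not Getty's actual accounting:

# Illustrative sketch only: distributes generative-service revenue back
# to the contributors whose licensed content trained the model, at the
# ~28% average royalty rate Peters cites for ordinary image licenses.
def royalty_payouts(revenue: float, training_shares: dict[str, float],
                    royalty_rate: float = 0.28) -> dict[str, float]:
    pool = revenue * royalty_rate  # 28 cents of every dollar earned
    return {creator: round(pool * share, 2)
            for creator, share in training_shares.items()}

# Two hypothetical contributors whose content made up the training set.
print(royalty_payouts(100.0, {"photographer_a": 0.7, "partner_b": 0.3}))
# -> {'photographer_a': 19.6, 'partner_b': 8.4}

On $100 of subscription revenue, $28 goes into the pool and is split in proportion to each contributor's share of the training content, mirroring the per-license royalty described above.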

25:12

So who owns the images created by these AI tools? And

25:14

it sounds like in the weeds. That's

25:17

a very good question. It's, again, an

25:19

unsettled question because we're in new territory.

25:21

So in the US today, you cannot

25:24

copyright a generative output. I

25:27

think over time, that might evolve to where

25:29

it depends on the level of human input

25:31

and involvement in that process. But

25:33

right now, it's not one where you can

25:35

copyright this image. But the stance we take

25:37

for our customers that are using

25:40

our generative tool is, since

25:42

they gave us the generative prompt, they

25:44

are part of the creation. And therefore,

25:46

we don't put that imagery back into

25:49

our libraries to resell. It

25:52

is essentially a piece of content that

25:54

is, quote unquote, owned by the customer

25:56

that produced it using the generative model.

26:00

The copyright is one that we can't convey;

26:02

that has to be done by the

26:04

US Copyright Office. But we essentially take

26:06

the position that the individuals

26:09

or the businesses that are using that service

26:11

own the output to that service. They own

26:14

the output, and it would be a similar

26:16

thing. But right now, whether they can be

26:18

copyrighted is unclear, even if so, is the

26:20

prompt owner the copyright holder? Correct. And

26:22

I think, again, I think that will evolve over time to

26:25

one where I think the level

26:27

of human input will have

26:29

a bearing on whether that

26:32

is, in fact, copyrightable. It will all be. It's

26:34

a question of how much you get paid along the line. So

26:37

you've pushed back against the idea that these licensing

26:39

agreements are Faustian, though, meaning basically you're making a

26:41

deal with the devil. Shutterstock's deal

26:43

with OpenAI is for six years, while

26:45

Red Teamers at OpenAI are testing Sora.

26:50

Are you worried that you're all doing what we

26:52

in the media did with, say, the Facebooks of

26:54

the world, the Googles of the world, who now

26:57

sucked up all the advertising money and everything

26:59

else? What is your biggest worry when

27:01

you're both doing these deals

27:04

and suing, just waiting for clarity,

27:06

I assume, from courts and copyright

27:08

officials? I believe that many

27:11

of the deals that are happening today are

27:13

very Faustian and go back to

27:16

ultimately some of the mistakes that were made early

27:19

on in digital media and or

27:22

in social media, making trades

27:24

for things that weren't in the long term health

27:26

of the industry. What we are trying to do

27:29

very clearly in the partnerships that we are striking and

27:31

in the deals that we are striking and in our

27:33

approach is we are trying

27:35

to build a structure where the

27:41

creators are rewarded for

27:43

the work that was included, and fairly

27:45

so. I gave you

27:48

my four components of generative

27:50

AI, GPUs, processing, computer scientists,

27:52

and content. That fourth

27:54

right now is getting an extremely

27:57

small stake. Some companies are doing

28:00

no deals, so like the

28:02

Midjourneys and such, and they

28:04

are providing no benefit back to creators.

28:06

We wanna see a world where content

28:09

owners, content creators

28:12

ultimately share in the economics

28:15

of these. And I

28:17

think if you equate it again back to a

28:19

Spotify, I mean, the labels

28:21

and Spotify can argue over whether

28:23

the stake of that is fair

28:25

and that's a commercial negotiation, but

28:27

ultimately it has created

28:29

a meaningful revenue stream back

28:32

to artists that support artists in the

28:34

creation. And that's what we'd

28:36

like to see flow out of generative

28:39

AI. That's the world that we believe

28:41

is right. Ultimately the creative industries represent

28:44

10% of GDP, more

28:48

than 10% of GDP in the US

28:50

and the UK and many developed economies

28:52

around the globe. And we think the

28:56

economic impacts of undermining that contribution

28:59

to GDP can't be net beneficial

29:01

to society from

29:03

an economic standpoint, if you ultimately just

29:05

allow them to be sucked dry. I

29:08

use the example, everybody's watched

29:10

the movies, The Matrix and

29:12

ultimately what were humans in The Matrix? The humans

29:15

in The Matrix were the batteries. They

29:17

were used to create the power to fuel

29:19

the AI. Well, right now, I feel like

29:21

the creatives are already being consumed as batteries.

29:24

And the media is already being a battery. We are already

29:27

not being valued. We're

29:29

basically just being used in

29:32

order to create one of

29:34

the core pillars that is necessary to create

29:36

generative AI. It's

29:38

just being stripped and taken. And

29:41

I don't wanna see a world where there

29:43

aren't more creatives. I don't wanna see a world where

29:46

creatives aren't compensated for the work that they do. And

29:49

I don't wanna see a world that doesn't have creativity in

29:51

it. We'll

29:54

be back in a minute. Support

30:04

for On with Kara Swisher comes from NetSuite.

30:07

Your business cannot succeed if costs spiral

30:09

out of control, but there's so much

30:12

more required if you want to grow.

30:14

Tracking data, handling HR responsibilities, and mastering

30:16

accounting are just a few tasks on

30:18

every entrepreneur's to-do list. But what

30:20

if you had a platform that could deal with all of them at once?

30:23

NetSuite is a top-rated financial system

30:25

bringing accounting, financial management, inventory, and HR

30:27

into one platform and one source

30:29

of truth. With NetSuite, you can

30:31

reduce IT costs because NetSuite lives

30:33

in the cloud with no hardware

30:36

required, accessible from anywhere. You

30:38

can cut costs of maintaining multiple

30:40

systems because you've got one unified

30:42

business management suite. Plus, NetSuite can

30:44

help you improve efficiency by bringing

30:46

all your major business processes into

30:48

one platform, slashing manual tasks and

30:50

errors. Over 37,000 companies

30:53

have already made the move to NetSuite.

30:55

By popular demand, NetSuite has extended

30:57

its one-of-a-kind flexible financing program for

30:59

a few more weeks. Head

31:02

to netsuite.com/on. netsuite.com/on.

31:07

That's netsuite.com/on.

31:14

Support for this show comes from Delete Me. Do

31:17

you ever wonder how much of your personal information

31:19

is out there on the internet? A lot. Chances

31:22

are a lot more than you think, too.

31:24

Your name, contact info, social security number, they

31:26

can all be compiled by data brokers and

31:28

sold online. It's happening right now. It might

31:30

be why you keep getting those insane spam

31:32

calls about your student loans, even if you

31:34

paid them off 20 years ago. And

31:36

worse. You might want to try

31:38

Delete Me to fix that. Delete Me is

31:40

a subscription service that removes your personal information

31:42

from the largest people search databases on the

31:44

web and in the process helps prevent potential

31:46

ID theft, doxxing and phishing scams. I have

31:48

really enjoyed Delete Me largely because I'm a

31:51

pretty good person about tech things and I

31:53

have a pretty good sense of privacy and

31:55

pushing all the correct buttons. But I was

31:57

very surprised by how much data was out

31:59

there about me. Take control of your data

32:01

and keep your private life private by signing

32:03

up for Delete Me. Now at a special

32:05

discount to our listeners today, get 20% off

32:07

your Delete Me plan when you

32:10

go to joindeleteme.com/Kara and use the

32:12

promo code Kara at checkout. The

32:14

only way to get 20% off

32:16

is to go to joindeleteme.com/Kara, enter

32:19

the code Kara at checkout. That's

32:22

joindeleteme.com/Kara, code Kara.

32:29

This is advertiser content from

32:31

PBS. Jason

32:39

Scott practically grew up at the Pike Place Fish

32:41

Market in Seattle. His mom

32:43

was a fishmonger, and now he's one too. He

32:46

sees firsthand the toll that irresponsible fishing can

32:48

take on our oceans. I grew up here.

32:51

I'm only 51. Are you sure? And there's stuff

32:53

that we can't even get locally anymore because it's

32:55

been overfished. That's not fair. A

32:58

common misconception is that wild caught is good and

33:00

farmed is bad. But Jason says

33:03

it's not that simple. I think as a

33:05

consumer you shouldn't accept if the person selling

33:07

you, you know, your fish or meat or

33:09

whatever it is, doesn't know where it comes

33:11

from or how it's caught or those things. And,

33:13

you know, we're moving a lot of fish. We

33:16

need ways where everybody can eat it and

33:18

we're not depleting the oceans. Whether

33:20

the fish you eat is sustainably farmed or wild

33:22

caught, knowing the difference is the first step.

33:25

In PBS's new three-part docuseries, Hope in

33:27

the Water, scientists, celebrities and

33:30

regular citizens come together to explore

33:32

unique solutions and challenges facing our

33:34

oceans and how we preserve the

33:36

food it provides. In order

33:39

for you to keep enjoying the riches of the

33:41

ocean, you're going to have to also be a

33:43

friend of the ocean. Join

33:45

chef Jose Andres, Martha Stewart, Shailene

33:47

Woodley and Baratunde Thurston as they

33:49

deep dive into the powerful blue

33:51

food technologies that can not only

33:54

feed us, but help save

33:56

our threatened seas and fresh waterways. Stream

33:58

Hope in the Water now on the PBS app.

34:03

So I want to zoom in a bit

34:05

to talk more broadly about the issue of

34:07

copyright intellectual property and standards. You've called for

34:09

industry standards around AI. What

34:11

do you think those standards should include?

34:13

Watermarking, copyright protections. Obviously the more confusing

34:15

it is, the better it is for

34:17

these tech companies who, this is me

34:19

saying this, could give a fuck about

34:21

creatives. They do not care.

34:23

They do not care. They never have. Well,

34:26

let me start. I mean, there are four pillars that we

34:28

think are important with respect to

34:30

visual generative AI. One of

34:33

those is transparency of training data so that

34:35

you know what's being trained and

34:37

where they're training. And that's not only important

34:39

to companies like Getty Images and creatives that

34:41

are creating works around the globe, but that's

34:43

important as a parent or someone

34:46

with children. If

34:48

I post this image to this social

34:50

media platform, is it being used to

34:52

train outputs in these models? So

34:55

that's one. Two, permission of copyrighted work. So if

34:57

you are gonna train on copyrighted data, you

34:59

need to have permission rights in

35:02

order to do so. We want

35:04

these models to identify

35:06

their outputs in a way that

35:08

is persistent. Now that's likely to

35:10

involve watermarking. It might involve some

35:13

other standards like hashing to the

35:15

cloud, but the

35:17

technology is still in development, but we want

35:19

that to be at the model level.

35:22

So the model actually does the output and

35:24

it's persistent.
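To make "identify their outputs in a way that is persistent" concrete, here is a minimal sketch of the hashing idea, assuming a simple fingerprint registry; the names are hypothetical, and a real C2PA manifest is a signed, structured credential rather than a lookup table:

import hashlib

# Hypothetical provenance registry: fingerprint -> generation metadata.
registry: dict[str, dict] = {}

def register_output(image_bytes: bytes, model: str, prompt: str) -> str:
    # At generation time, the model provider fingerprints the output.
    digest = hashlib.sha256(image_bytes).hexdigest()
    registry[digest] = {"model": model, "prompt": prompt}
    return digest

def check_provenance(image_bytes: bytes) -> dict | None:
    # Anyone downstream can re-hash an image and look up its origin.
    return registry.get(hashlib.sha256(image_bytes).hexdigest())

art = b"...generated image bytes..."
register_output(art, model="example-image-model", prompt="sneakers")
print(check_provenance(art))  # {'model': 'example-image-model', 'prompt': 'sneakers'}

Note the weakness: a bare hash breaks as soon as the image is re-encoded or cropped, which is why the argument here is for identification that lives at the model level, such as watermarking, and persists through downstream edits.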

35:26

And then we want to make sure that model

35:28

providers are accountable for

35:31

the models they put out into the world. In essence,

35:33

there's no section 230, which kind

35:35

of protected platforms back in

35:37

the original Digital

35:39

Millennium Copyright Act, which

35:41

basically gave them indemnity

35:44

for many claims for content that was posted

35:46

on their platform. We wanna see a world

35:49

where model providers have some level of accountability

35:51

aren't given a government exemption and

35:54

indemnity. We wanna see people

35:56

be accountable. Who should set up

35:58

the standards, in your opinion? Industry leaders?

36:00

There's business advocacy groups like C2PA, which

36:03

is the Coalition for Content Provenance and

36:05

Authenticity, which includes Google and OpenAI, and

36:07

your competitors, Adobe and Shutterstock. And who

36:10

should be responsible for enforcing them? Is

36:12

it a combination of courts and regulations?

36:14

I'll come back to C2PA, but I

36:17

think it's going to be a combination

36:19

of regulators, of legislators, and

36:22

industry, which is typically how all standards

36:24

evolve over time. I think like happened

36:26

in the EU AI Act, there's a

36:28

general requirement, but the definition of how

36:31

that requirement actually manifests itself is still

36:33

yet to be put

36:36

forward. And that's going to take collaboration

36:38

from the technology industries and

36:40

from creative industries in order

36:42

to get to something that works.

36:45

I think the C2PA, to be very

36:47

specific, it

36:50

is a foundation, but there's still some

36:52

flaws. It is relying upon the end

36:54

user of these platforms to identify the

36:56

generative output versus the model itself. That's

36:58

what happened with YouTube initially, as you

37:01

recall. Exactly. So if you say, if

37:03

I'm going to use a generative model and I'm a bad actor,

37:06

I can say it's authentic and

37:08

label it as such. Well, that's a

37:10

point of failure in the C2PA model

37:12

as it exists today. It's already been

37:14

exploited and we want to see that

37:16

close again. So that's why we want to see these models provide

37:19

the output at the model

37:21

level versus at the user level. It

37:24

also is one that they're trying to

37:26

shift the cost of

37:28

implementing C2PA to the individual versus

37:31

the model provider. There are

37:33

going to be costs if you're producing outputs and

37:36

then you have to store the output

37:38

in the cloud, the original and kind of hash it

37:40

so that it can be referenced back later,

37:43

I don't think that's a reality where individuals

37:46

are going to be making that investment. So you're

37:48

bearing all the costs. I believe the model providers

37:50

should bear that cost. So do I. We

37:53

are a model provider in partnership with

37:55

Nvidia and Picsart and others, and

37:58

I believe that we should create

38:00

standards that are immutable, that store with

38:02

it. And I think C2PA is a

38:04

foundation, but I think it's

38:07

one that we need to kind of continue

38:09

to evolve in partnership with the technology

38:12

industry to get it to something that truly

38:14

meets the challenges versus is a nice kind

38:16

of cover when you get

38:18

called to Congress or Parliament

38:21

or into the EU. Sure.

38:24

That you can just kind of throw out there, saying we've got

38:26

a solution. We really want to prevent deep

38:28

fakes. We really want people to

38:31

have fluency in terms of the imagery

38:33

that they're looking at. And

38:35

we think C2PA can evolve to that and

38:37

we're going to work hard in

38:39

partnership with the members of C2PA to try to get

38:42

it there. And ultimately what I

38:44

don't want to see though, you go

38:46

back to some of the original mistakes of digital media

38:48

and such, is I don't want to see it be

38:50

a big cost shift from

38:52

technology industries into the media

38:54

industry where what I

38:56

mean by that is I'm hearing some,

38:58

well, we're never going to be able

39:01

to identify generative content. No, it's very

39:03

hard. Yeah, it's very hard. It's a

39:05

hard problem. That's their excuse. We can

39:07

invent these tools, but we can't invent

39:09

a way to identify. So what we

39:11

want the media industry to do is

39:13

we want them to identify all of

39:15

their content as authentic. So they have

39:17

to buy all new cameras. They have

39:19

to create entirely new workflows. They have

39:21

to fundamentally change how

39:23

they bring content onto their websites.

39:27

That investment, we want the

39:29

media to make. And I don't think that's the

39:31

right solution. I'm not saying

39:33

that Getty Images won't invest in

39:36

technologies in order to better

39:38

identify track and identify provenance

39:41

of content. But I don't think kind

39:43

of saying, well, the solution for the tools that

39:46

were created that create all this generative content and

39:48

are producing the problem of deep fakes, the

39:51

creators of those tools don't have any accountability

39:53

for creating solutions to identify them. Absolutely. I think

39:55

we have to have a balanced solution. Absolutely.

39:58

It does play into the hands. that actor

40:00

is for sure. That's right. There's a really

40:02

interesting book that I read a

40:04

couple of years back, Power and Progress. And

40:07

it's an interesting study where I think we've

40:09

always adapted. I'm a pro-technologist. Me

40:11

too. I started my career out in the Bay Area. But

40:15

we always assume that just technology for technology's

40:17

sake will be beneficial to society. That's what

40:20

we've been fed and that's

40:22

what we have been taught. And

40:24

their study really looks at

40:26

technology over a thousand years.

40:28

And whether it is innately

40:30

beneficial to society or whether

40:32

society needs to put some

40:35

level of rules around

40:38

it in order to make it

40:40

net beneficial. And you go

40:42

back to the industrial revolution and basically

40:45

the first 100 years of the industrial

40:47

revolution were not beneficial to society

40:50

as a whole. They benefited a

40:52

small number of individuals but largely

40:54

gave society nothing more than disease.

41:00

And it took things like regulations

41:02

on limiting work week, regulations

41:06

on child labor. It

41:08

took certain organizations like labor

41:10

unions, et cetera, to

41:13

ultimately get that technology

41:15

leap forward to be something that was

41:17

broadly beneficial to society. And I think

41:19

that is something that we need to

41:21

think a little bit more about is

41:23

not how do we stop AI. Not

41:26

how to, I'm not trying to put this technology

41:28

in a box and kill it. I

41:31

think it could be net beneficial to society. I

41:33

think we need to be thoughtful about

41:36

how we bring it into society. And

41:38

I don't think racing it out of the

41:40

box, absent

41:43

that thought is necessarily gonna lead

41:45

us to something that is net

41:47

beneficial. Yeah, you're gonna lose your

41:49

libertarian card from the boys of Silicon Valley.

41:51

Just so you know, you just lost it.

41:56

We'll be back in a minute. Have

42:08

a question or need how-to advice? Just

42:11

ask Meta AI. Whether you

42:13

want to design a marathon training program that

42:16

will get you race-ready for the fall or

42:18

you're curious what planets are visible in tonight's

42:20

sky, Meta AI has the

42:22

answers. Perhaps you

42:24

want to learn how to plant basil when

42:26

your garden only gets indirect sunlight. Meta

42:29

AI is your intelligent assistant. It

42:32

can even answer your follow-up questions, like

42:34

what you can make with your basil and other ingredients

42:36

you have in your fridge. Meta

42:39

AI can also summarize your class notes,

42:41

visualize your ideas, and so much more.

42:44

The best part is you can find Meta AI right

42:46

in the apps you already have. Instagram,

42:49

WhatsApp, Facebook, and Messenger.

42:52

Just tag Meta AI in your chat or

42:54

tap the icon in the search bar to

42:57

get started. Place the

42:59

most advanced AI at your

43:01

fingertips. Expand

43:03

your world with Meta AI. This

43:08

episode is brought to you by Shopify.

43:10

Forget

43:12

the frustration of picking commerce platforms

43:14

when you switch your business to

43:16

Shopify, the global commerce platform that

43:19

supercharges your selling wherever you sell.

43:21

With Shopify, you'll harness the same

43:23

intuitive features, trusted apps, and powerful

43:25

analytics used by the world's leading

43:27

brands. Sign up today for your

43:29

$1 per month trial

43:32

period at shopify.com/tech, all

43:34

lowercase. That's shopify.com/tech.

43:36

Speaking of deepfakes, before we

43:38

go, I do want to talk about

43:40

the election and political images. Getty has

43:42

put editorial notes on doctored images like

43:44

the ones of the Princess of Wales, Kate

43:46

Middleton, that were sent from the Royal

43:49

Palace. You said earlier you don't use

43:51

editorial content for your AI database, but

43:53

your competitors like Adobe and Shutterstock have

43:55

been called out for having AI-generated editorial

43:57

images of news events in their databases

43:59

like the war in Gaza, which were then

44:01

used by media companies and others. And as

44:03

we said before, their databases are also being

44:05

licensed to train some of the biggest AI

44:08

models. So, you know, I think it's

44:10

the expression in software, garbage

44:12

in, garbage out. How

44:14

worried should we be about synthetic

44:16

content, basically fake or altered images

44:18

impacting the outcome of elections, including

44:20

ours in November, and

44:23

those images then circulating back into the system

44:25

like a virus? I think we

44:27

should be very worried. I think it

44:29

ultimately, it goes back to some of the

44:32

things that why we're really focused

44:34

in on some of the, you know,

44:36

the premises of the regulatory elements I

44:38

talked about. We want to see a

44:40

situation where we can

44:43

identify this content. We can

44:45

give society as a whole fluency

44:47

in fact, right? If you don't have fluency

44:50

in fact, you don't in fact have one of

44:52

the pillars of democracy. So

44:54

I think it is something

44:56

that we should be very worried about. I

44:59

think the pace of AI

45:01

is moving so fast. The ability

45:03

for it to permeate society across

45:05

social networks and those algorithms,

45:07

I think is something that we need to be

45:09

highly concerned about. And I think we

45:12

need industries and governments

45:14

and regulatory bodies to come together

45:16

in order to mitigate

45:19

that risk overall. I'm

45:22

hopeful that we can get there. It's

45:24

a tough problem and it's going to take a lot

45:26

of people putting energy against it

45:28

to solve for it. What can news

45:30

or should news organizations do about

45:32

this specific kind of visual misinformation and

45:34

what responsibility do the agencies have? Obviously

45:36

you are not doing that, but your

45:38

competitors are. Well, I think ourselves,

45:40

I think the AP, I think Reuters,

45:42

I think other news agencies around the

45:44

world, I think we have

45:47

to continue to make sure that our services

45:50

and our content is incredibly

45:52

credible and ultimately

45:54

lives up

45:58

to a standard that is beyond reproach. And

46:00

I think then we need to work

46:02

with the technology industry and again, regulatory

46:04

bodies and legislative bodies on solutions that

46:07

address the generative side of things. Because there

46:09

are bad actors in this world and there

46:12

are models that allow people

46:14

to create not only misinformation

46:16

with respect to elections, but

46:19

some other very societally harmful

46:22

outcomes like deep

46:24

fake porn that

46:26

we need to address. And

46:28

these tools are readily available. What's your

46:30

nightmare and how quickly can you react

46:32

when the Kate Middleton things took a

46:34

few days? It's not the biggest deal

46:36

in the world, but it was still

46:39

problematic. Like that took a while. So

46:42

how do you, it's

46:45

expensive to do this to really constantly

46:47

be picking up trash that everybody's throwing

46:49

down. It's very important that we spend

46:52

a ton of time, first

46:54

off, vetting all of our sources of content. It's

46:56

not something we just take any piece of content

46:58

and put it on our website and then provide

47:00

it out. There is a

47:02

tremendous amount of vetting that goes into the

47:04

sources of content that we have. So

47:07

like the NBC Newses or the Bloombergs

47:09

of the world or the BBCs

47:11

of the world, but all the way down to

47:14

the individuals that we have covering conflicts in Gaza

47:17

or in Ukraine and making

47:20

sure that we know that

47:22

the provenance of their imagery is authentic.

47:24

We know the standard of their journalism is high.

47:28

That takes a tremendous amount of

47:30

effort, time, and investment in order

47:32

to do. And we

47:35

need to make sure that we don't in

47:37

any way, shape, or form reduce that investment.

47:39

We need to increase that investment in the

47:41

face of generative AI. We need to tell

47:43

that story. We need

47:45

to make sure that our customers have,

47:49

who are the media outlets like yourself, know

47:51

that that is something that we're doing.

47:55

Sure. So what's your

47:58

nightmare scenario? Give me a nightmare.

48:00

The primary scenario is that we

48:02

ultimately continue to undermine the public

48:04

trust in fact. And

48:08

we've seen that over

48:11

recent history. It's

48:14

been enabled by some technology

48:16

and platforms. We

48:18

need to make sure that we

48:21

don't restrict the

48:23

public square in terms of debate

48:25

and ideas and different

48:29

opinions, but at the same time that

48:31

we don't feed into an undermining of

48:33

what is real and what is authentic

48:36

and what is fact. And

48:38

those two things to me sometimes get conflated, and

48:41

I think that's a false conflation. I

48:44

think they can both stand as true. We

48:46

can have debate. We can have

48:48

different points of view, but

48:51

we can't have different facts and we can't have

48:53

different truths. And

48:55

ultimately, I think that's the long haul. It's

48:58

not a particular event. It's

49:02

a continual undermining

49:04

of the foundation of that truth. Well, it is

49:06

certainly easier to be a bomb thrower than a

49:08

builder. One of the issues is the fall off

49:10

in economy for all media,

49:12

not just your company, but everybody.

49:15

The costs go up, the revenues go down.

49:17

In the case of media, they suck up

49:20

all the advertising. You see declines in all

49:22

the major media who are fact-based, I would

49:24

say. Getty's value dropped

49:26

by two-thirds since you went public. Now, I

49:28

know SPACs have had all kinds of problems.

49:31

That's a trend in the SPAC area, which is how you

49:34

went public. But even with

49:36

all your AI moves, your shares are down.

49:38

Do you think it's because you're taking a more

49:41

cautious approach than your competitors? And

49:43

what turns it around? I don't want you to

49:45

speak for all of media, but you're running

49:47

a company here, and you're trying

49:49

to do it a little safer and

49:51

more factual. And that's not as profitable

49:53

as other ways. How

49:55

does that change from an economic point of

49:57

view? And I think all

50:00

of media has this challenge going forward.

50:02

Well, I can't speak to specific drivers

50:04

of stock price or not. What

50:07

I can tell you is that we aren't taking

50:09

a more conservative view or

50:11

cautious view. We are taking a long-term

50:13

view. We are taking a

50:16

view that we are going to spend money and

50:18

invest to bring services to the market that we

50:20

think are helpful to our customers. We're

50:22

going to invest in lawsuits that we

50:25

think underpin the value of our content

50:27

and information that we have. We

50:31

are going to continue to invest

50:33

in bringing amazing content and services to

50:35

our customers. We think over time that

50:38

is something that will be to

50:40

the benefit of all stakeholders and

50:43

Getty Images, our photographers and videographers

50:45

around the world, our partners around

50:48

the world, our employees and our

50:50

shareholders. That's what we are

50:52

focused in on doing. I

50:55

think there is a backdrop within

50:57

recent trading of the entertainment strikes

50:59

and the impacts of those against

51:01

our business, some of the massive

51:03

impacts against large advertising and some

51:05

of the end-use cases where

51:07

our content gets utilized. Your

51:10

customers are also suffering too, yes. It

51:13

hasn't been a great environment for media companies

51:16

in the recent 18 months. I

51:18

think the end of the average... I would say 10

51:20

years, but okay. But more

51:22

recently, it's been more acute. You're seeing

51:24

streaming services cut

51:26

back. You're seeing ad revenues

51:29

down double digits. So

51:31

there are some challenges out there. But you aren't

51:33

seeing that for the tech companies. No, you're not.

51:35

You're not. Again, that goes to where

51:38

I think we need to have some rules of

51:40

the road as we approach generative AI, where we've

51:42

already learned some of the things that maybe we

51:44

didn't do 20 years ago or 25 years ago. But

51:49

our business is about a long term.

51:51

Getty Images has been around almost 30

51:53

years. And we've focused

51:55

in on making sure that the foundations of

51:59

copyright and intellectual property remain strong throughout. Yeah.

52:01

And so that's what we'll continue to do

52:03

going forward. The hits keep on coming and

52:05

I don't mean good hits. Then

52:08

I have a final question for you. Are they

52:10

ever going to get fingers right? What

52:13

is the fucking problem? Well, look, I think if

52:15

you use our model, and I'm not sure that you've

52:17

had some time with it, and

52:19

hopefully you'll get some time with it as well.

52:21

I think you'll see that we largely do. We've

52:24

trained on some things to try to make sure

52:26

that those outputs are of high quality. The

52:28

reality is it does matter. You

52:31

said kind of crap in, crap out.

52:33

Well, I would argue the exact opposite.

52:35

So quality in means quality out. And

52:38

so we've addressed those issues with the

52:40

model. Those aren't going to

52:42

be the hard things. Fingers and the

52:44

eyes. Fingers, eyes, hair, hairlines, smiles

52:47

and teeth. I think

52:49

we've largely solved those items. But

52:51

what we're always going to struggle

52:53

with is solving for the blank

52:55

page issue that customers have. And how

52:57

do you really get to authenticity? I think one of

52:59

the most interesting things to me, and I know we're

53:02

short on time, is how generative

53:04

AI comes directly in

53:06

conflict with another massive trend of the last

53:08

10 years plus, which is

53:10

authenticity. And how

53:13

does authenticity, how

53:15

am I portrayed? How am I

53:17

viewed? Am I positively represented? How

53:20

am I brought to bear? And what do I

53:22

see in the content that's presented to me? Body

53:26

representation, gender representation. How

53:29

do those things come to bear? And I think that

53:32

is still a world that that trend

53:34

is not going away. And I think

53:36

it's a world that Getty Images helps

53:38

our customers address. Although I'm

53:40

still going round after round with Amazon about

53:42

my fake books and the fake Kara Swishers

53:44

who look corny. They just look corny. Strange

53:47

Kara Swishers are all over Amazon. Go

53:50

check it out. Anyway, I really

53:52

appreciate it. This is a really important discussion, so people

53:54

understand what's happening to companies like yours. Well, I

53:56

appreciate you making time and thank you again for

53:58

having me. Thanks so much, Craig. On

54:05

with Kara Swisher is produced by

54:07

Christian Castro Roussel, Kateri Yocum, Jolie

54:09

Myers and Megan Burney. Special thanks

54:11

to Kate Gallagher, Andrea López Cruzado and

54:13

Kate Furby. Our engineers are

54:15

Rick Kwan and Fernando Arruda. And our

54:17

music is by Trackademicks. If you're already

54:19

following the show, you're authentic.

54:22

If not, be careful that you're not

54:24

being used as a battery. Go wherever

54:26

you listen to podcasts, search for On

54:28

with Kara Swisher and hit follow. Thanks

54:30

for listening to On with Kara Swisher

54:32

from New York Magazine, the Vox Media

54:34

Podcast Network and us. We'll be back

54:36

on Thursday with more. Have

54:44

a question or need how-to advice? Just

54:47

ask Meta AI. Whether you

54:49

want to design a marathon training program,

54:51

or you're curious what planets are visible

54:53

in tonight's sky, Meta AI

54:55

has the answers. It can

54:58

also summarize your class notes, visualize your

55:00

ideas, and so much more. It's the

55:02

most advanced AI at your fingertips.

55:06

Expand your world with Meta

55:08

AI. Now on

55:10

Instagram, WhatsApp, Facebook and Messenger.

55:18

If you've been enjoying this, here's just

55:20

one more thing before you go from

55:22

New York Magazine. I'm Choire Sicha. Recently,

55:25

our writer Rebecca Traister noticed something. Republican

55:27

and right wing women have been flourishing

55:29

and prospering in the last year. From

55:31

Marjorie Taylor Greene to Kristi Noem. They're

55:33

tough, they're chaotic, and they tend to

55:36

have really great teeth. They're also swirling

55:38

in the orbit of Donald Trump as

55:40

he seeks to seize the country with

55:42

an iron fist this fall. Rebecca wondered,

55:45

is this empowerment or are

55:47

they just Trump's handmaidens? I

55:49

brought her in to explain to all of us what's going on here.

55:52

Hello, Choire. I'm gonna start

55:54

asking really boring questions. Here's one.

55:56

Which of the many exciting recent

55:59

dustups incited you to want to talk

56:01

to and write about right-wing women

56:03

politicians? The first time I floated

56:05

a version of this was after Katie

56:07

Britt's post-State of the Union. But I

56:09

can't say that at that point I

56:12

thought like I want to do a

56:14

whole scope of Republican women. I was

56:16

just really into Katie Britt because it

56:18

was very old school in certain ways

56:20

like the kitchen, like it

56:22

was very white suburban, middle

56:25

class, mommy presentation. But it

56:27

was also like, gothic

56:30

horror. You know, there's blood

56:32

of the Patriots right here in

56:34

my kitchen with an apple. But it wasn't

56:36

enough. I wasn't going to write a whole

56:39

piece about Katie Britt. It might have been

56:41

the infomercial that South Dakota Governor Kristi Noem

56:43

cut for the dental work she'd had done

56:45

where I was like, what is happening with

56:48

the Republican women? Right.

56:51

Like, so Kristi Noem did sort

56:53

of a physical self-renovation to make

56:55

herself either more palatable or more

56:57

powerful. I'm not sure. When

56:59

she began, she had a very no-nonsense,

57:02

boxy, Pelosi-esque, Hillary sometimes haircut, that like

57:04

sort of choppy haircut. And then in

57:06

recent years, since she's become a little

57:08

bit of a right-wing star, one of

57:11

the things she's done is really changed

57:13

her look. Now, Donald Trump is very

57:15

open about how he feels about women,

57:17

how he evaluates women. And Noem has

57:20

clearly remade herself into somebody

57:22

who looks like somebody Donald Trump has expressed

57:24

physical appreciation for. So part of my question

57:26

in this piece is, what does political power

57:28

mean if you conform to those kinds of

57:30

aesthetic standards, but then in some way that

57:32

winds up diminishing the respect that the people

57:34

who set those standards have for you? Well,

57:36

your point is a great one that Trump

57:38

hangs over a lot of this. Both soliciting

57:42

him for a big job at the same

57:44

time as they know he has standards. But

57:47

also at the same time, Trump's big innovation

57:49

was like performance is power and they're enacting

57:51

their own narratives. They've all become Trumpy in

57:53

their own weird way. Yeah. And I have to

57:56

tell you that it's very frustrating for me because I write about politics

57:58

and I hate the thing where everything is about Trump. I

58:00

always want to make it not about Trump. But writing

58:02

about these women really challenged that conviction

58:05

in me because it is clear that

58:07

at least for some of them, so

58:09

many of the new behaviors they're enacting

58:11

are in response to

58:13

Trump, are about the single demand

58:16

in the Republican Party right now,

58:18

which is showing him loyalty, fealty to

58:20

this guy. Like, all these

58:22

people, Valentina Gomez, Laura Loomer, they're enjoying

58:24

the fruits of choice in career

58:26

and motherhood. Like, does this

58:28

mean feminism won? Ah. Well,

58:32

this is what's so dystopian and scary

58:34

about their project, is that they're all

58:36

doing these things which are really fascinating,

58:38

right? Marjorie Taylor Greene's lifting weights

58:40

in a video and not behaving

58:43

classically demure and all of this

58:45

sense of empowerment is absolutely

58:48

what feminism gave to women. OK, so great.

58:51

Here is its success. But also, the

58:55

party and the ideology that

58:57

these women are using these

58:59

feminist gains to promote is

59:02

openly dedicated to the rolling back

59:04

of those feminist gains. That's

59:07

Rebecca Tracer. You can read her

59:09

work on Republican Women and more

59:12

in your home in our glorious

59:14

print magazine and at nymag.com/lineup.
