Generative AI: Its Rise and Potential for Society

Released Tuesday, 14th November 2023

Episode Transcript


0:02

All right. Welcome everybody. You

0:05

guys excited? Here we go. Hello, hello.

0:08

Welcome to Smart

0:11

Talks with IBM, a podcast from Pushkin

0:13

Industries, iHeartRadio, and

0:15

IBM. I'm Malcolm Gladwell. This

0:18

season, we're continuing our conversations with new

0:20

creators, visionaries who are creatively

0:23

applying technology and business to drive change,

0:26

but with a focus on the transformative

0:28

power of artificial intelligence and

0:30

what it means to leverage AI as

0:32

a game-changing multiplier for

0:35

your business. Today's

0:37

episode is a bit different. I was recently

0:39

joined on stage by Dario

0:41

Gil for a conversation in front of a

0:43

live audience at the iHeartMedia headquarters

0:46

in Manhattan. Dario is the senior

0:48

vice president and director of IBM

0:50

Research, one of the world's largest

0:53

and most influential corporate research

0:55

labs. We discussed the rise

0:57

of generative AI, what it means

0:59

for business and society. He also

1:01

explained how organizations that leverage

1:03

AI to create value will dominate

1:06

in the near future. Okay, let's

1:09

get on to the conversation.

1:10

Hello,

1:12

everyone. Welcome. And

1:15

I'm here with Dr. Dario

1:17

Gil. And I wanted to say before

1:19

we get started, this is something I said backstage that I

1:22

feel very guilty

1:24

today because you're the, you

1:28

know, arguably one of the most important

1:30

figures in AI research in

1:32

the world. And we have taken you away from your job

1:36

for a morning. It's like if, you

1:38

know, Oppenheimer's wife in 1944 said,

1:42

let's go and have a little

1:44

getaway in the Bahamas. It's

1:46

that kind of thing. You know, what do you say to your wife?

1:49

I can't. We have got

1:51

to work on this thing. I can't tell you

1:53

about. She's like getting me out of Los Alamos. No, so

1:56

I do feel guilty. We've set

1:58

back AI research by.

1:59

I have about four hours here.

2:04

But I wanted to, you've been

2:06

with IBM for 20 years? 20 years,

2:08

yeah, this summer. So, and how old were

2:10

you when you, not to give away your age, but you were, how

2:12

old when you started? I was 28, okay,

2:15

yeah. So I wanna go back to your 28 year old self. If

2:18

I asked you about artificial intelligence,

2:21

I asked 28 year old Dario, what

2:24

does the future hold for AI? How quickly

2:26

will this new technology transform

2:29

our world, et cetera, et cetera? What

2:31

would 28 year old Dario say? Well,

2:33

I think the first thing is that even though

2:36

AI, as a field has been with us

2:38

for a long time, since the mid 1950s,

2:40

at that time, AI was not

2:42

a very polite word to say. Meaning

2:45

within the scientific community, people

2:47

didn't use sort of that term. They would have said things

2:50

like, maybe I do things related to machine

2:52

learning, right? Or statistical

2:54

techniques in terms of classifiers and so

2:56

on. But AI had a mixed

2:59

reputation, right? It had gone through different cycles

3:01

of hype, and there were

3:03

also moments of a

3:06

lot of negativity towards it because

3:08

of lack of success. And

3:11

so I think that that would be the first thing. We would probably say, like,

3:13

AI? What is that? Like,

3:16

respectable scientists are not working on AI defined

3:19

as such. And that really changed over

3:21

the last 15 years only, right? I would say

3:23

with the advent of deep learning over

3:25

the last decade is when that reentered

3:28

again, the lexicon of saying AI,

3:30

and that that was a legitimate thing to

3:32

work on. So I would say that that's the first thing I think

3:34

we would have noticed, in contrast to 20 years ago. Yeah.

3:37

So at what point in your 20 year tenure

3:40

at IBM, would you say you

3:42

kind of snapped into present kind

3:45

of wow mode? I

3:48

would say in the late

3:52

2000s when IBM was working

3:55

on the Jeopardy project, and

4:01

just seeing the demonstrations of what

4:03

could be done in question answering. Jeopardy

4:06

is literally this crucial

4:08

moment in the history of AI. You

4:10

know there had been a long and wonderful history

4:13

inside IBM on AI. So

4:16

for example like you know in terms of like these grand

4:18

challenges at the very beginning

4:20

of the field's founding, which is this famous Dartmouth

4:23

conference that actually IBM sponsored

4:26

to create. There was an IBMer

4:28

there called Nathaniel Rochester, and

4:31

there were a few others who right after

4:34

that they started thinking about demonstrations

4:36

of this field and for example they created

4:38

the first, you know, program to

4:40

play checkers and to demonstrate that you could

4:42

do machine learning on that.

4:45

Obviously we saw later in the 90s like

4:47

chess that was very famous example of that.

4:50

Deep Blue. With Deep Blue. Yeah. Right, and

4:52

playing with Kasparov and then but I think

4:54

the moment that was really those other

4:56

ones felt like you know kind of like brute force anticipating

4:59

sort of like moves ahead. But this aspect

5:01

of dealing with language

5:02

and question answering felt different

5:05

and I think for us

5:07

internally and many others, was a moment of saying

5:09

like wow you know what are the possibilities

5:11

here and then soon after that connected

5:14

to the sort of advancements in computing

5:16

and with deep learning the last decade

5:18

has just been an all out you know sort of like

5:20

front of advancements, and I just continue

5:23

to be more and more impressed and the last few

5:25

years have been remarkable too. Yeah. So

5:27

we'll ask you three quick

5:29

conceptual questions before we dig into it. Just

5:32

so I sort of get a we all get a feel

5:34

for

5:35

the shape of AI.

5:38

Question number one is where are we in

5:41

the evolution of this? So you

5:44

know

5:44

the obvious — we're all suddenly aware

5:47

of it, we're talking about it. Can you give

5:49

us an analogy about where we are in the kind

5:51

of likely evolution of this as a technology?

5:55

So I think we're in a significant

5:58

inflection point that feels like

6:00

the equivalent of the first browsers

6:03

when they appear and people imagine

6:06

the possibilities of the internet, or more than imagine,

6:09

experience the internet. The internet

6:11

had been around, right, for quite a few decades.

6:13

AI has been around for many decades. I

6:16

think the moment we find ourselves in is that people

6:18

can touch it. Before,

6:20

there were AI systems that were like behind the

6:22

scenes, like your search results or translation

6:25

systems, but they didn't have the experience

6:28

of like this is what it feels like to interact with this

6:30

thing. So that's what I mean.

6:32

I think maybe that analogy of the browser is appropriate

6:34

because all of a sudden it's like, whoa, you

6:36

know, this network of machines

6:38

and content can be distributed and everybody

6:40

can self-publish. And there was a moment

6:42

that we all remember. And I think that that

6:45

is what the world has experienced over the last

6:47

nine months or so.

6:49

But fundamentally also what is important is

6:51

that this moment is where the ease of

6:53

the number of people that can build and

6:56

use AI has skyrocketed.

6:59

So over the last decade,

7:01

you know technology firms that

7:03

had

7:03

large research teams could

7:05

build AI that worked really well, honestly.

7:08

But when you went out and said, hey,

7:10

can everybody use it — can a data science

7:13

team in a bank, you know, go and develop these

7:15

applications?

7:15

It was more complicated. Some

7:17

could do it, but the barrier of entry was

7:20

high. Now it's very different because

7:22

of foundation models and the implications

7:24

that that has for the moment when the technology

7:27

is being democratized. It's been democratized,

7:30

and frankly it works better. For

7:32

classes of problems like programming and other things

7:34

it's really incredibly impressive what it can do. So

7:37

the accuracy

7:37

and the performance of it is much better and

7:40

the

7:40

ease of use and the number of use cases

7:42

we can pursue is much bigger. So that democratization

7:45

is a big difference. But when you say — when you make an

7:47

analogy to the first browsers — if

7:49

you if we do another one

7:51

of these time travel questions back

7:53

at the beginning of the first browsers, it's

7:56

safe to say many of the potential

7:58

uses of the Internet, we

8:01

hadn't even begun, we couldn't even anticipate. Right.

8:04

So we're at the point where the future direction

8:06

is largely unpredictable. Yeah,

8:08

I think that that is right because it's such

8:10

a horizontal technology that

8:13

the intersection of the horizontal capability,

8:15

which is about expanding our productivity

8:17

and tasks that we wouldn't

8:19

be able to do efficiently without it, has

8:22

to marry now the use cases that

8:24

reflect the diversity of human experience, our institutional

8:26

diversity. So as more and more institutions

8:28

say, you know, I focus on agriculture, you know,

8:31

to be able to improve seeds, you know,

8:34

in these kinds of environments, they'll find their

8:36

own context in which that matters that the creators

8:38

of AI did not anticipate at the beginning. So

8:41

I think that that is then the fruit of surprises

8:43

will be like, why, we didn't even think that it could

8:45

be used for that. And also clever people

8:47

will create new business models associated

8:50

with that, like it happened with the internet, of course,

8:52

as well. And that will be its own

8:55

source of transformation and change in its own right.

8:57

So I think all of that is yet to unfold.

8:59

Right. What we're seeing is this catalyst moment of

9:02

technology that works well enough and it can be democratized.

9:05

Yeah. What next sort of conceptual

9:07

question, you know, we can loosely

9:09

understand or categorize

9:12

innovations in

9:14

terms of their impact on the kind of

9:18

balance of power between have and have

9:20

nots. Some innovations, you

9:23

know, obviously favor those who

9:25

already have and make the rich richer. Some,

9:29

it's a rising tide that lifts all boats. And some

9:33

are biased in the other direction. They close the gap

9:35

between. Is it possible

9:37

to say, to predict which

9:39

of those three categories AI might fall

9:42

into? It's a great question.

9:44

You know, a first observation

9:46

I would make on your first two categories

9:50

is that it will be both likely

9:52

be true that the use of

9:54

AI will be highly democratized, meaning the number of

9:57

people that have access to its power to

9:59

make improvements in terms of efficiency and so

10:01

on will be fairly universal. And

10:04

that the ones who are able

10:06

to create AI may

10:09

be quite concentrated. So if you

10:11

look at it from the lens of who

10:13

creates wealth and value over

10:15

sustained periods of time, particularly,

10:18

say, in a context like business, I

10:20

think just being a user of AI

10:23

technology is an insufficient strategy. And

10:26

the reason for that is, yes, you will get the immediate

10:29

productivity boost of just making API

10:31

calls and that will be a new baseline

10:33

for everybody. But you're

10:35

not accruing value in terms

10:37

of representing your data inside the AI

10:40

in a way that gives you a sustainable competitive advantage.

10:43

So I always try to tell people, don't

10:45

just be an AI user, be an AI

10:47

value creator. And I think that

10:49

that will have a lot of consequences

10:52

in terms of the haves and have-nots as an example

10:55

and that will apply both to institutions and

10:57

regions and countries, etc. So

11:00

I think it would be kind of a mistake to

11:02

just develop strategies that are just

11:04

about usage. Yeah. But

11:08

to come back to that question for a moment, to give you a specific,

11:10

suppose I'm an industrial

11:13

farmer in Iowa with $10 million

11:17

of equipment and I'm comparing

11:19

it to a subsistence farmer somewhere

11:22

in the developing world who's got a cell phone. Over

11:26

the next five years, whose

11:29

well-being rises by a greater amount?

11:32

Yeah, I think,

11:34

I mean, it's a good question, but it might be

11:36

hard to do a one-to-one sort of like attribution

11:39

to just one variable in this case, which is AI.

11:43

But again, provided that you have access

11:45

to a phone and some kind

11:47

to be able to be connected, I

11:49

do think, so for example, in that context, we've

11:52

developed, we've done work with NASA as an

11:54

example to build geospatial models

11:57

using some of these new techniques. And I

11:59

think, for example, our ability to do

12:01

flood prediction. I'll tell you an advantage of why it would

12:03

be a democratization force in that context.

12:06

Before, to build a flood model based

12:08

on satellite imagery was

12:11

actually so onerous and so complicated and difficult

12:13

that you would just target to very specific regions.

12:16

And then obviously, countries prioritize their own, right?

12:18

But what we've demonstrated is actually you

12:20

can extend that technique to have like global coverage

12:23

around that. So in that context, I would say

12:25

it's a force towards democratization that everybody

12:27

sort of would have access if you have some kind of connectivity.

12:30

That Iowa farmer might have a flood model.

12:33

The guy in the developing world definitely

12:35

didn't. And now he's got a shot at getting one. Yeah, but now

12:37

he has a shot at getting one. So there's aspects of

12:39

it that so long as we provide connectivity and

12:42

access to it, that there can be democratization

12:44

forces. But I'll give you another example that can

12:47

be quite concerning, which is language, right?

12:49

So there's so much language in

12:53

English. And there

12:55

is sort of like this reinforcement loop

12:57

that happens that the more you concentrate because

12:59

it has obvious benefits for global communication and

13:01

standardization, the more you can enrich

13:04

base AI models based on that capability.

13:07

If you have very resource-scarce languages,

13:10

you tend to develop less powerful

13:12

AI with those languages and so on. So

13:15

one has to actually worry and

13:17

focus on the ability to

13:19

actually represent, in that case,

13:22

language as a piece of culture, also in

13:25

the AI such that everybody can benefit

13:27

from it too. So there's a lot

13:29

of considerations in terms of equity about

13:32

the data, the data sets that we accrue,

13:35

and what problems are we trying to solve. I mean, you

13:37

mentioned agriculture or health care and so on. If

13:39

we only solve problems that are related

13:41

to marketing, as an example, there will be a less

13:43

rich world in terms of opportunity than

13:46

if we incorporate many, many other, broader sets

13:48

of problems. Yeah. Who do you

13:50

think, what do you think are the biggest impediments

13:53

to the adoption of AI

13:57

as you think AI ought to be adopted? But

14:00

look, what are the sticking points that you would... Look,

14:03

in the end, I'm gonna give a non-technological

14:05

answer as a first one, has to do with workflow,

14:07

right? So even if the technology is very

14:11

capable, the organizational change

14:13

inside a company to incorporate into the natural

14:15

workflow of people on how we work is,

14:19

it's a lesson we have learned over the last decade, it's hugely

14:22

important. So there's a lot

14:24

of design considerations, there's

14:26

a lot of how do people want to work,

14:29

right? How did they work today, and what is the

14:31

natural entry point for AI? So that's like number

14:33

one. And then the second one is,

14:36

for the broad value creation

14:38

aspect of it, is the understanding inside

14:41

the companies of how

14:43

you have to curate and create data

14:46

to combine it with external data such

14:48

that you can have powerful AI models

14:51

that actually fit your need. And that

14:53

aspect of what it takes to actually

14:56

create and curate the data for these modern

14:58

AI, it's still a work

15:01

in progress, right? I think part

15:03

of the problem that happens very often when I talk

15:05

to institutions is that they say, AI, yeah, yeah,

15:07

yeah, I'm doing it, I've been doing it for

15:10

a long time. And the reality

15:12

is that that answer can sometimes be a little of a

15:14

cop-out, right? It's like, I know you were

15:16

doing machine learning, you were doing some

15:18

of these things, but actually the latest

15:20

version of AI what's happened with foundation

15:22

models, not only is it very new,

15:24

it's very hard to do.

15:26

And honestly, if you haven't been assembling

15:29

very large teams and spending hundreds of millions

15:31

of dollars on compute, you're probably not

15:33

doing it, right? You're doing something else

15:36

that is in the broad category. And

15:38

I think the lessons about what it means

15:40

to make this transition to this new wave is

15:42

still in early phases of understanding. So

15:44

what would you say, I wanna give you a couple of examples

15:47

of people with kind of

15:49

real world, in real world positions of responsibility.

15:52

Imagine I'm sitting right here. So imagine that

15:54

I am the president of a small liberal

15:56

arts college. And I come to you and I say,

15:58

Dario, I keep hearing about AI. AI, my

16:01

college has, you know, I don't make

16:03

it, you know, I'm, my, my, I'm not,

16:05

I'm making this much money, if that, every year. My enrollment's

16:08

declining, I

16:10

feel like this maybe is an opportunity. What is the

16:12

opportunity for me? What would

16:14

you say? Um, so

16:17

it's probably in a couple of segments around that, right?

16:20

One has to do with, well, what

16:22

are the implications of this technology inside

16:25

the institution itself, inside the college,

16:28

and how we operate and can

16:30

we improve, for example, efficiency, like if

16:32

you're having very low levels of,

16:35

of sort of margin to be able to reinvest is,

16:37

you know, you run IT, you

16:39

run, you know, infrastructure,

16:42

you run many things inside the college. What are the

16:44

opportunities to increase the productivity or

16:46

automate and drive savings such that you

16:49

can reinvest that money into the mission

16:51

of education, right? As an example. Number one is

16:53

operational efficiency. Operational efficiency

16:56

is a big one. I think the second one is within

16:58

the context of the college is implications for the educational

17:01

mission on its own, right? How will, you

17:03

know, how does the curriculum need to evolve

17:05

or not? What are acceptable use policies

17:08

for some of these AI? I think we've all read

17:10

a lot about like what can happen in terms of exams

17:12

and, and so on and cheating and not cheating

17:14

or what are actually the positive elements of it in

17:17

terms of how curriculum should be developed and professions

17:20

sustain around that. And then there's another

17:22

third dimension, which is the outward oriented element

17:24

of it, which is like prospective students, right? So

17:27

which is frankly speaking, a big use

17:29

case that has happened right now, which in the broader

17:31

industry is called customer care or client care

17:33

or citizen care. So in this question will be education

17:36

like, you know, hey, are you reaching the right

17:38

students around that that may

17:40

apply to the college? How can you

17:42

create for them, for example, an environment to interact

17:44

with the college, answering questions — that could be a chatbot

17:47

or something like that to learn about it and

17:49

personalization? So I would say there's

17:51

like at least three lenses with which I would

17:53

give advice. The second

17:55

part of the second one, because it's really interesting. So

17:58

I really

17:59

I can't assign an

18:01

essay anymore, can I?

18:03

Can I assign an essay? Yeah. Can

18:06

I say, write me a research paper and come

18:08

back to me in three weeks? Can I do that anymore? I think

18:10

you can. How do I do that? And then

18:12

you can then. Look, there's

18:14

two questions around that. I think

18:17

that if one goes and explains

18:19

in the context like, why are we here? Why in

18:21

this class? What is the purpose of this? And

18:25

one starts with assuming like an element

18:27

of, like, decency in people, that people are there, like,

18:29

to learn and so on. And you just give a disclaimer,

18:32

look, I know that one option you have

18:34

is like just put the essay question and

18:36

click go and give an answer. But

18:39

that is not why we're here. And that is not

18:41

the intent of what we're trying to do. So first I would start

18:43

with the norms

18:46

of intent and decency and appeal

18:49

to those as step number one. Then

18:51

we all know that there will be a distribution of use cases

18:54

that people like that will come in one ear and come

18:56

out of the other and do that. So

18:58

for a subset of that, I think the

19:01

technology is going to evolve in such a way that we

19:03

will have more and more of the ability to discern

19:07

when that has been AI generated and created. It

19:10

won't be perfect. But there's some

19:12

elements that you can imagine inputting the essay

19:15

and you say, hey, this is likely to be generated

19:17

around that. And for example, one way you

19:19

can do that just to give you an intuition, you could just have an

19:22

essay that you write with pencil and

19:24

paper at the beginning. You get a

19:26

baseline of what your writing is like. And

19:28

then later when you generate

19:31

it, there will be obvious differences around

19:33

what kind of writing has been generated on the other.

19:37

Everything you're describing

19:39

makes sense, but in this

19:42

respect at least, it seems to greatly complicate

19:44

the life of the teacher. Whereas the other two use

19:46

cases seem to kind of clarify

19:50

and simplify the role, suddenly

19:53

reaching prospective

19:55

students sounds like they can do that much more

19:58

kind of efficiently — a lot like administration

20:00

costs, but the teaching thing is

20:02

tricky.

20:03

Well, until we develop

20:05

the new norms, right? I mean, again,

20:07

I know it's an abused analogy, but calculators,

20:10

we dealt with that too, right? And

20:12

it says, well, calculator, what is the purpose of math,

20:14

how are we going to do this, and so on. And we

20:17

have. Can I tell you my dad's calculator story? Yes,

20:19

please. My father taught mathematics at

20:21

the University of Waterloo

20:23

in Canada. In the 70s,

20:25

when people started to get pocket calculators,

20:28

his students demanded that they be able to

20:30

use them. And he said no, and they took

20:32

him to the administration, and he lost. So

20:35

he then changed completely

20:38

threw out all of his old exams, introduced new

20:40

exams where there was

20:42

no calculation. It was all

20:46

like, figure out the problem on a conceptual

20:48

level and describe it to me. And they

20:51

were all — the students were deeply unhappy that he

20:53

had made their lives more complicated. But to

20:56

your point, I mean, the

20:59

result was probably a better education. He

21:02

just removed the element that

21:04

they could game with their pocket calculators.

21:06

I suppose it's a version of. I think it's a version

21:08

of that. And I think they will develop the equivalent

21:10

of what your father did. And I think people say, you know what, it's

21:12

like these kinds of things, everybody's doing it generically,

21:15

and none of it has any meaning. Because all

21:17

you're doing is pressing buttons. And the intent of

21:19

this was something, which was to teach you how to write or

21:21

to think or something. There may be a variant of

21:23

how we do all of this. I mean, obviously, some version

21:26

of that that has happened is like, OK, we're all going

21:28

to sit down and do it with pencil and paper, and no computers

21:30

in the classroom. But there'll be other variants of creativity

21:33

that people will put forth to say, you know what, that's

21:36

a way to solve that problem, too. But this is interesting

21:38

because, to stay on this analogy,

21:41

we're really talking about a profound

21:44

rethinking, just using a college

21:46

as an example, a real profound rethinking

21:49

of the way. There's no part

21:51

of this college that's unaffected by AI.

21:55

In one case, I've made everyone's job

21:58

easier. In one case, I'm

22:00

asking us to really rethink from the ground up what

22:03

teaching means. In another

22:05

case, I've automated systems that I didn't think of.

22:07

I mean, it's like, that's right. That's

22:10

a lot to ask someone who got

22:12

a PhD in medieval language literature 40 years

22:15

ago. Yeah, but

22:17

I'll tell you a positive development that I'm seeing

22:19

in the sciences around this, which is you're

22:22

seeing, as you see more and more examples

22:25

of applying AI technology within

22:27

the context of historians, as an example.

22:31

You have archives and you have all

22:33

these books and being able to actually help

22:35

you as an assistant around that,

22:37

but not only with text now, but with diagrams.

22:41

And I've seen it in anthropology too, and

22:44

archeology with examples of engravings

22:46

and translations and things that can happen. So

22:49

as you see in diverse fields, people

22:52

applying these techniques to advance on how

22:54

to do physics or how to do chemistry. They

22:56

inspire each other, right? And they say,

22:58

how does it apply actually to my area? So

23:01

once, as that happens, it becomes less of

23:03

a chore of like, my God, how do I have to deal

23:06

with this? But actually it's triggered by curiosity.

23:09

It's triggered by, there'll be like

23:11

faculty that will be like, you know what, let me explore

23:13

what this means for my area. And

23:16

they will adapt it to the local context, to

23:18

the local language and

23:20

the profession itself. So I see

23:22

that as a positive vector that is

23:24

not all going to feel like homework. It's

23:26

not going to feel like, oh my God, this is so overwhelming.

23:29

But rather to be very practical to see what works,

23:31

what have I seen others to do that is inspiring,

23:34

and what am I inspired to do? You know, what, what

23:36

is, how is this going to help my career? I think

23:38

that that's going to be an interesting question for, you

23:41

know, those faculty members, for the students and professionals.

23:43

Yeah. Sorry, I'm going to stick with this

23:45

example a lot because it's really interesting. I'm curious

23:47

following up on what you just said, that one

23:50

of the most persistent critiques of

23:53

academia, but also of many, of many

23:55

corporate institutions in

23:57

recent years, has been siloing.

24:00

Right? Different parts of the

24:02

organization are going off on their

24:04

own and not speaking to each other. Is

24:08

a real potential benefit

24:11

to AI the kind of breaking

24:13

down, a simple tool for breaking

24:15

down those kinds of barriers? Is that a

24:17

very elegant way of sort of saying

24:20

what we are going to do? I really think so. I was actually

24:22

just having a conversation with a provost,

24:24

very much on this topic very recently, exactly

24:27

on that, which is all

24:30

this appetite to collaborate across disciplines.

24:32

There's a lot of attempts towards

24:34

our goal, like creating interdisciplinary centers,

24:37

creating dual degree programs or dual appointment

24:39

programs. But actually, in a

24:42

lot of progress in academia,

24:44

happens by methodology too.

24:46

When some methodology gets adopted,

24:49

I mean the most famous example

24:51

of that is a scientific method, as an example of

24:53

that. But when you have a methodology that

24:56

gets adopted, it also provides a way

24:58

to speak to your colleagues across

25:00

different disciplines. And I think what's happening

25:03

in AI is linked to that. That within

25:05

the context of the scientific method, as an example,

25:08

the methodology about

25:11

which we do discovery,

25:13

the role of data, the role of these neural

25:15

networks of how we actually find proximity

25:17

to concepts to one another, is actually

25:20

fundamentally different than

25:22

how we traditionally applied it. So

25:25

as we see across more professions, people

25:27

applying this methodology is also

25:29

going to give some element of common language

25:32

to each other. And in fact,

25:35

in this very high dimensional representation

25:37

of information that is present in neural networks,

25:40

we may find amazing adjacencies

25:42

or connections of themes and topics in

25:45

ways that the individual practitioners cannot

25:47

describe, but yet will be latent

25:49

in these large common neural networks. We

25:52

are going to suffer a little bit from causality,

25:54

from the problem of like, hey, what's the root cause

25:56

of that? Because I think one

25:58

of the unsatisfying

25:59

aspects that this methodology

26:02

will provide is it may give you answers

26:04

without giving you good reasons for

26:06

where the answers came from and

26:09

then there will be the traditional process of discovery

26:11

of saying if that is the answer what are the

26:13

reasons so we're gonna have to

26:16

do this sort of hybrid way of

26:18

understanding the world but I do think

26:20

that common layer of AI is a powerful

26:22

new thing. Yeah — a

26:24

couple of random questions that come to mind as you talk.

26:27

In the writers' strike that

26:29

just ended in Hollywood one of the sticking points

26:31

was how the studios and writers

26:33

would treat AI generated content

26:36

right would writers get credit

26:38

if their material was somehow the source

26:41

for a but more broadly

26:44

did the writers need protections against the use

26:46

of — I could go on, you know. Yeah, you'll be familiar

26:48

with all of this. Had you been — I don't know

26:50

whether you were but had either

26:53

side called you in for advice

26:55

during that

26:56

the writers — had the writers called you and said, Dario,

26:59

what should we do about AI and

27:01

how should we — how should that be reflected

27:04

in our contract negotiations what would you

27:06

have told them?

27:08

The way I think about that is that I would

27:11

divide it into two pieces. First is what's

27:13

technically possible right and

27:15

anticipate scenarios like you

27:17

know what can you do with voice cloning for example

27:20

you know now for example it is possible there's

27:23

been dubbing right

27:25

like let's just take that topic right around the world

27:27

there was all these folks that would dub

27:29

people in other languages well now

27:32

you can do these incredible renderings.

27:34

I mean I don't know if you've seen them where you

27:36

know, you match the lips, it's your original

27:38

voice but speaking any language that you want as

27:40

an example. So obviously that has a set of

27:42

implications around that I mean just to give an example so

27:44

I would say create a taxonomy that

27:47

describes technical capability that we know

27:49

of today and applications

27:52

to the industry and two examples of

27:54

like hey you know I could film you for five minutes and I

27:56

could generate two hours of content of you and I don't

27:58

have to you know that And if you get paid by

28:00

the hour, obviously I'm not paying you for that other thing.

28:03

So I would say technological capability

28:05

and then map with their expertise consequences

28:08

of how it changes the way they work or

28:10

the way they interact or the way they negotiate

28:12

and so on. So that would be one element of

28:14

it. And then the other one is like a non-technology

28:17

related matter, which is an element of almost

28:19

of distributive justice. It's like, who deserves what,

28:21

right? And who has the power to get what? And

28:25

then that's a completely different discussion. That

28:27

is to say, well, if this is the scenario of what's possible,

28:31

what do we want and what are we able

28:33

to get? And I think that that's a different

28:35

discussion, which is, well, that's life. Which

28:37

one do you do first? I

28:40

think it is very helpful to have

28:42

an understanding of what's possible

28:44

and how it changes the landscape as

28:47

part of a broader discussion,

28:50

right, and a broader negotiation. Because

28:53

you also have to see the opportunities because there

28:55

will be a lot of ground to say, actually,

28:58

you know, if we can do it in this way

29:01

and we can all be that much more efficient in

29:03

getting this piece of work done or this filming done,

29:06

but we have a reasonable agreement about

29:08

how we both sides benefit from it,

29:11

right? Then that's a win-win for

29:13

everybody. Yeah. Right? So

29:16

that's a, I think that would be a golden triangle, right? Here's my

29:18

reading and I would like you to correct me if I'm wrong

29:20

and I'm likely to be wrong. When

29:23

I looked at that strike, I said, if they're worried

29:25

about AI, the writers

29:27

are worried about AI. That seems silly.

29:30

It should be the studios who are worried about

29:32

the economic impact of AI. Doesn't in

29:34

the long run AI put the studios

29:36

out of business long before it puts the writers out of business?

29:39

I only need the studio because the cost of

29:41

production are as high as the sky

29:44

and the cost of production are overwhelming. And

29:47

whereas if I don't,

29:48

if I have a tool which brings, introduces

29:51

massive technological efficiencies to the production

29:54

of movies, then I don't need — why would we need a studio?

29:57

Why would they be the scared ones? Maybe

29:59

you need, like, a different kind of studio. Or a different kind

30:01

of studio. A different kind of studio. But I mean,

30:04

in the strike, the frightened

30:08

ones were the writers and not, you

30:10

know, the studios. Wasn't

30:12

that backwards? I haven't

30:15

thought about it.

30:16

It can be, but the implications of it, it goes

30:18

back to what we were talking before. The implications, because

30:20

they are so horizontal, it is right to

30:22

think about it like what does it do to the studios as well,

30:24

right? Yeah.

30:26

And you know, the reason why that happens

30:28

is that

30:29

it's the order of either negotiations

30:32

or who first got concerned about

30:35

it and did something about it, right? Which is

30:37

in the context of the strike. You

30:39

know, I don't know what the equivalent conversations are

30:41

going inside the studio and whether they have a war room

30:43

saying what this is going to mean to us, right? But

30:46

it doesn't get exercised through a strike, but

30:48

maybe through a task force inside, you know, the

30:50

companies about what are they going to do, right? And

30:53

to go back to your thing, you said the first thing you do is you

30:55

make a list of what technological capabilities are, but

30:58

don't technological capabilities change every,

31:00

I mean, you're

31:02

racing ahead so fast. So you can't, can

31:05

you have a contract? Sorry

31:07

for getting a little weeds here, but this is interesting. Can

31:09

you have a, you can't have a five year contract

31:12

if the contract is based on an assessment of technological

31:15

capabilities in 2023, because by the time it gets to 2028, it's

31:18

totally

31:22

different, right?

31:24

Yeah. So, you know, I mean, where

31:26

I was going is like there are some

31:28

abstractions around that is like, you

31:31

know, what can we do with my image,

31:33

right? Like if I generally get the category

31:35

that my image can be reproduced, generated

31:37

content and so on, it's like, let's talk about

31:40

the abstract notion about who has rights to that

31:42

or do we both get to benefit from that? If

31:45

you get that straight, yes, the nature

31:47

of how the image gets altered, created at

31:49

something will change underneath, but the

31:51

concept will stay the same. And so

31:53

I think is what's important is to get the categories right.

31:56

Yeah. Yeah. If

31:58

you had to, if you just think about the biggest technological

32:03

revolutions of the post-war

32:05

era, the last 75 years. We can

32:08

all come up with a list. Actually, it's really fun

32:10

to come up with a list. I was thinking about this when we were,

32:13

you

32:13

know,

32:14

containerized shipping is my favorite. The

32:18

Green Revolution, the internet.

32:22

Where is AI in that list? So

32:26

I would put it first. In that context

32:28

that you put forth over since World

32:30

War II, undoubtedly

32:33

computing as a category is one

32:35

of those trajectories that

32:37

has reshaped our world. And

32:40

I think within computing, I

32:42

would say the role that

32:44

semiconductors have had has been

32:46

incredibly defining. I would say AI

32:49

is the second example of

32:51

that as a core architecture that

32:54

is going to have an equivalent level of impact.

32:57

And then the third leg I would put to that equation

32:59

would be quantum and quantum information. And

33:01

that's sort of like I like to summarize that the future

33:03

of computing is bits, neurons, and qubits. And

33:06

it's that idea of high precision computation,

33:08

the world of neural networks and artificial

33:10

intelligence and the world of quantum. And

33:13

the combination of those things is going to

33:15

be the defining force of the next hundred years in

33:18

that category of computing. But it makes the list

33:20

for sure. If it's that high up

33:22

on the list, this is a total hypothetical,

33:25

if you were starting over, if you're

33:28

starting IBM right now, would

33:30

you say, oh, our AI operations actually should be

33:34

way bigger? Like how many thousands

33:36

of people working for you? So within

33:38

the research division, it's about

33:40

like 3,500 scientists. In a perfect

33:42

world, would you, if it's that big, isn't that

33:45

too small? I think blue?

33:47

Yeah. Well, that's like in the research division. I

33:49

mean, IBM overall. I know, I know. There's thousands

33:52

of people working on that. But I mean, like, so

33:54

starting from first, so we have a, we've

33:57

got a technology that you're ranking

33:59

with compute and, you

34:01

know, up there with — well,

34:08

what I'm basically asking is, are we underinvested

34:10

in this you know

34:12

but so yeah it's a good

34:14

question so like what I would say is that I

34:16

think we should segment how many

34:18

people do you need on the creation

34:21

of the technology itself and what is the

34:23

right size of research and engineers and compute

34:26

to do that and how many people do you

34:28

need in the sort of application

34:31

of the technology to create better products

34:34

to deliver services and consulting and

34:36

then ultimately to diffuse it through you know

34:38

sort of all spheres of society and

34:41

the numbers are very different and that is not different

34:43

than anywhere else I mean I mean if you give

34:45

examples of since you were talking about

34:47

in the context of World War II, how many people does

34:49

it take to create you know an atomic

34:52

weapon as an example it's a large number

34:54

I mean it wasn't just Los Alamos there's a lot of

34:56

people involved. Okay, it's a large number, but it

34:58

wasn't a million people right yeah

35:01

so so you could have highly concentrated

35:03

teams of people that with

35:05

enough resources can do extraordinary scientific

35:08

and technological achievements and that's

35:10

always by definition is going to be a fraction of

35:12

like 1% compared to the total

35:14

volume that is going to be required to then deal with it. Yeah,

35:17

but the application side is infinite

35:19

almost that's exactly so that is where

35:21

like in the end the bottleneck really is so

35:24

with you know thousands of

35:26

scientists and engineers you can create world-class

35:29

AI right and so

35:31

no you don't need 10,000 to be able to create

35:33

the large language model and the generative model, but you need

35:36

thousands, and you need, you know, a very

35:39

significant amount of computing and data. You need that. The

35:42

rest is, okay, I build

35:44

software I build databases or I build a

35:47

software product that allows you to do inventory

35:49

management or I build you know a photo

35:51

editor and so on now

35:53

that product incorporating

35:55

the AI

35:55

modifying expanding it and so

35:58

on well now you're talking about the

36:00

entire software industry. So now you're talking about

36:02

millions of people, right, who are

36:04

necessary, you know, who are required to

36:06

bring AI into their product. Then you go

36:09

a step beyond the technology creators

36:11

in terms of software and you say, well, okay,

36:13

now what? The skills to help organizations

36:15

go and deploy it in the Department

36:18

of, you know, the Interior, right? And then

36:20

I said, okay, well, now you need like consultants

36:23

and experts and people to work there,

36:25

to integrate it into the workflow. So now you're

36:27

talking about the many tens of millions of people

36:29

around them. So I see it as these concentric

36:32

circles of it. But to some degree

36:34

in many of these core technology areas, just

36:36

saying like, well, I need a team of like a hundred thousand

36:38

people to create like AI or a, or

36:40

a new transistor or a new quantum computer. It's

36:43

actually a diminishing return, right? In the end,

36:45

like many people connecting with each other is very

36:47

difficult. But on the application side,

36:49

I was just thinking about to go back to our, our, our

36:53

example of that college, just

36:55

the task of sitting down with a

36:58

faculty and working with them

37:00

to reimagine what they do with

37:03

this, with these new set of tools in mind,

37:05

with the understanding that the students coming in are probably

37:07

going to know more about it than they do. That

37:10

alone, I mean, that's a, that is a Herculean

37:13

people problem. It's a people problem.

37:16

Yeah. That's why I started in terms of the barriers of

37:18

adoption of that. I mean, the context of IBM, an

37:20

example, that's why we have

37:22

a consulting organization, IBM consulting

37:24

that complements IBM technology. And

37:27

the IBM consulting organization has over 150,000

37:29

employees because of this question, right?

37:33

Because you have to sit down and you say, okay, what

37:35

problem are you trying to solve? What is

37:37

the methodology we're going to do? And here's the technology

37:40

options that we have to be able to bring into the table.

37:42

In the end, the adoption across

37:46

our society will be limited by

37:48

this part. The technology is going

37:50

to make it easier, more cost-effective

37:52

to implement those solutions.

37:55

But you first have to think about what you want to do,

37:58

how you're going to do it, and how you're going to now

38:00

bring it into the life of, in this context, this faculty

38:02

member or you know the administrator

38:05

and so on in this college. That Hollywood

38:08

notion, I thought, was absolutely — I

38:12

thought really interesting that in a Hollywood

38:14

strike you have to have this conversation about a distributive

38:17

justice conversation, about how do we — that

38:20

it's a really hard conversation right to

38:22

have. Anyway, so this brings me to

38:24

my next point which is that you we were talking backstage you have

38:28

you have two daughters one

38:30

in college one about to go to college that's right so

38:33

they're both science-minded so

38:35

tell me about the conversations you you

38:38

have with your daughters. You have a unique conversation

38:40

with your daughters because your conversation your

38:43

advice to them is is

38:45

influenced by what you do for a living yes

38:48

it's true so did

38:50

you warn your daughters away from certain fields

38:53

did you say whatever you do don't

38:55

be you know no

38:57

no that's not my style I mean for

38:59

me not I try not to be like you know preachy

39:02

about that so for me it

39:04

was just about showing by example things

39:06

I love right and yes I care about

39:09

and then you know bringing them to the lab and seeing

39:11

things, and then the natural conversations about things I'm

39:14

working on or interesting people I meet

39:16

so so to the extent that they have chosen

39:18

that and obviously this has an influence on them

39:21

it has been through seeing it you know

39:24

perhaps through my eyes, right — seeing what I

39:26

do and that I like my profession right but one of your

39:28

daughters you said is thinking

39:30

that she wants to be a doctor but

39:33

being a doctor in a post AI world

39:35

is surely a very different proposition than

39:37

being a doctor in a pre AI world do

39:39

you think have you have you tried to prepare

39:42

her for that difference have you

39:44

explained to her what you think will happen to this profession

39:46

she might enter yeah I mean not

39:49

in like you know incredible amount of detail

39:51

but but but yes at the level

39:53

of understanding what is changing like

39:56

this lens of the information lens with

39:58

which you can look at the world what is possible

40:02

and what it can do. Like what is our role

40:04

and what is the role of the technology and how that shapes

40:06

out that level of abstraction for sure.

40:09

But not at the level of like don't be a radiologist,

40:12

you know, because this is what happens. I was gonna

40:14

say, if you're unhappy with your current job, you

40:16

could do a podcast called Parenting Tips with

40:18

Dario, which is just an

40:20

AI person, gives you advice

40:22

of what your kids should do based on exactly

40:24

this, like should I be a radiologist? Dario,

40:27

tell me. I'm sorry guys, it seems to me like

40:29

a really important question. Yeah. Let

40:31

me ask this question in a more, I'm joking, but in a more

40:34

serious way. Surely

40:36

it would, I don't mean to use your

40:38

daughter as an example, but let's imagine we're giving

40:40

advice to someone who wants to enter medicine. A

40:43

really useful conversation to have is, what

40:46

are the skills that will

40:48

be most prized in

40:51

that profession 15 years

40:53

from now? And are they different from the skills that

40:55

are prized now? How would you answer that question?

40:58

Yeah, I think for example,

41:01

this goes back to how is the scientific

41:04

method in this context like the practice

41:06

of medicine gonna change. I think

41:08

we will see more changes on how we practice the

41:10

scientific method and so on as a consequence

41:13

of what is happening with the world

41:15

of computing and information, how we represent

41:18

information, how we represent knowledge, how

41:20

we extract meaning from knowledge as a

41:22

method than we have

41:24

seen in the last 200 years. So

41:26

therefore, what I would strongly encourage

41:29

is not about like, hey, use these tools for doing

41:31

this or doing that, but in the curriculum

41:33

itself, in understanding how we do

41:35

problem solving in the age

41:38

of data and data representation and so

41:40

on, that needs to be embedded in the curriculum

41:43

of everybody that is, I

41:45

would say, actually quite horizontally, but certainly

41:47

in the context of medicine and scientists and so

41:49

on, for sure. And to

41:51

the extent that that gets ingrained, that

41:54

will give us a lens that no matter what

41:56

specialty they go with in medicine,

41:58

they will say, actually.

41:59

The way I want to be able to tackle improving

42:02

the quality of care, the way to do that,

42:04

in addition to all the elements that we

42:06

have practiced in the field of medicine,

42:09

is this new lens. And are we representing

42:11

the data the right way? Do we have the right tools

42:13

to be able to represent that knowledge? Am

42:16

I incorporating that in my own, sort

42:18

of with my own knowledge in a way that gives me better

42:20

outcomes, right? Do I have the rigor of benchmarking

42:24

too and quality of the results?

42:26

So that is what needs to be incorporated. How?

42:29

Well, in a perfect world, if

42:32

I asked you to, your team, to

42:35

rewrite curriculum for American Medical Schools, how

42:38

dramatic a revision is

42:40

that? Are we tinkering with 10% of the curriculum

42:42

or are we tinkering with 50% of it? I

42:46

think there would be a subset

42:49

of classes that is about the method, the

42:51

methodology, what has changed, like have these

42:53

lens of it to understand. And

42:56

then within each class, that

42:59

methodology will represent something that

43:01

is embedded in it, right?

43:03

So it will be substantive,

43:06

but doesn't mean

43:07

replacing the specialization and

43:10

the context and the knowledge of each domain. But

43:12

I do think everybody should have sort

43:14

of a basic knowledge of the horizontal, right?

43:17

What is it? How does it work? What tools

43:19

do you have? What is the technology? And like,

43:21

you know, what are the do's and don'ts around

43:23

that? And then every area you say,

43:25

and you know, that thing that you learn, this is how it applies

43:27

to anatomy.

43:28

And this is how you know, how it applies

43:30

to, you know, radiology if you're studying that or

43:33

this is how you apply, you know, in the context of discovery,

43:36

right, of cell structure. And this is how we can use

43:38

it or protein folding. And this is

43:40

how it does. So that way you'll

43:42

see a connecting tissue throughout

43:44

the whole thing. Yeah. I mean, I

43:46

would add to that, because I was thinking

43:48

about this, that it's

43:52

also this incredible opportunity to do what

43:54

doctors are supposed to do but don't

43:56

have time to do now, which is

43:58

they're so consumed with

43:59

figuring out what's

44:02

wrong with you, that they have little

44:04

time to talk about the implications of

44:06

the diagnosis. Well, we really wondered

44:09

if we can

44:10

free them of some of the burden of

44:12

what is actually quite a prosaic question of what's wrong

44:15

with you, and leave the hard human

44:17

thing of, let me, should you be

44:19

scared or hopeful, should

44:21

you, what do you need to do?

44:24

Let me put this in the context of all the patients I've seen,

44:26

that conversation, which is the most important one, the

44:28

one that seems to me, so

44:31

like if I had to, I would add, if we're

44:33

reimagining the curriculum of

44:35

med school, I'd like, with

44:37

whatever, by the way, very little time,

44:40

maybe we have to add two more years to med school. But

44:43

like a whole. That's not gonna be popular. That's not

44:45

gonna be popular. But the whole thing about bringing

44:48

back the human side of, Yeah.

44:51

you know, now, if I can give you 10 more

44:54

minutes, how do you use that 10 more minutes? But

44:56

in that, in that reconceptualization

44:59

that you just did, is what we should be doing

45:01

around that, because I think the debate as

45:03

to like, well, am I gonna need doctors

45:05

or not, is actually not a very useful debate. But

45:08

rather, this other question is, how is your

45:10

time being spent? What problems are you getting

45:12

stuck? I mean, I generalize this by

45:14

like the obvious observation that if you look

45:16

around in our professions, in our daily lives,

45:19

we have not run out of problems to solve. So

45:21

as an example of that is, hey, if I'm spending

45:23

all my time trying to do diagnosis, and I could do

45:25

that 10 times faster, and it allow me actually

45:28

to go and, you know, and take

45:30

care of the patients and all the next steps of what

45:32

we have to do about it, that's probably a trade

45:34

off that a lot of doctors would take, right?

45:37

And then you say, well, you know, to what degree does

45:39

it allow me to do that? And I can do these other

45:41

things. And these other things are critically important

45:44

for my profession around that. So

45:46

when you actually become less abstract,

45:48

and like we get past the futile

45:50

conversation of like, oh, there's no more jobs,

45:53

and AI is gonna take it all of it, which is kind of nonsense,

45:55

is you go back to say, in practice,

45:58

in your context, right?

45:59

What

46:01

does it mean? How do you work? What

46:03

can you do differently around that? Actually that's

46:05

a much richer conversation and very often we would

46:07

find ourselves that there's a portion of the work we

46:09

do that we say I would rather do less of

46:11

that. This other part I like a lot

46:14

and if it is possible that technology

46:16

could help us make that trade-off I'll take it

46:18

in a heartbeat. Now poorly

46:21

implemented technology can also create another

46:23

problem you say hey this was supposed to solve me things

46:26

but the way it's being implemented is

46:28

not helping me right is making my life much

46:30

more miserable or so on or I've lost connection

46:33

in how I used to work etc. So

46:36

that is why design is

46:38

so important that is why also workflow

46:41

is so important in being able to solve these

46:43

problems but it begins

46:45

by you know going from the intergalactic

46:48

to the reality of it of that faculty

46:50

member in the liberal arts college or you know

46:52

or a you know a practitioner in medicine

46:55

in a hospital and what it means for them

46:57

right. Yeah what struck

46:59

me Dario throughout our conversation is how

47:02

much of this revolution

47:05

is non-technical. As I

47:07

say you guys are doing a technical thing here

47:10

but the real the revolution is going to require

47:12

a whole range of people doing things that

47:14

have nothing to do with software

47:17

that have to do with working out new new

47:19

human arrangements. Talking about that

47:21

I mean — I keep going back to the

47:24

Hollywood strike thing that you have to have

47:26

a conversation about our values

47:29

as creators of movies

47:33

how are we going to divide up the credit

47:36

and the like that's a conversation

47:38

about philosophy and you

47:41

know. It is and it's in the grand

47:43

tradition of why you know

47:47

a liberal education is so important

47:49

in the broadest possible sense right.

47:51

There's no common conception

47:54

of the good right. That is always a

47:56

contested dialogue that

47:58

happens within our society. And technology

48:00

is going to fit in that context too. So that's

48:02

why, personally, as a philosophy, I'm not a technological

48:05

determinist. And I don't like

48:07

when colleagues in my profession start

48:10

saying, well, this is the way the technology is going to

48:12

be, and by consequence, this is how society

48:14

is going to be. I'm like, that's a highly contested

48:17

goal. And if you want to enter into a realm

48:19

of politics or a realm of other ones, go

48:21

and stand up on a stool and discuss

48:24

whether that's what society wants, you will find that

48:26

there's a huge diversity of

48:28

opinions and perspective, and that's what makes

48:31

a democracy the richness of our society.

48:34

And in the end, that is going to be the centerpiece

48:36

of the conversation. What do we want? Who

48:40

gets what? And so

48:41

on. And that is, actually, I don't think it's

48:43

anything negative. That's as it should be,

48:45

because in the end, it's anchored in who we

48:47

want as humans, as friends,

48:49

families, citizens. And we have

48:51

many overlapping

48:52

sets of responsibilities. And as a technology

48:54

creator, my only responsibility is not

48:57

just as a scientist and a technology creator. I'm

48:59

also a member of family. I'm a citizen, and I'm

49:01

many other things that I care about.

49:03

And I think that that's sometimes in the debate

49:05

of the technological

49:07

determinists. They start

49:09

now butting into what

49:11

is the realm of justice

49:15

and society and philosophy and democracy.

49:18

And that's where they get the most uncomfortable, because

49:21

it's like, I'm just telling you what's

49:23

possible. And when there's pushback,

49:26

it's like, yeah, but now we're

49:28

talking about how we live

49:29

and how we work and

49:32

how much I get paid or not paid. So

49:34

that technology is important. Technology

49:37

shapes that conversation. But we're going to have

49:39

the conversation with a different language,

49:42

as it should be. And technologists

49:44

need to get accustomed to if they want to participate

49:46

in that world with the broad consequences, hey,

49:49

get accustomed to deal with the complexity of

49:51

that world of politics, society,

49:53

institutions, unions, all that stuff.

49:56

And you can be whiny about it. It's

49:58

like, they're not adopting my technology. But that's what

50:00

it takes to bring technology into the world. Yeah.

50:04

Well said.

50:05

Thank you Dario

50:07

for this wonderful conversation.

50:10

Thank you to all of you for

50:12

coming and listening. Thank

50:16

you. Thank

50:18

you. Dario Gil transformed how

50:21

I think about the future of AI.

50:23

He explained to me how huge of a leap

50:25

it was when we went from chess playing

50:27

models to large language models.

50:30

And he talked about how we still have a

50:32

lot of room to grow. That's why it's important

50:35

that we get things right. The future

50:38

of AI is impossible to predict,

50:40

but the technology has so much potential

50:43

in every industry. Zooming

50:45

into an academic or medical setting showed

50:47

just how close we are to the widespread

50:49

adoption of AI. Even

50:52

Hollywood is being forced to figure this

50:54

out. Humans of all sorts

50:56

will have to be at the forefront of integration

50:59

in order to unlock the full power of AI

51:02

thoughtfully and responsibly. Humans

51:05

have the power and the responsibility to

51:07

shape the tech for our world. I,

51:10

for one, am

51:11

excited to see how things play out.

51:14

Smart Talks with IBM is produced by

51:17

Matt Romano, Joey Fishground, David

51:19

Jha, and Jacob Goldstein. We're

51:22

edited by Lydia Jean Cott. Our engineers

51:24

are Jason Gimbrel, Sarah Bruguire,

51:27

and Ben Tolode. A theme song

51:30

by Gramiscope. Special thanks to

51:32

Andy Kelly, Kathy Callahan, and

51:34

the 8-Bar and IBM teams, as

51:37

well as the Pushkin marketing team. Smart

51:40

Talks with IBM is a production of Pushkin

51:42

Industries and Ruby Studio at

51:44

iHeartMedia. To find more Pushkin

51:46

podcasts, listen on the iHeart

51:49

Radio app, Apple Podcasts, or

51:51

wherever you listen to podcasts.

51:54

I'm Malcolm Gladwell. This

51:56

is a paid advertisement from IBM.

