AI & The Productivity Paradox

Released Tuesday, 25th June 2024

Episode Transcript


0:02

Welcome, Welcome, Welcome to Smart

0:05

Talks with IBM.

0:10

Hello, Hello, Welcome to Smart Talks

0:12

with IBM, a podcast from Pushkin

0:14

Industries, iHeartRadio and

0:16

IBM. I'm Malcolm Gladwell. This

0:19

season, we're diving back into the world

0:21

of artificial intelligence, but with a

0:23

focus on the powerful concept

0:25

of open: its possibilities,

0:27

implications, and misconceptions.

0:30

We'll look at openness from a variety of angles

0:33

and explore how the concept is already

0:35

reshaping industries, ways of

0:37

doing business and our very notion of

0:40

what's possible. And for the first episode

0:42

of this season, we're bringing you a special

0:44

conversation. I recently sat

0:46

down with Rob Thomas. Rob is

0:48

the senior vice president of Software

0:51

and chief Commercial Officer of IBM.

0:53

I spoke to him in front of a live audience

0:56

as part of New York Tech Week. We

0:58

discussed how businesses can harness

1:00

the immense productivity benefits of AI

1:03

while implementing it in a responsible

1:05

and ethical manner. We also

1:08

broke down a fascinating concept that

1:10

Rob believes applies to AI, known

1:12

as the productivity paradox. Okay,

1:16

let's get to the conversation. How

1:24

are we doing? Good?

1:26

Rob?

1:26

This is our second

1:29

time. We did one of these

1:31

in the middle of the pandemic. But now it's

1:33

all such a blur now that I can't figure out when it was.

1:35

I know, it's hard to say. Those were like blurry

1:37

years. You don't know what happened, right.

1:39

But well, it's good to see

1:41

you, to meet you again. I

1:44

wanted to start by going back. You've been at

1:46

IBM twenty years, is that

1:48

right?

1:48

Twenty five in July, believe it or not.

1:51

So you were a kid when you joined.

1:52

I was four.

1:53

Yeah, So

1:56

I want to contrast present

1:58

day Rob and twenty

2:00

five years ago. Rob.

2:03

When you arrive at IBM, what

2:05

do you think your job is going to be, where your

2:07

career is going? Where do you think the kind of problems

2:10

you're going to be addressing are?

2:13

Well, it's kind of surreal because I joined IBM

2:15

Consulting and I'm coming out

2:17

of school and you

2:20

quickly realize what the job of a consultant

2:22

is to tell other companies what to do. And

2:25

I was like, I literally know nothing, and

2:28

so you're immediately trying to figure out, so how am I going

2:30

to be relevant given that I know absolutely nothing to

2:33

advise other companies on what they should be doing. And

2:36

I remember it well, like we were sitting

2:38

in a room. When you're a consultant,

2:40

you're waiting for somebody else to find work for you. A

2:43

bunch of us sitting in a room, and somebody

2:45

walks in and says, we

2:48

need somebody that knows Visio.

2:49

Does anybody know Visio? I'd never

2:51

heard of Visio.

2:52

I don't know if anybody in the room has. So

2:55

everybody's like sitting around looking at their

2:57

shoes. So finally I was like, I

2:59

know it. So I raised my hand. They're

3:01

like, great, we got a project for you next week.

3:04

So I was like, all right, I have like three

3:06

days to figure out what Visio is, and

3:10

I hope I can actually figure out how to use it now.

3:12

Luckily, it wasn't like.

3:14

A programming language. I mean, it's pretty much

3:16

a drag and drop capability.

3:19

And so I literally left the office,

3:21

went to a bookstore, bought

3:23

the first three books on Visio I could find, spent

3:26

the whole weekend reading the books, and showed

3:29

up and got to work on the project.

3:31

And so it was a bit of a risky

3:33

move, but I

3:35

think that's kind of it.

3:38

Well, if you don't take risk, you'll never

3:40

achieve, and so to

3:43

some extent, everybody's making everything

3:45

up all the time. It's like, can you

3:47

learn faster than somebody else? That's

3:49

what the difference is in almost every

3:52

part of life. And so it

3:54

was not planned, it was an accident, but it

3:56

kind of forced me to figure out that you're gonna

3:58

have to figure things out.

4:00

You know, we're here to talk about AI.

4:02

And I'm curious about the evolution

4:04

of your

4:07

understanding or IBM's understanding of AI.

4:09

At what point in the last twenty five years

4:11

do you begin to think, oh, this is

4:14

really going to be at the core of what we think

4:16

about and work on at this company.

4:20

The computer scientist John

4:22

McCarthy, he's the person that's

4:24

credited with coining the phrase artificial

4:27

intelligence.

4:27

It's like in the fifties.

4:30

And he made an

4:32

interesting comment. He said, once it works,

4:34

it's no longer called AI, and

4:38

that then became what's called the AI

4:40

effect, which is it seems very

4:43

difficult, very mysterious, but once it becomes

4:45

commonplace, it's just no

4:47

longer what it is. And so if

4:50

you put that frame on it, I think We've

4:52

always been doing AI at some level, and I

4:54

even think back to when.

4:55

I joined IBM in ninety nine.

4:57

At that point there was work on rules

5:01

based engines, analytics.

5:04

All of this was happening.

5:05

So it all depends on

5:08

how you really define that term. You could

5:10

argue that elements of statistics,

5:14

probability, it's not exactly

5:16

AI, but it certainly feeds into it.

5:18

And so I feel like we've been working

5:21

on this topic of how do we deliver better

5:24

insights better automation

5:27

since IBM was formed. If you read about

5:29

what Thomas Watson Junior did, that was all

5:31

about automating tasks. Was that

5:34

AI? Well, probably not, certainly not by today's

5:36

definition, but it's

5:39

in the same zip code.

5:40

So from your perspective, it feels a lot more

5:42

like an evolution than a revolution. Is that a fair

5:44

statement?

5:45

Yes, which I think most

5:47

great things in technology tend

5:50

to happen that way. Many of the revolutions,

5:53

if you will, tend to fizzle out.

5:55

But even given that is there, I guess what I'm

5:57

asking is, I'm curious about whether there was a

6:00

a moment in that evolution when

6:03

you had to readjust your expectations about

6:05

what AI was

6:07

going to be capable of. I mean, was there, you

6:09

know, was there a particular

6:12

innovation or a particular problem

6:15

that was solved that made you think, oh,

6:17

this is different than what I thought.

6:22

I would say the moments that caught

6:24

our attention: certainly Kasparov

6:27

winning the chess tournament —

6:29

or Deep Blue beating Kasparov,

6:31

I should say. Nobody really thought

6:33

that was possible before that, and

6:36

then it was Watson

6:39

winning Jeopardy. These were moments that said,

6:41

maybe there's more here than we even thought was possible.

6:45

And so I do think there's points

6:48

in time where we realized

6:50

maybe way

6:52

more could.

6:53

Be done than we had even imagined.

6:56

But I do think it's consistent

6:59

progress every month and every year versus

7:02

some seminal moment.

7:04

Now.

7:04

Certainly large language models

7:06

as of recent have caught everybody's attention because

7:08

it has a direct consumer application.

7:11

But I would almost think of that as

7:15

what Netscape was for the

7:17

web browser. Yeah, it brought

7:19

the Internet to everybody, but that

7:22

didn't become the Internet per se.

7:25

Yeah.

7:25

I have a cousin who worked for IBM

7:28

for forty one years. I saw him this weekend.

7:30

He's in Toronto, by the way, I said,

7:32

do you work for Rob Thomas? He

7:35

went like this, He goes, he

7:39

said, I'm five layers down. But

7:43

so I always whenever I see my cousin, I ask him,

7:45

can you tell me again what you do? Because it's always changing,

7:47

right, I guess this is a function of working at IBM.

7:50

So eventually he just gives up and says,

7:53

you know, we're just solving problems. That's what we're doing, which

7:55

I sort of loved as a kind of frame,

7:58

And I was curious, what's what's the coolest

8:00

problem you ever worked on? Not biggest, not

8:02

most important, but

8:05

the coolest, the one that's like that

8:07

sort of makes you smile when you think back on it.

8:09

Probably when I was in microelectronics,

8:12

because it was a world

8:14

I had no exposure to. I hadn't studied

8:16

computer science, and

8:19

we were building a lot of high

8:22

performance semiconductor technology,

8:24

so just chips that do a really great

8:27

job of processing something or

8:29

other. And we

8:31

figured out that there was a market in consumer

8:34

gaming that was starting to happen, and

8:37

we got to the point where we became

8:39

the chip inside the Nintendo. We

8:43

the Microsoft

8:45

Xbox Sony PlayStation, so

8:47

we basically had the entire gaming market running

8:50

on IBM chips.

8:52

So every parent basically

8:55

is pointing at you and saying,

8:57

you're the... Probably.

9:00

Well, they would have found it from anybody. But it

9:03

was the first time I could explain

9:06

my job to my kids, who were quite young at that time,

9:09

like what I did, Like it was more

9:11

tangible for them than saying we solve

9:13

problems or, you know, build solutions, like

9:15

it became very tangible for them,

9:18

and I think that's, you know,

9:20

a rewarding part of the job is when you can help

9:23

your family actually understand what you do. Most people can't

9:25

do that. It's probably easier for you. They can, they can see the

9:27

books, but for

9:30

for some of us in the business world,

9:32

it's not always as obvious. So that was like one example

9:35

where the dots really connected.

9:38

There were a couple

9:40

there's a couple of things I wanted to ask about in the context

9:42

of AI, because I love

9:44

the frame of problem solving

9:47

as a way of understanding what the function

9:49

of the technology is. So I know that you

9:51

guys did something, did some work with

9:55

I never know how to pronounce it

9:57

is it Sevilla? Sevilla,

9:59

the football club in Seville, Spain. Tell

10:01

me about tell me a little

10:03

bit about that. What problem were they trying to

10:05

solve and why did they call you in?

10:07

Every

10:11

sports franchise is

10:14

trying to get an advantage, right, Let's just be that clear.

10:16

Everybody's how can I use data,

10:19

analytics, insights, anything

10:22

that will make us one percent better on

10:24

the field at

10:26

some point in the future. And

10:30

Seville reached out to us because

10:32

they had seen some of the work we've done

10:34

with the Toronto Raptors in the past and others,

10:37

and their thought

10:39

was maybe there's something we could do. They'd heard all about

10:43

generative AI, they heard about large language

10:45

models.

10:46

And the problem, back to.

10:47

Your point on solving

10:49

problems, was we want to do a way

10:51

better job of assessing

10:53

talent, because really

10:56

the lifeblood of a sports franchise

10:58

is can you continue to cultivate talent?

11:01

Can you find talent that others don't

11:03

find? Can you see something in somebody

11:05

that they don't see in themselves or maybe no other

11:08

team sees in them?

11:09

And we ended up building something

11:12

with them called Scout Advisor, which

11:14

is built on watsonx, which

11:17

basically just ingests tons

11:20

and tons of data, and we

11:23

like to think of it as finding, you know, the needle

11:25

in the haystack of you know, here's

11:28

three players that aren't being considered.

11:30

They're not on the top teams

11:32

today, and I think

11:35

working with them together we found some pretty good insights

11:37

that's helped them out.

11:38

What was interesting to me was we're

11:40

not just talking about quantitative

11:43

data. We're also talking about qualitative

11:45

data. But that's the puzzle

11:47

part of the thing that fascinates me. How does one

11:49

incorporate qualitative analysis into

11:51

that sort of... So are you just feeding

11:54

in scouting reports and things like

11:56

that.

11:58

I've got to think about how much I can actually

12:00

disclose.

12:03

But if you think about it, quantitative

12:06

is relatively easy.

12:08

Every team collects that, you

12:11

know, what's their forty yard

12:13

dash? They use that term, certainly not in Spain.

12:16

That's all quantitative. Qualitative is

12:19

what's happening off the field. It

12:22

could be diet, it could be habits, it

12:24

could be behavior. You

12:26

can imagine a range of things that would all

12:28

feed into an athlete's

12:31

performance and so relationships.

12:35

There's many different aspects, and.

12:37

So it's trying to figure out the

12:39

right blend of quantitative and qualitative

12:42

that gives you a unique insight.

12:44

How transparent is that kind of system? I

12:46

mean, is it telling you it's saying

12:49

pick this guy not this guy, But is it telling

12:51

you why it prefers this guy to this guy?

12:53

Is that?

12:54

I think for anything in the realm of AI, you

12:57

have to answer the why question, otherwise

12:59

you fall into the trap of the

13:03

you know, the proverbial black box, and

13:05

then wait, I made this decision, I'd never

13:07

understood why it didn't work out.

13:09

So you always have to answer why without

13:11

a doubt?

13:12

And how is "why" answered?

13:16

Sources of data, the reasoning

13:19

that went into it, and so it's

13:21

basically just tracing back the

13:23

chain of how you got to the answer. And

13:26

in the case of what we do in Watson X is

13:28

we have IBM models. We also

13:30

use some other open source models, So it

13:32

would be which model was used, what

13:35

was the data set that was fed into that model, How

13:37

is it making decisions?

13:38

How is it performing? Is

13:40

it robust?

13:42

Meaning is it reliable in terms of if you

13:44

feed it two of the same data set, do you get

13:46

the same answer. These are all the

13:48

you know, the technical aspects of understanding

13:50

the why.
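
(A reader's aside, not part of the conversation: the "answer the why" checklist Rob lists — which model was used, what data fed it, how it performs, whether the same input yields the same answer — maps naturally onto a provenance record. Below is a minimal sketch in Python; the field names and the robustness_check helper are illustrative assumptions, not the watsonx API.)

```python
from dataclasses import dataclass, field

@dataclass
class WhyRecord:
    """Illustrative provenance for one AI decision (hypothetical fields)."""
    model_name: str          # which model was used (IBM-built or open source)
    model_version: str       # which build of that model
    training_datasets: list  # what data sets were fed into the model
    inputs: dict             # the inputs behind this particular decision
    output: str              # the answer the system gave
    reasoning_trace: list = field(default_factory=list)  # chain back to the sources

def robustness_check(predict, features: dict) -> bool:
    """Crude reliability probe: feed the same data twice, expect the same answer."""
    return predict(features) == predict(features)
```

In a scouting context, a recommendation could then come back not just as "consider this player" but with a record of which model, which scouting data, and which reasoning steps led there.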

13:52

How quickly do you expect all

13:54

professional sports franchises to adopt

13:56

some kind of are they already there? If I went out

13:58

and polled the general managers of

14:01

the one hundred most valuable sports

14:03

franchises in the world, how many of them would be using

14:05

some kind of AI system to assist

14:08

in their efforts.

14:10

One hundred and twenty percent would, meaning

14:13

that everybody's doing it, and some think

14:15

they're doing way more than they probably actually are. So

14:18

everybody's doing it. I think what's weird

14:20

about sports is everybody's

14:23

so convinced that what they're doing is

14:25

unique that they

14:28

generally speaking, don't want to work with a third party

14:30

to do it because they're afraid that that

14:32

would expose them. But in reality,

14:35

I think most are doing eighty to ninety

14:37

percent of the same things.

14:39

So but without a doubt, everybody's doing

14:41

it. Yeah.

14:43

Yeah. The other

14:45

example that I loved was — there was one about a

14:48

shipping line, Tricon, on the Mississippi

14:50

River. Tell me a little bit about

14:53

that project. What problem were they trying to solve?

14:56

Think about the problem that I

14:59

would say everybody noticed, if you go back

15:01

to twenty twenty, was things

15:04

were getting held up in ports. There

15:06

was actually an article in the paper this morning kind of

15:08

tracing the history of what happened twenty

15:10

twenty twenty one and

15:12

why ships were basically sitting at sea

15:14

for months at a time. And

15:17

at that stage we just we had a massive

15:19

throughput issue. But moving

15:23

even beyond the pandemic, you can see it now

15:26

with ships getting through like

15:28

Panama Canal, there's like a narrow

15:31

window where you can get through, and if

15:33

you don't have your paperwork done,

15:36

you don't have the right approvals, you're not going through

15:38

and it may cost you a day or two and that's a lot of money.

15:41

In the shipping industry, and in the Tricon

15:43

example, it's really just about

15:46

when you're pulling into a port,

15:49

if you have the right paperwork done,

15:52

you can get goods off the ship very

15:54

quickly. They ship a

15:57

lot of food, which by definition,

16:00

since it's not packaged food, it's fresh food,

16:02

there is an expiration period and

16:05

so if it takes them an extra two

16:07

hours, certainly multiple

16:10

hours or a day, they have a massive

16:12

problem because then you're going to deal with spoilage and

16:15

so it's going to set you back. And what

16:17

we've worked with them on is using

16:20

an assistant that we've built in watsonx

16:22

called Orchestrate, which basically

16:25

is just AI doing digital

16:28

labor, so we can replicate

16:31

nearly any

16:32

repetitive task and

16:34

do that with software instead of humans.

16:37

So, as you may imagine, shipping

16:39

industry still has a lot of paperwork that

16:42

goes on, and so being able to

16:44

take forms that normally would be multiple

16:46

hours of filling it out, Oh this isn't right, send

16:48

it back. We've basically built

16:50

that as a digital skill inside

16:53

of watsonx Orchestrate, and

16:55

so now it's done in minutes.

16:58

Did they really — did they realize

17:00

that they could have that kind of efficiency

17:02

by teaming up with you? Or is that something you came to them

17:05

and said, guys,

17:08

we can do this way better than you think.

17:09

What's the.

17:11

I'd say, it's always, it's always

17:14

both sides coming together at a moment

17:16

that for some reason makes sense because

17:19

you could say, why didn't this happen like five years ago, like

17:22

seems so obvious. Well, technology wasn't

17:24

quite ready then, I would say,

17:27

But they knew they had a need because

17:29

I forget what the precise number is, but you

17:32

know, reduction of spoilage has massive

17:35

impact on their bottom line, and

17:38

so they knew they had a need, we

17:41

thought we could solve it, and the

17:43

two came together.

17:44

Did you guys go to them, then?

17:47

Or did they come to you?

17:48

I recall that this one was an inbound

17:51

meaning they had reached out to IBM

17:54

and said we'd like to solve this problem. I think

17:56

it went into one of our digital centers, if

17:58

I recall. So, literally, a

18:00

call? Yeah, but the other

18:02

the reverse is more

18:04

interesting to me because there seems to be a

18:07

very very large universe of people who have

18:09

problems that could be solved this way and

18:11

they don't realize it.

18:13

What's your...

18:15

Is there a shining example of this of

18:17

someone you just can't you just think could

18:19

benefit so much and isn't benefiting right

18:21

now?

18:24

Maybe I'll answer it slightly differently.

18:26

I'm I'm surprised by

18:29

how many people can benefit that you wouldn't even

18:32

logically think of.

18:33

First, let me give you an example.

18:35

There's a

18:38

franchiser of hair salons,

18:41

Sport Clips is the name. My

18:44

sons used to go there for haircuts because they have like TVs

18:46

and you can watch sports, so they loved

18:49

that. They got entertained while they would get their haircut. I

18:52

think the last place that you would think is using

18:54

AI today would be a franchiser

18:57

of hair salons. Yeah,

18:59

but just follow it through. The

19:02

biggest part of how they run

19:04

their business is can I get people to cut hair?

19:08

And this is a high-turnover industry because

19:10

there's a lot of different places you can work if you want to cut

19:12

hair. People actually get injured cutting hair

19:14

because you're on your feet all day, that type of thing. And

19:18

they're using the same technology, Orchestrate,

19:21

as part of their recruiting process.

19:24

How can they automate a lot of people submitting

19:26

resumes, who they speak

19:28

to, how they qualify

19:30

them for the position.

19:32

And so the reason I give that example

19:34

is the opportunity for AI, which

19:37

is unlike other technologies,

19:39

is truly unlimited. It

19:42

will touch every single business.

19:45

It's not the realm of the fortune five hundred

19:47

or the fortune one thousand. This

19:50

is the fortune any size.

19:52

And I think that may be one thing that people underestimate

19:55

about AI.

19:56

Yeah, what about I mean I was thinking

19:58

about education as a kind of — I

20:01

mean, education is a perennial whipping

20:06

boy — you guys are living

20:08

in the nineteenth century, right? I'm just curious

20:10

about if a superintendent

20:14

of a public school system or the president of the

20:16

university sat down and had lunch

20:18

with you and said, let's do

20:21

the university first. My costs are out of control,

20:24

my enrollment

20:26

is down, my students hate

20:28

me, and my board is revolting.

20:31

Help.

20:33

How would you think about

20:36

helping someone in that situation.

20:39

I spend some time with universities. I

20:41

like to go back to the

20:42

alma maters

20:44

where I went to school, and so

20:46

I do that every year. The challenge

20:49

I have with a university is there has to be

20:51

a will. Yeah, and I'm

20:53

not sure the incentives are quite right today because

20:58

bringing in new technology, say we want

21:00

to go after we can help you figure out student

21:02

recruiting or

21:05

how you automate more of your education,

21:09

everybody at the university suddenly feels threatened.

21:11

Hold on, that's my job.

21:13

I'm the one that decides that, or I'm

21:15

the one that wants to dictate the course. So

21:18

there has to be a will. So

21:20

I think it's very possible, and

21:23

I do think over the next decade you

21:25

will see some universities that jump all over

21:27

this and they will move ahead, and you see

21:30

others that do not.

21:31

Because it's very possible.

21:35

When you say there

21:37

has to be a will — is that

21:39

the kind of thing that

21:41

people at IBM think about? Like,

21:45

in this hypothetical conversation

21:47

you might have with the university president, would

21:49

you give advice on where

21:52

the will comes from?

21:55

I don't do that as much in a university context.

21:57

I do that every day in a business context, because

22:02

if you can find the right person in a business

22:04

that wants to focus on growth

22:07

or the bottom line or how do you create

22:09

more productivity. Yes, it's going to create

22:11

a lot of organizational resistance

22:14

potentially, but you can find somebody that will

22:16

figure out how to push that through. I

22:19

think for universities, I

22:21

think that's also possible. I'm not sure

22:23

there's a return on investment

22:26

for us to do that.

22:27

Yeah, yeah, yeah, God,

22:30

let's define some terms. "AI

22:34

years" — a term I'm told you

22:36

like to use. What does that mean?

22:39

We just started using this term literally

22:41

in the last three months, and

22:45

it was it was what we observed internally,

22:48

which is most technology

22:50

you build, you say, all right, what's going to happen in year

22:52

one, year two, year three, and

22:55

it's, you know, largely by

22:57

a calendar. AI years is the idea

23:00

that what used to be a year is

23:02

now like a week. And

23:04

that is how fast the technology is moving.

23:07

And to give you an example, we had one

23:09

client we're working with.

23:11

They're using one of our granite

23:13

models, and the results they were getting were not

23:15

very good. Accuracy was not there, their

23:18

performance was not there. So I

23:20

was like scratching my head. I was like, what is going on? They

23:23

were financial services, a

23:25

bank. So I'm scratching my head, like what is going

23:27

on? Everybody else is getting this and like these

23:30

results are horrible. And I

23:32

said to the team, which version of the model

23:35

are you using? This was in

23:37

February. They were like, we're using the one from October.

23:41

I was like, all right, now we know precisely the problem

23:44

because the model from October is

23:46

effectively useless now since we're here in February.

23:49

Seriously? Actually useless?

23:52

Completely useless.

23:53

Yeah, that is how fast this is

23:55

changing. And so the minute, same

23:58

use case, same day, you

24:00

give them the model from late

24:03

January instead of October,

24:05

the results are off the charts.

24:07

Yeah.

24:07

Wait, so what exactly happened between October

24:10

and January?

24:10

The model got way better?

24:12

Could you dig into that? Like, what do you mean by that?

24:14

We are constantly —

24:15

We have built large

24:17

compute infrastructure where we're doing model

24:19

training. And to be clear, model

24:22

training is the realm of probably in

24:25

the world my guess is five to ten companies.

24:28

And so.

24:30

You build a model, you're constantly training

24:33

it, you're doing fine tuning, you're

24:35

doing more training, you're adding data every

24:37

day, every hour it gets better. And

24:40

so how does it do that. You're feeding

24:42

it more data, you're feeding it more

24:45

live examples. We're

24:47

using things like synthetic data at this point,

24:49

which is we're basically creating data to do the training

24:52

as well. All of this feeds into

24:54

how useful the model is. And

24:56

so using the October

24:59

model, those were the results in October, just

25:01

a fact, that's how good it was then. But

25:04

back to the concept of AI years, two

25:07

weeks is a long time.

25:10

Is that are we in a steep

25:12

part of the model learning curve, or do you expect

25:14

this to continue along this at

25:16

this pace?

25:19

I think that is the big question and

25:23

I don't have an answer yet.

25:24

By definition, at some point you would think it would

25:26

have to slow down a bit, but it's not obvious

25:29

that that is on the horizon.

25:31

Still speeding up? Yes. How

25:33

fast can it get?

25:37

We've debated, can you actually have

25:39

better results in the afternoon than you did in the morning.

25:42

Really it's nuts.

25:44

Yeah, I know, but that's why

25:47

we came up with this term, because I think you also

25:49

have to think of, like, concepts that

25:53

get people's attention.

25:54

So you're basically turning into a bakery.

25:56

You're like the bread from yesterday.

25:59

You know you can have it for twenty five cents. But

26:02

I mean, you could do preferential pricing. You could

26:04

say, we'll charge you

26:06

x for yesterday's model, two

26:09

x for today's model.

26:12

I think that's dangerous as a

26:14

merchandising strategy, but I get your point.

26:17

Yeah, but that's crazy.

26:19

And is this, by the way — is the same

26:21

true for almost... You're talking specifically about

26:23

a model that was created to help

26:26

some aspect of financial services.

26:29

So is that kind of model accelerating

26:31

faster and learning faster than other models for other

26:34

kinds of problems?

26:35

So this domain was code,

26:38

Yeah, and so by

26:40

definition, if you're feeding in more data,

26:43

and more code, you get those kinds of results.

26:46

It does depend on the model type. There's

26:49

a lot of code in the world and so we

26:51

can find it, we can create it. Like I said,

26:55

there's other aspects where there's probably

26:57

less inputs available, which

26:59

means you probably won't get the same level of iteration.

27:02

But for code, that's certainly the cycle times that we're

27:04

seeing.

27:05

Yeah, and how do you know that — let's

27:07

stick with this one example of this model you have.

27:10

How do you know that your model is better

27:12

than big company

27:14

B down the street? A client

27:16

asks you, why would I go with IBM as opposed to

27:20

some firm in the Valley that says,

27:22

they have a model on this? What's your — how

27:24

do you frame your advantage?

27:28

Well, we benchmark all of this, and

27:31

I think the most important metric

27:33

is price performance, not

27:35

price, not performance, but the combination of

27:37

the two.

27:38

And we're super competitive there.
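
(Reader's aside: "price performance" is simply how much benchmark quality a model delivers per unit of cost. A toy sketch in Python, with invented numbers and a hypothetical scoring scheme — not IBM's benchmark suite:)

```python
# Toy price-performance comparison; scores and costs are made up for illustration.
models = {
    # name: (benchmark_score, dollars_per_million_tokens)
    "model_a": (62.0, 1.50),
    "model_b": (70.0, 4.00),
}

def price_performance(score: float, cost: float) -> float:
    """Higher is better: benchmark score delivered per dollar of inference cost."""
    return score / cost

best = max(models, key=lambda name: price_performance(*models[name]))
print(best)  # "model_a": lower raw score, but far better score-per-dollar
```

The point of the combined metric is exactly the example above: a cheaper model with a slightly lower raw score can still win once cost is in the denominator.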

27:41

Well, for what we just released, with

27:43

what we've done in open source, we know that nobody's

27:46

close to us right now on code.

27:47

Now.

27:48

To be clear, that will probably change because

27:50

it's like leapfrog.

27:51

People will jump ahead, then we jump back

27:53

ahead.

27:54

But we're very confident

27:56

that with everything we've done

27:59

in the last few months, we've taken a huge leap

28:01

forward here.

28:01

Yeah, it's I mean this

28:04

goes back to the point I was making in the beginning, so

28:06

about the difference between your twenty

28:09

something self in ninety nine and yourself

28:11

today. But this time compression

28:15

has to be a crazy adjustment. So

28:18

the concept of what you're working on and

28:20

how you make decisions internally and things has

28:23

to undergo this kind of revolution.

28:25

If you're switching from I mean back

28:27

in the day, a model might be useful for how

28:31

long?

28:31

Years. Years. Think about, you

28:34

know, statistical models that sit inside

28:36

things like SPSS,

28:38

which is a product that a lot of.

28:40

Students use around the world.

28:41

I mean, those have been the same models for twenty years

28:44

and they're still very good at what they do. And

28:46

so yes, it's a completely it's

28:49

a completely different moment

28:51

for how fast this is moving. And I think

28:54

it just raises the bar for everybody,

28:56

whether you're a technology

28:58

provider like us, or you're

29:01

a bank or an insurance company or a

29:03

shipping company, to say, how

29:05

do you really change your

29:07

culture to be way more aggressive

29:11

than you normally would be?

29:14

Does this mean — it's a weird question, but does

29:17

this mean a different set of kind

29:19

of personality or character traits

29:21

are necessary for a decision maker in

29:24

tech now than twenty five years ago.

29:29

There's a book I saw recently,

29:32

it's called The Geek Way, which talked

29:34

about how technology companies

29:36

have started to operate in

29:38

different ways, maybe than many

29:41

traditional companies, and

29:45

more about being data driven, more

29:48

about delegation. Are

29:51

you willing to have the

29:53

smartest person in the room make decisions as opposed

29:55

to the highest paid

29:56

person in the room.

29:57

I think these are all different aspects that

29:59

every company

30:00

is going to face.

30:01

Yeah, yeah, next

30:04

term, talk about open. When you

30:06

use that word open, what do you mean?

30:10

I think there's really only one definition of

30:12

open, which, for technology,

30:14

is open source. And

30:17

open source means the code

30:19

is freely available. Anybody

30:22

can see it, access it, contribute

30:26

to it.

30:26

And what is — tell me about why that's an important

30:28

principle.

30:32

When you take a topic like AI. I

30:35

think it would be really bad for the world

30:39

if this was in the hands of one or two companies,

30:43

or three or four, doesn't matter the number, some small

30:46

number. Think about like in

30:48

history, sometime in the early nineteen hundreds,

30:51

the Interstate Commerce Commission

30:53

was created, and the whole idea

30:55

was to protect farmers

30:57

from railroads, meaning they

31:00

wanted to allow free trade. But they

31:02

knew that well, there's only so many railroad tracks,

31:04

so we need to protect farmers from

31:06

the shipping costs that railroads could impose.

31:09

So good idea, but over time

31:12

that got completely overtaken by the railroad lobby

31:15

and then they use that to basically

31:17

just increase prices, and it

31:19

made the lives of farmers way more

31:21

difficult. I think you

31:23

could play the same analogy through with AI.

31:27

If you allow a handful of companies

31:29

to have the technology, you

31:31

regulate around the principles of those

31:33

one or two companies, then you've trapped the entire

31:35

world.

31:36

I think that would be very bad. So

31:39

the danger of that is apparent, for

31:42

sure.

31:42

I mean there's companies in

31:44

Washington every week trying to

31:47

achieve that outcome.

31:49

And so the

31:50

opposite of that is to say it's going to be

31:52

open source because

31:54

nobody could dispute open source because it's

31:57

right there, everybody can see it. So

32:00

I'm a strong believer that open source will win for

32:02

AI. It has to win. It's not

32:05

just important for business, but it's important

32:07

for humans.

32:10

On the I'm curious about

32:12

on the list of things you worry about, Actually,

32:16

let me before I ask, let me ask this question very

32:18

generally, what is the list of things you worry

32:20

about? What's your top five business

32:22

related worries right now?

32:25

Top five? That's quite a first question. We

32:27

could be here for hours for me to answer.

32:30

I did say business related. We could leave, you know, your

32:34

kids' haircuts out of it.

32:36

Number

32:38

one is always — it's the

32:40

thing that's probably always been true, which

32:42

is just people. Do

32:45

we have the right skills? Are we doing a good

32:48

job of training our people? Are

32:50

our people doing a good job of working with clients

32:53

like That's number one? Number

32:55

two is innovation? Are

32:59

we pushing the envelope enough? Are

33:02

we staying ahead? Number

33:05

three, which kind of feeds into

33:07

the innovation one, is risk taking. Are

33:09

we taking enough risk? Without

33:11

risk, there is no growth. And

33:13

I think the trap that every larger

33:15

company inevitably

33:18

falls into is conservatism.

33:21

Things are good enough, and

33:23

so it's are we pushing the envelope?

33:25

Are we taking enough risk to

33:27

really have an impact? I'd say those are probably

33:29

the top three that I spend time talking

33:32

about.

33:32

The last term to define: productivity paradox,

33:35

something I know you've thought a lot about. What does

33:37

that mean?

33:39

So I started thinking hard about this because all

33:41

I saw and read every day was

33:44

fear about AI, and

33:48

I studied economics, and

33:51

so I kind of went back to like basic

33:54

economics, and there's been like a macro investing

33:58

formula I guess I would say it's

34:00

been around forever that says growth

34:02

comes from

34:05

productivity growth plus

34:08

population growth plus

34:10

debt growth. So

34:13

if those three things are working, you'll

34:15

get GDP growth. And

34:17

so then you think about that and you say, well, debt

34:20

growth, we're probably not going

34:22

back to zero percent interest rates, so

34:25

to some extent there's going to be a ceiling on that.

34:28

And then you.

34:29

Look at population growth. There

34:31

are shockingly few countries

34:33

or places in the world that will see population growth

34:36

over the next thirty to fifty years. In

34:38

fact, most places are not even at

34:40

replacement rates. And

34:43

so I'm like, all right, so population growth is not going to be there.

34:46

So that would mean, if you just take

34:48

it to the extreme, the

34:50

only chance of continued

34:53

GDP growth is

34:55

productivity.

34:57

And the best way to

35:01

solve productivity is AI.

35:03

That's why I say it's a paradox.

35:05

On one hand, everybody's scared to

35:07

death it's going to

35:09

take over the world, take all of our

35:11

jobs, ruin us, But

35:14

in reality, maybe it's the other way, which is it's

35:16

the only thing that can save us.

35:18

Yeah, and if you believe

35:20

that economic equation, which I think has proven

35:23

quite true over hundreds of years, I

35:25

do think it's probably the only thing that can save us.
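
(Reader's aside: the macro rule of thumb Rob sketches can be written out explicitly. This is a simplified restatement of the identity he describes, not a formal model:)

```latex
\underbrace{\Delta \text{GDP}}_{\text{growth}} \;\approx\;
\underbrace{\Delta \text{productivity}}_{\text{AI's lever}} \;+\;
\underbrace{\Delta \text{population}}_{\text{near zero or negative}} \;+\;
\underbrace{\Delta \text{debt}}_{\text{capped by interest rates}}
```

With the last two terms flat or shrinking, productivity is the only term left to carry GDP growth — which is the paradox Rob draws out: the technology people fear may be the one the equation depends on.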

35:28

I actually looked at the numbers yesterday for a

35:30

totally random reason on population growth

35:33

in Europe. And here — this is

35:35

a special bonus question, see how smart you are.

35:37

Which country in Europe — continental

35:40

Europe — has the highest population growth?

35:43

It's small. Continental Europe —

35:48

probably one of the Nordics, I would guess.

35:50

Close. Luxembourg.

35:53

Okay, something's going on in Luxembourg. I

35:57

feel like, well, we all need to investigate.

36:00

They're at one point four nine, which back in the day, by

36:02

the way, would be relatively — that's

36:04

the best performing country. I mean, back in the

36:06

day, countries routinely had two

36:09

point something, you know, percent

36:11

growth in a given year. Last

36:14

question: you're writing a book now. We were

36:16

chatting about it backstage, and

36:18

I appreciate the paradox

36:20

of this book, which is, in a universe

36:23

where a model is better in the afternoon than it is in

36:25

the morning, how do you write a book that's, like,

36:27

printed on paper and expect

36:29

it to be useful?

36:34

This is the challenge. And I am

36:37

an incredible author of useless books.

36:39

I mean most of what I've spent time

36:41

on in the last decade of stuff that's completely

36:44

useless, like a year after it's written. And

36:47

so when we

36:49

were talking about it, I was like, I would like to do something around

36:51

AI that's timeless. Yeah,

36:54

that would be useful ten or twenty

36:56

years from now. But

36:58

then, to your point, how

37:01

is that even remotely possible if

37:04

the model is better in the afternoon than in the morning?

37:07

So that's the challenge in front of us.

37:09

But the book is around AI value creation, so

37:12

kind of links to this productivity paradox,

37:14

and how do you actually get

37:17

sustained value out

37:19

of AI, out

37:22

of automation, out of data

37:24

science? And so the biggest

37:26

challenge in front of us is can we make this relevant

37:30

past the day that it's published?

37:31

How are you setting out to do that?

37:35

I think you have to to some extent level

37:38

it up to bigger concepts, which

37:40

is kind of why I go to things like macroeconomics,

37:43

population geography

37:45

as opposed to going into the weeds

37:48

of the technology itself. If you write

37:50

about this is how you get better performance

37:52

out of a model we can

37:55

agree that will be completely useless

37:57

two years from now, but maybe even two months

37:59

from now, and so it will

38:01

be less in the technical

38:03

detail and more of what

38:06

is sustained value creation for AI, which

38:09

if you think on what is hopefully a

38:11

ten or twenty year period, it's probably

38:14

we're kind of substituting AI for technology

38:16

now, I've realized, because I think this has always

38:18

been true for technology. It's just now

38:21

AI is what everybody wants to talk about.

38:25

But let's see if we can do it. Time will

38:27

tell.

38:28

Did you get any inkling that the pace

38:30

that this AI year's phenomenon

38:32

was gonna — that the pace

38:34

of change was going to accelerate so much? Because

38:37

you had Moore's law, right? You had a model in

38:40

the technology world for this

38:42

kind of exponential increase. So

38:45

were you were you

38:47

thinking about that kind of — a

38:50

similar kind of acceleration in

38:52

the.

38:55

I think anybody that said they expected

38:57

what we're seeing today is probably exaggerating.

39:01

I think it's way faster than

39:03

anybody expected.

39:05

Yeah, but technologies,

39:08

back to your point about Moore's law, has always

39:10

accelerated through the years,

39:12

So I wouldn't say it's a shock, but

39:15

it is surprising.

39:16

Yeah, You've had a kind of extraordinary

39:20

privileged position to watch

39:23

and participate in this revolution, right, I

39:25

mean, how many other people have been in that

39:27

have ridden this

39:30

wave like you have?

39:32

I do wonder is this really

39:34

that much different or does it feel different just

39:36

because we're here? I mean

39:39

I do think on one level, yes, So

39:41

in the time I've been at IBM, the Internet happened,

39:45

mobile happened, social

39:48

network happened, blockchain

39:50

happened.

39:51

AI, So a lot has happened.

39:53

But then you go back and say, well, but if I'd been here between

39:57

nineteen seventy and ninety five, there

40:00

were a lot of things that are pretty fundamental

40:03

then too. So I wonder, almost,

40:04

do we do we always exaggerate

40:06

the timeframe that we're in? I

40:10

don't know.

40:11

Yeah, but it's

40:13

a good idea though.

40:16

I think ending with the phrase

40:18

"I don't know, but it's a good idea

40:20

though" is a great

40:23

way to wrap this up.

40:24

Thank you so much, Thank you, Malcolm.

40:29

In a field that is evolving as quickly as artificial

40:32

intelligence, it was inspiring

40:34

to see how adaptable Rob has been over

40:36

his career. The takeaways from

40:38

my conversation with Rob have been echoing

40:41

in my head ever since. He

40:43

emphasized how open source models

40:45

allow AI technology to be developed

40:47

by many players. Openness

40:50

also allows for transparency.

40:52

Rob told me about AI use cases

40:55

like IBM's collaboration with

40:57

Sevilla's football club. That example

41:00

really brought home for me how AI

41:02

technology will touch every industry.

41:05

Despite the potential benefits of AI,

41:08

challenges exist in its widespread

41:10

adoption. Rob discussed how

41:12

resistance to change, concerns

41:15

about job security and organizational

41:17

inertia can slow down implementation

41:20

of AI solutions. The

41:22

paradox, though, according to Rob, is

41:24

that rather than being afraid of a world with

41:27

AI, people should actually be more

41:29

afraid of a world without it. AI,

41:32

he believes, has the potential to make

41:34

the world a better place in a

41:36

way that no other technology can. Rob

41:39

painted an optimistic version of

41:41

the future, one in which AI technology

41:44

will continue to improve at

41:46

an exponential rate. This

41:48

will free up workers to dedicate their

41:50

energy to more creative tasks.

41:53

I, for one, am on board.

41:57

Smart Talks with IBM is produced by Matt

41:59

Romano, Joey Fishground, and

42:01

Jacob Goldstein. We're edited

42:03

by Lydia Jean Kott. Our engineers

42:06

are Sarah Bruguer and Ben Tolliday.

42:08

Theme song by Gramscow. Special

42:11

thanks to the 8 Bar and IBM teams,

42:13

as well as the Pushkin marketing team.

42:15

Smart Talks with IBM is a production

42:18

of Pushkin Industries and Ruby Studio

42:20

at iHeartMedia. To find more

42:23

Pushkin podcasts, listen on the

42:25

iHeartRadio app, Apple Podcasts,

42:28

or wherever you listen to podcasts.

42:31

I'm Malcolm Gladwell. This is a paid

42:33

advertisement from IBM. The

42:35

conversations on this podcast don't

42:38

necessarily represent IBM's

42:40

positions, strategies, or

42:42

opinions.
