Episode Transcript
0:00
LinkedIn presents. I'm
0:06
Rufus Griscom and this is The Next Big
0:08
Idea. Today,
0:10
Bill Gates on AI, the path to
0:13
superintelligence and what it means for
0:15
all of us. I
0:31
suspect that every moment in
0:34
human history has felt pivotal,
0:36
precarious, as if anything could
0:38
happen. But it also must be
0:40
true that some moments are more pivotal
0:43
than others. This is one
0:45
of those moments. We've
0:47
seen the impact of transformative technological
0:49
change. The internet has sped the
0:52
world up and social media, now
0:54
on most every phone in most
0:56
every hand, has polarized our communities,
0:58
hyperbolized our politics, and
1:00
now we are in the early
1:02
moments of the AI revolution. What
1:05
will the next decade bring? There
1:08
are few people I would rather
1:10
ask this question than Microsoft co-founder
1:12
and global philanthropist Bill Gates. Bill's
1:15
been at the forefront of the race to build
1:17
machines that can empower humans for 50 years, ever
1:21
since he declared his mission to put a
1:23
computer on every desk in every home. He
1:26
was instrumental in driving the development of personal computing in
1:28
the 80s, the growth of
1:30
the internet in the 90s, and more
1:32
recently leading the charge to eradicate malaria
1:34
and other diseases. In
1:37
the last few years, he's been on
1:39
the front lines of Microsoft's partnership with
1:41
OpenAI and the development of GPT. How
1:44
is it, you may be wondering, that Bill Gates
1:47
has ended up joining us today? Well,
1:49
for the last few months, I've
1:51
been reading a book that's being
1:53
published serially by Harvard Business Review.
1:56
It's called AI First, and it
1:58
features interviews with folks like Reid
2:00
Hoffman, Mustafa Suleyman, Sam Altman,
2:02
and Bill, who collectively make the
2:04
case that AI isn't overhyped, it's
2:06
underhyped. We thought it would
2:08
be interesting to not just interview the
2:10
co-authors of this book, career technologist Andy
2:13
Sack, an old friend of mine, and
2:15
former Starbucks chief digital officer Adam Brotman,
2:18
so they suggested inviting one of their
2:20
most interesting interviewees, Bill Gates. And
2:22
so what's Bill's take on the AI
2:25
revolution? Superintelligence is coming.
2:27
There's no clear way to slow it
2:29
down. And the technology available
2:31
today is already a game changer.
2:34
This is largely a good thing.
2:36
We can harness AI to solve
2:38
our biggest global problems. We
2:40
are likely to live in decades to
2:42
come in a world of superabundance, but
2:45
it will take vigilance to make sure
2:47
it's the world we want for ourselves
2:50
and generations to come. By the
2:52
way, the format of today's show is a little
2:54
different from what you're used to. First, we'll hear
2:56
a conversation I had with Andy and Adam, co-authors
2:59
of AI First, about how they came
3:01
to write this book. Then
3:03
we'll bring on Bill for a
3:05
wide-ranging conversation about artificial intelligence and
3:08
our collective future. The
3:19
LinkedIn Podcast Network is sponsored by
3:21
Oracle. AI may be the most
3:23
important new computer technology ever. Do
3:25
more and spend less like some
3:27
of the world's most successful companies.
3:29
Take a free test drive of
3:31
OCI at oracle.com/LinkedIn Podcast Network. Welcome
3:42
Andy and Adam to The Next Big
3:44
Idea. Thanks for having us. Glad
3:47
to be here. Happy to be here. Andy,
3:49
you're a serial entrepreneur. You've built
3:52
and invested in countless startups. You
3:54
advised Microsoft CEO Satya Nadella. You're
3:57
the founder and managing director of Keen Capital, a
3:59
blockchain fund, and you
4:01
have the rare distinction of being an old friend of
4:03
mine. And you, Adam,
4:05
are no slouch. You were the first chief digital
4:08
officer at Starbucks where you led the development of
4:10
their app and payment platform. Quite a good app,
4:12
by the way. Thank you. You
4:14
were co-CEO of J.Crew. And
4:18
now the two of you have joined forces
4:20
to start a new company, Forum 3, to
4:23
help companies take advantage of the power
4:25
of AI. Does the
4:27
world really need another consulting
4:29
firm? No.
4:36
But you wouldn't define Forum 3 as a consulting firm.
4:38
What are you guys setting out to do? It's
4:40
a great question in the sense that we
4:43
provide software, we're building software,
4:45
we provide services, consulting services,
4:47
and other services. And
4:49
we're writing a book, which we're going
4:52
to talk to you about called AI First that's
4:54
being published by Harvard Business Review. But
4:56
they're all related to the topic of
4:58
taking advantage of AI to transform
5:00
your business, transform
5:03
your marketing efforts in building your brand.
5:05
And so we've, we've actually taken to
5:07
describing Forum 3 as an AI
5:09
lab because we can't come up with
5:11
a better, more descriptive term,
5:13
but, but it's actually an appropriate
5:15
term and kind of gives you
5:17
a sense of how Andy and I think about
5:19
the space. We're not taking
5:22
a traditional approach to building
5:24
the Forum 3 company around AI. And
5:26
I think that's related to that, how
5:28
non-traditional this new technology is. So
5:31
you've written this book, you're publishing it
5:33
serially, which is very interesting. It's
5:36
called AI First. Why
5:38
AI First? It's
5:40
worth noting that our original title,
5:44
the title we used when
5:46
we wrote the proposal for Harvard
5:48
Business Review, was Our
5:51
AI Journey. And
5:53
Harvard Business Review approached us a bit
5:55
over a year ago, and at the time we
6:00
had just pivoted to become a
6:03
generative AI company at Forum
6:05
3. Both
6:07
Adam and I, our company Forum 3, were
6:10
on a collective journey to explore
6:12
what was this generative AI, which felt
6:15
like a very significant
6:17
technological development. Having
6:20
been a career
6:22
technologist, I really started my first Internet company in
6:24
1995. A
6:27
bit over a year ago, I was like, this is
6:29
a big freaking deal. Little did I know just how
6:31
big of a freaking deal it was. The
6:34
title, Our AI Journey, started that
6:36
way. We started with a bunch
6:38
of interviews with thought leaders, one
6:41
of which we're going to get
6:43
to talk with today with Bill Gates,
6:46
but we also spoke with Sam
6:48
Altman and Reid Hoffman and Mustafa
6:50
Suleyman.
6:52
It's really been about Adam and
6:54
I educating ourselves about what is
6:56
this technology, what does it mean
6:59
for business leaders, what does
7:01
it mean for society, how does it
7:03
change the rules of the game? At
7:07
one point, I argued with Harvard Business Review,
7:09
I wanted to call the book The
7:11
Holy Shit Moment. That title
7:13
was not approved understandably,
7:16
but I think it is
7:18
a holy shit moment, certainly
7:20
for business, certainly for
7:22
technology, and it's
7:24
a really groundbreaking technology, and
7:27
we're mostly excited
7:29
about the possibilities and opportunities
7:31
it brings. Really, when we talked
7:34
about what title to name it,
7:36
AI First, it was something that
7:38
we arrived at because as we
7:40
went along, we realized that it
7:42
was a total shift in mindset
7:44
that was required for
7:46
myself, for Adam, about how we think
7:48
about our specific little business, but also
7:50
how we approach business. When you think
7:53
about it from the individual to the
7:55
organization, you need a shift in mindset
7:57
and thus the name AI First. Well,
8:00
I had a few holy shit moments reading
8:02
the first four chapters of your book, which
8:04
I think is what's been published so far.
8:07
You're publishing it serially, which I wouldn't be
8:09
surprised if we see more of that kind
8:11
of approach to book writing in the future.
8:14
One holy shit moment for me was when
8:16
Sam Altman told you that he thought we'd
8:18
have AGI, which of course is artificial general
8:20
intelligence, defined as machine
8:22
intelligence that matches or exceeds
8:24
human intelligence, within five years.
8:28
Within five years, I think most people would put it out
8:30
further if they think it's going to happen. You
8:33
asked Sam what AGI would mean
8:35
for business, for example, for marketing
8:38
teams. And he
8:40
said, it will mean that 95%
8:42
of what marketers use agencies, strategists,
8:44
and creative professionals for today will
8:47
nearly instantly and at almost no cost
8:49
be handled by the AI. And
8:52
the AI will likely be able
8:54
to test the creative against real
8:57
or synthetic customer focus groups. Again,
8:59
all free, instant and nearly perfect
9:01
images, videos, campaign ideas, no problem.
9:04
That's pretty astonishing. Do you guys buy it? Do
9:06
you think that this might be five years out?
9:09
Yeah. I mean, it's worth remarking
9:11
that when he said that to us,
9:14
we stepped outside the office and
9:16
didn't talk, which is rare
9:18
for Andy and I, didn't talk for like a
9:21
couple minutes. We just sat there like looking at
9:23
the San Francisco scenery and taking it in because
9:25
it was both how fast this was moving and
9:27
what it really meant. And then we
9:29
got into the book and we talked to Reid Hoffman
9:32
next. We talked to Bill, we talked to Mustafa Suleyman.
9:34
These are the top, top people in the field. And
9:38
they started reinforcing and validating what Sam
9:40
was saying and giving us more details
9:43
about it. So, yeah, I
9:45
say, well, we were holy
9:48
shit, quiet, stunned,
9:50
had to step aside after that Sam meeting.
9:53
Now we're more like ringing the alarm bell saying, yeah,
9:55
I mean, I don't know if it's five years, what
9:57
your definition is, but this thing is coming fast, and
10:00
the genie's out of the bottle for good and for
10:02
bad. So you've interviewed Bill
10:05
Gates, Mustafa Suleyman,
10:09
Reid Hoffman, you mentioned. What
10:11
surprises have you encountered along the
10:13
way? The biggest surprise
10:15
for me, I would say, is I
10:19
don't think that people have an awareness
10:22
of just how fundamental and significant
10:25
of a technology shift this is
10:27
and how fast it's coming. And
10:30
it's now. I learned, as
10:33
I talked about, it's such a significant moment
10:35
and how significantly it's gonna change the
10:38
rules of business, the game of
10:41
business, what's defensible, how to approach
10:43
strategy. Like it's, you
10:45
need to start to wrap your
10:48
mind around what it
10:50
means, because it's happening today. Certainly,
10:53
many of us have a certain amount of
10:55
concern and fear when it
10:58
comes to thinking about this
11:00
pace of tech acceleration and
11:03
moving beyond the AGI inflection point, and we'll talk
11:05
about that with Bill. But I'm
11:09
experiencing equal parts, adrenaline rush
11:11
and concern. On the adrenaline rush
11:14
side, what I remember from the
11:17
mid 90s, which was really just
11:19
the early days of the dawn
11:22
of the internet, I remember seeing the first
11:24
Mosaic browser. I think the three of us
11:26
were all just out of college, right? And
11:28
at that time, the
11:31
decision to get in early and
11:34
try to figure out this new technology and
11:36
try to think in advance about how
11:39
it would play out. I think that
11:41
was a decision that really benefited all three of
11:43
us. When I think
11:45
back on the inflection point
11:47
of the advent of the smartphone, I
11:50
was not thinking enough about that. We
11:54
could have sat in a room and said, you know what?
11:56
You got a mobile device that's a powerful computer with a
11:58
GPS unit in it, we can create Uber. Like
12:00
I did not have that sequence of thoughts. But
12:04
this feels like another such moment. I mean,
12:06
I have like pattern recognition is just exploding
12:08
with like, this is, we've seen
12:10
this movie before and
12:13
we should all be paying really
12:15
intense attention to what's happening. What's
12:18
wild about this one, we're all kind
12:20
of applying the same pattern recognition. However,
12:23
this one is different. It's
12:25
more powerful, but it's also more dangerous
12:27
and more confusing,
12:29
right? It's like intelligence as
12:32
a service, production level intelligence.
12:34
And so on the
12:36
one hand, I'm like you, and I think Rufus, you
12:38
and I, and Andy have talked about this in the
12:40
past, and this isn't new, but like, we're applying
12:45
our pattern recognition and there's like
12:47
this feeling of excitement.
12:50
And okay, we see this, let's get on it. But
12:52
there is a feeling of apprehension as well
12:55
about what it means to- Oh, for sure. Misinformation
12:58
and jobs and maybe even worse
13:00
that goes with it. And that
13:03
wasn't the same feeling we had
13:05
with the other seminal
13:08
moments. That's true. That's
13:10
a key difference here. And I think it's
13:12
good that we're acknowledging that. Yeah,
13:14
there's a question of, I mean, back
13:16
in those prior revolutions, I think I felt
13:19
nothing but let's hit the accelerator and I
13:21
find myself thinking now, let's hit the brakes.
13:23
And there's a separate question that Bill is uniquely suited
13:25
to answer, which is even
13:28
if we thought it made sense to apply a braking
13:31
mechanism to this process, is there any effective
13:33
way to do that given the global nature
13:35
of this process and given that we're not
13:37
all a bunch of friends, all
13:39
the entities building these technologies? So
13:42
I think that'll be an interesting thing to get Bill's take
13:44
on. You couldn't ask a better person
13:46
a more like perfect question for him to answer.
13:49
So like, I'm excited to hear what he says.
13:52
Coming up after the break, we'll hear from
13:55
Bill and what he has to
13:57
say may surprise you. This
13:59
technology in terms
14:01
of its capability, will
14:03
reach superhuman levels. We'll be right back. If
14:30
you're interested in the story behind the
14:32
business headlines, check out Big Technology Podcast,
14:34
my weekly show that features in-depth interviews
14:36
with CEOs, researchers, and reformers in business
14:39
and technology. Hi,
14:59
I'm Alex Kantrowitz. I'm a longtime
15:01
journalist, CNBC contributor, and the host
15:03
of the show. I
15:06
empty my Rolodex every Wednesday to bring
15:08
you awesome episodes, so go check out
15:10
Big Technology Podcast. It's available in all
15:12
podcast apps. We'd love to have you
15:14
as a listener. Bill,
15:22
Andy says you win about as frequently as he wins
15:24
on the pickleball court. Does that sound right to you?
15:27
Pretty equal, yeah. Hey,
15:30
Bill. Hi. Bill
15:33
Gates, welcome to The Next Big Idea. Thank
15:35
you. Bill,
15:37
Andy and Adam and I were just talking about
15:39
the digital transformations we've seen in our own lives
15:41
in the last 40 years. And
15:44
you haven't just seen these transformations. You've played
15:47
an instrumental role in moving them forward. You've
15:50
said that the demo you saw
15:52
last September of GPT-4 was
15:55
mind-blowing. Was it
15:57
more mind-blowing than the first demo of the
15:59
graphical user interface that you saw at
16:01
Xerox PARC in 1980? I'd
16:04
say yes. I mean, I'd
16:07
seen graphical interface prior
16:09
to the Xerox PARC stuff, and that
16:11
was an embodiment that helped
16:14
motivate a lot of what Apple
16:17
and Microsoft did with
16:20
personal computing in the decade
16:23
after that. But compared
16:26
to unlocking
16:28
a new type
16:30
of intelligence that can read
16:32
and write, graphics interface
16:34
is clearly less impactful,
16:38
which is saying a lot. Well,
16:40
I was interested to learn that AI is not
16:43
a new interest of yours. You
16:45
were intrigued as a student way back in the
16:47
70s, and I gather
16:49
you wrote, I think, a letter to
16:51
your parents and said effectively, mom, dad,
16:53
I may miss out on the AI
16:56
revolution if I start this company, which
16:58
is the company that became Microsoft. The
17:01
AI revolution took a little longer than maybe you
17:03
might have guessed back then. Now it's
17:05
happening. What interested you about
17:07
AI in those early days,
17:09
and is it becoming what you'd
17:11
imagined back then? Well,
17:14
certainly anybody who writes software is
17:17
thinking about what human
17:19
cognition is able to achieve
17:22
and making that comparison. And
17:25
when I was in high school, there
17:28
were things like Shaky the Robot at
17:30
Stanford Research Institute, which would engage
17:33
in reasoning and come up with an execution
17:35
plan and figure out
17:37
how to move the ramp and go up the ramp and
17:39
grab the blocks. And
17:42
it felt like some of these key capabilities,
17:47
whether it was speech recognition, image
17:49
recognition, would be
17:51
fairly solvable. There were a
17:53
lot of attempts with so-called rule-based systems
17:55
and things that just didn't capture
17:58
the richness. And so
18:00
our respect for human cognition constantly
18:03
goes up as we try
18:05
to match pieces of it. But
18:07
we saw with machine learning techniques,
18:10
we could match vision
18:13
and speech recognition, so
18:15
that's powerful. But
18:17
the holy grail that even
18:20
after those advances I kept highlighting
18:22
was the ability to
18:25
read and represent knowledge like
18:27
humans did was just, you
18:30
know, nothing was good at all.
18:33
Then language translation came along,
18:35
but still that was a
18:37
very special case thing.
18:42
But GPT-4 in
18:44
a very deep way, far beyond
18:46
GPT-3, you know, showed that
18:49
we could access and represent
18:51
knowledge. And it's, you know, the
18:53
fluency in many
18:55
respects, although not the accuracy,
18:57
is already superhuman. Yeah,
19:00
it's just astounding. We never would have guessed
19:03
that moving the chess pieces on the chessboard
19:05
would be harder than becoming a better chess
19:07
player than Kasparov. But
19:10
it's interesting to see how what
19:12
the challenges turn out to be. And
19:15
as you said, that Xerox PARC demo
19:17
set the agenda for Microsoft for maybe
19:19
the next 15 years, right? Development of
19:21
Windows and Office. And
19:23
do you think that the impact of what's
19:25
happening right now in AI is going to
19:27
set the agenda for the
19:30
next many decades and even more
19:32
so? It's
19:34
absolutely the most important thing going on. It'll
19:37
shape humanity in a very dramatic
19:39
way. It's at the
19:41
same time that we have, you know, synthetic
19:43
biology and robotics being controlled
19:45
by the AIs. So
19:48
we have to keep in mind those
19:51
other things. But the dominant change
19:53
agent will be AI. In
19:56
1980, you had a light bulb moment when
19:58
you famously declared, there will be a
20:00
computer in every home, on every desk.
20:03
What do you think the equivalent is for AI? Do
20:06
you think we'll have an AI advisor in
20:08
every ear? Well, the form
20:10
factor, the hardware form factor doesn't
20:12
matter that much. But the idea
20:15
of the earbud that's both
20:18
adding audio and canceling
20:20
out audio and enhancing
20:22
audio clearly will be a
20:24
very primary form factor just
20:26
like glasses that
20:29
can project arbitrary video into
20:32
your visual fields will
20:35
be the embodiment
20:38
of how you're interacting.
20:41
But the personal agent that I've
20:43
been writing about for decades, that's
20:46
superior to a human assistant
20:48
in that it's tracking
20:50
and reading all the things that you wanted
20:53
to read, and just there to help
20:55
you, and
20:57
understands the context enough that silly
21:00
things like, you don't trust software today
21:02
to even order your
21:05
email messages. It's in a stupid
21:07
dumb time-ordered form because
21:10
the contextual understanding of, okay,
21:13
what am I about to do next?
21:15
What's the nature of the task that
21:18
these messages relate to? You
21:21
don't trust software to combine all
21:23
of the new information,
21:25
including new communications. You
21:27
go to your mail
21:30
and that's time-ordered, you go to your
21:32
texts and that's time-ordered, you go to
21:34
your social network and that's time-ordered. I
21:36
mean, computers are operating at a
21:39
almost trivial level of semantics
21:41
in terms of understanding what's
21:43
your intent when you sit down with the
21:46
machine or helping you with
21:48
your activities. Now
21:50
that they can essentially
21:52
read like a
21:55
white-collar worker, that
21:58
interface will be entirely agent
22:00
driven, you know, agent executive
22:02
assistant, agent mental therapy, agent
22:05
friend, agent girlfriend, agent expert,
22:08
all driven by deep AI.
22:12
It seems like it will be useful in
22:14
proportion to how much it knows about us.
22:17
And I imagine at some point in the not too
22:19
distant future, probably all four of us will be asked
22:22
if we wanna turn on audio so our
22:25
AI assistant can effectively like listen to
22:27
our whole life. And I would
22:29
think that there'll be benefits to do that because
22:32
we'll get good counsel, good advice.
22:36
Do you think that's true? And do you think, will you turn
22:38
it on when invited
22:40
to turn on the audio?
22:42
Well, computers today see every
22:44
email message that I write and
22:48
certainly digital channels are seeing all
22:51
my online meetings and
22:55
phone calls. So you're
22:57
already disclosing into digital
22:59
systems a lot about
23:02
yourself. And so
23:04
yes, the value added of the
23:06
agent in terms of summarize
23:09
that meeting or help me with
23:11
those follow-ups, you know, will be
23:13
phenomenal. And the agent
23:15
will have different modes in terms
23:17
of which of your information it's
23:21
able to operate with. So there will
23:23
be partitions that you
23:25
have, but for your essentially
23:28
executive assistant agent, you
23:30
won't exclude much at all from
23:32
that partition. Rufus, before we
23:34
go further down the agent
23:37
pathway, one question that I've
23:39
been thinking about since our interview
23:41
with you, Bill, for AI First, in
23:44
which you talked about really
23:46
comparing your experience
23:48
at Xerox PARC versus your experience
23:51
of ChatGPT-4,
23:55
I think you're in the most
23:57
unique position. There are probably a couple of other
23:59
people that I could think of. But
24:02
you're in the most unique position to
24:04
have the set of understanding of
24:06
computer technology as well as
24:09
building business and how computers
24:11
affect human beings. I'm
24:14
curious, if what you said in this
24:16
conversation, which was that ChatGPT was as
24:18
big, it sounded like you even said
24:20
it was bigger than your Xerox
24:22
PARC moment, what does
24:25
that make you think about when you
24:27
think about your grandchild's life and what
24:30
advice do you have for the next
24:33
generation of leaders for
24:35
tackling the challenges that are
24:37
unique to AI? I'm curious
24:40
about that perspective. There's
24:42
certainly novel problems in
24:44
that other technologies
24:48
develop slower and
24:50
the upper bound of their
24:52
capabilities is pretty
24:54
identifiable. This technology, in
24:57
terms of its capability, will
25:01
reach superhuman levels. We're
25:03
not there today if you put in
25:05
the reliability constraint. A lot of
25:07
the new work is
25:11
adding a level of metacognition
25:15
that done properly will solve
25:18
the erratic nature
25:20
of the genius
25:22
that is
25:25
easily available today
25:27
in the white-collar realm and over time
25:30
in the blue-collar realm as well. So
25:33
yes, this is a huge milestone
25:36
that some of those past
25:38
things are helpful to, but it's
25:41
novel enough that nobody's faced
25:45
the policy issues, which
25:47
are mostly of a very positive nature in
25:49
terms of white-collar
25:52
labor productivity. What's
25:55
the thing that excites you the most about
25:57
the invention today?
26:00
Shortages; there's no
26:03
organization that faces white-collar shortages as
26:06
much as the Gates Foundation where
26:08
we look at health
26:10
in sub-Saharan Africa
26:13
or other developing countries or
26:15
lack of teachers who
26:17
can engage you in a deep way, ideally
26:19
in your native language.
26:22
And so the idea that by using
26:25
the mobile phone infrastructure
26:27
that continues to
26:30
drive pretty significant penetration even
26:32
in very poor countries, the
26:35
idea that medical advice and
26:37
personal tutors can
26:40
be delivered where, you
26:43
know, because it's meeting you in
26:45
your language and your semantics, there
26:48
isn't like some big training thing that's
26:51
taking place there. You just pick up your
26:53
phone and listen to what it's saying. So
26:57
it's very exciting to take
27:00
the tragic lack of
27:02
resources that particularly
27:05
people in developing countries have to deal
27:08
with. You've been working for
27:10
over 20 years on
27:12
the Gates Foundation and really
27:14
tackling these issues of global
27:16
healthcare, education, climate change. Do
27:19
you think that AI will be an
27:21
accelerant that will make it possible to
27:24
accomplish in five or 10
27:26
years what it took the last 20
27:28
years to accomplish or how meaningful do
27:30
you think the acceleration is likely to
27:33
be in these areas?
27:36
Well, the very tough problems of
27:40
some, you know, diseases that we don't
27:42
have great tools for, AI will
27:44
help a lot. The last
27:47
20 years, you know, was pretty
27:49
miraculous in that we cut childhood
27:51
death in half from 10 million
27:53
a year to 5 million a year. That
27:56
was largely by getting
27:59
tools like certain
28:01
vaccines to be cheaper
28:04
and making sure they were getting
28:06
to all the world's children. And
28:08
so that was kind of low-hanging
28:10
fruit and now we
28:12
have, you know, tougher issues. But with
28:15
the AIs, the upstream
28:17
discovery part of, okay, why
28:19
do kids get malnourished or, you know,
28:21
why has it been so hard to
28:23
make an HIV vaccine? Yes,
28:25
we can be, you know, way
28:27
more optimistic about
28:30
those huge breakthroughs.
28:33
You know, AI will help us with every
28:35
aspect of these things, the advice,
28:38
the delivery, the diagnosis.
28:41
The scientific discovery piece is,
28:44
you know, moving ahead at a pretty
28:46
incredible clip and the Gates
28:49
Foundation's very involved in funding quite
28:51
a bit of that. Yeah,
28:53
we had your friend Sal
28:55
Khan on the show recently
28:58
and got the chance to spend a bunch of
29:00
time with Khanmigo and I was just
29:03
astonished by what that can do. I
29:05
know you were recently in New Jersey
29:08
visiting schools that are implementing Khan
29:11
Academy's new programs and
29:14
that's pretty exciting, this idea that improving
29:16
education at scale for billions of people,
29:18
the impact of that is
29:21
pretty hard to
29:23
measure. Yeah, I mean, Sal's
29:25
book doesn't say,
29:28
okay, what world are we educating kids for? It's
29:30
just, if all AI was
29:33
available in education, you know,
29:35
that's pretty miraculous because you have
29:38
the other things shifting
29:40
at the same time, it's a
29:42
little more confusing. But, you
29:44
know, that realm where he says, okay, what
29:46
if it was just an education, you know,
29:49
it's incredibly positive. Yeah,
29:52
well, that gets to the personal part
29:54
of your, you know, I think you
29:56
have a new granddaughter. I know Adam has a seven
29:58
year old and when we think of this question of
30:00
like what does it look like? I
30:03
mean fantastic that our kids
30:05
will have an Aristotle level
30:07
private tutor to help further
30:10
accelerate their educational process. But there is the
30:12
question of like what will they
30:14
need to know to be effective in the
30:16
world? And my kids
30:19
and Andy's kids are a little older, but I know
30:21
Adam, you've got a younger daughter and Bill,
30:24
you've got a new granddaughter. It's
30:26
interesting because Bill, I wanted to come
30:28
at this from a slightly different direction, but since you brought
30:30
it up, she's able
30:34
to really, she watches me use
30:36
Whisper Mode on ChatGPT. She's seen
30:39
me live in an AI world and it's fascinating
30:41
to watch her be very
30:43
comfortable with a voice interface. Especially
30:45
at her age, it's actually easier for her to
30:47
do voice interface since she's still
30:50
learning how to spell. I mean, she just
30:52
figured out how to read. So I thought
30:54
that was an interesting, I'll call it look
30:56
into how much this can be, not just
30:58
natural language chat, but even voice chat versus
31:01
point and click. But Bill, I was going to ask
31:03
you something about the direction, maybe
31:05
come at this from a slightly different direction, which
31:08
is what do you think about
31:11
this debate? There's a little bit of a debate going on.
31:13
Maybe that's too strong of a word about whether
31:16
or not the fact that all these frontier
31:18
or foundation models have sort
31:20
of clustered at the benchmarks around ChatGPT-4.
31:23
And there's some people that
31:25
are on the side that we're plateauing
31:27
or something like that. But most of
31:30
the smartest researchers I follow tend
31:32
to still side with the fact that the scaling
31:34
laws are going to continue to apply for at
31:36
least the next couple of years. I'd love to
31:38
get your take on A, where
31:41
do you come out on that
31:43
discussion? And B, do you
31:45
find yourself rooting for it to plateau? Or
31:47
are you, like, emotionally agnostic
31:49
because of some of the concerns
31:51
around the technology? Well, the
31:54
big frontier is not so much
31:56
scaling. We have probably
31:58
two more turns of the crank on
32:00
scaling, whereby
32:03
accessing video data and getting
32:05
very good at synthetic data,
32:09
that we can scale up probably
32:11
two more times. That's
32:14
not the most interesting dimension. The
32:17
most interesting dimension is what I
32:19
call metacognition, where understanding
32:21
how to think about a problem in
32:24
a broad sense and step back and
32:27
say, okay, how important is this answer?
32:30
How could I check my answer? What
32:33
external tools would help me with this? The
32:36
overall cognitive strategy
32:39
is so trivial today that
32:42
it's just generating through
32:44
constant computation each token
32:47
in sequence, and it's mind-blowing that
32:49
that works at all. It
32:52
does not step back like a human and think,
32:55
okay, I'm going to write this paper, and here's
32:57
what I want to cover. I'll
33:00
put some facts in. Here's what I want to do for
33:02
the summary. You see
33:04
this limitation when you have
33:06
a problem like various math things,
33:08
like a Sudoku puzzle, where
33:11
just generating that upper left-hand
33:13
thing first causes it
33:15
to be wrong on anything
33:17
above a certain complexity. We're
33:20
going to get the scaling benefits, but
33:24
at the same time, the various
33:27
actions to change the
33:29
underlying reasoning algorithm from
33:32
the trivial
33:34
that we have today to
33:37
more human-like metacognition, that's the
33:39
big frontier. It's
33:41
a little hard to predict how
33:43
quickly that'll happen. I've seen that
33:46
we will make progress on that next year,
33:48
but we won't completely solve it for
33:51
some time after that. The
33:54
genius will get to be more predictable.
33:56
Now, in certain domains, confined
33:59
domains... We are getting to the
34:01
point of being able
34:03
to show extreme accuracy on
34:06
some of the math or even some of the health type
34:09
domains. But the open-ended thing
34:11
will require general breakthroughs
34:13
on metacognition. And
34:16
do you think that metacognition will
34:19
involve building in a looping
34:21
mechanism so the AI
34:23
develops an ability to ruminate, as
34:25
we homo sapiens do? And
34:28
is there, I've heard some people like
34:30
Max Tegmark suggest that that could be
34:33
part of what makes us conscious is
34:35
this ability to have conversations with ourselves.
34:38
Yeah, consciousness may relate
34:41
to metacognition. It's not a
34:43
phenomenon that is
34:45
subject to measurement, so it's always tricky. And
34:48
clearly these digital things are
34:52
unlikely to have any
34:54
such equivalent. But it
34:57
is the big frontier, and
34:59
it will be human-like in terms of,
35:01
you know,
35:03
knowing to work hard on certain hard
35:05
problems and having a sense of confidence
35:08
and ways of checking what
35:11
you've done. One
35:13
of the things that I'll just say in
35:16
the process of writing and interviewing you
35:18
for AI First, as well
35:20
as Reid Hoffman and Sam
35:24
Altman, Mustafa, it's been an
35:26
education for Adam and I. And
35:29
I come away from these conversations regularly
35:32
going, oh, my goodness.
35:36
And I'm
35:38
blown away at the, like
35:40
I'm paying attention every day
35:42
to the pace of the
35:44
technological advance by really many
35:46
different companies, large companies, there's a lot
35:48
of money, there's a lot of talent
35:51
being poured into this. And so the
35:53
pace of the development and
35:55
the potential impact of that
35:57
technological advance, I'm astounded
35:59
by and have some limited
36:02
understanding. Do you think
36:04
we're moving too fast? You know, if
36:06
we knew how to slow it down, a
36:09
lot of people would probably say, okay, let's
36:15
consider doing that. You
36:18
know, as Mustafa writes in his
36:20
book, the incentive structures really
36:24
don't have any mechanism
36:26
that's all that plausible
36:28
for how that
36:31
would happen, given the individual
36:34
and company and
36:37
even government-level thing. If
36:39
the government-level incentive structure was understood, you
36:42
know, that alone might be
36:44
enough. And, you know, like
36:46
the people who say, oh, it's fine that it's open
36:49
source, you know, they're willing to say,
36:51
well, okay, if it gets too good,
36:53
maybe we'll stop open sourcing it. But,
36:56
you know, will they know what
36:58
that is? And would they
37:01
really say, okay, maybe the next
37:03
one? You know, so you pretty
37:06
quickly go to, let's
37:08
not let people
37:11
with malintent benefit
37:14
from having a better
37:17
AI than, you know,
37:20
the sort of defense, good intent side
37:23
of, you know, cyber defense
37:25
or war defense
37:27
or bioterror defense. You're
37:30
not going to completely put the genie back
37:33
in the bottle. And yet, that
37:36
means that, you know,
37:38
somebody with negative intent will be
37:40
empowered in a new way. So
37:43
perhaps not a good idea for
37:45
the most sophisticated AI models to
37:47
be open source in your judgment,
37:49
given this global environment.
37:52
Yeah. And people sort of concede that
37:54
point in principle, but then
37:57
when you try to get to say,
37:59
okay, specifically, where would you
38:01
apply that? It gets a bit less
38:03
clear. I
38:06
mean, Adam and I were talking yesterday about how even
38:09
if it were possible, hypothetically, to stop
38:11
AI development exactly where it is right
38:14
now, it would probably take 10 years
38:16
of Forum 3 and other folks
38:19
helping companies and individuals figure
38:21
out how to apply the
38:23
technology that currently exists. I'm
38:26
not sure about that because, you
38:28
know, it's pretty clear, you know, I
38:30
want to make an image. Okay, what
38:33
do I have to learn? I have to
38:35
learn English. This is the software meeting us,
38:37
not us meeting the software. You know, so
38:39
it's not like there's some new menu, you
38:41
know, file, edit, window, help, and oh, you
38:44
got to learn that, you have to type
38:46
the formula into the cell. This
38:48
is you saying, hmm, I wish I
38:50
could do data analysis to see which
38:53
of these, you know, products is
38:55
responsible for the slow down. And
38:58
it understands exactly what
39:01
you're saying. So the idea that there's an
39:03
impedance of adoption, it's
39:06
not the normal thing. Yes, company
39:09
processes that are very
39:11
used to doing things the old way will
39:14
have to adjust. But if you look at tele-support,
39:17
telesales, data
39:19
analytics, you know, give somebody a
39:21
week of watching an advanced
39:25
user and, you
39:27
know, say no manual of any kind,
39:29
just, you know, learn by example of
39:32
how the stuff is being used. The
39:34
uptake, assuming there's no limit in
39:37
terms of the, you know, server
39:39
capacity that connects these things up,
39:41
which I don't expect, certainly
39:44
in rich countries, to be
39:46
a gigantic limitation. And
39:49
you're talking about an adoption rate
39:51
that won't be overnight, but
39:53
it won't be like, you know,
39:56
10 years. Like take
39:58
human translation. The
40:00
idea that a free product
40:03
provides arbitrary audio and
40:06
text human translation.
40:09
I mean, that was a holy grail of, oh my God,
40:11
if you ever had a company that could do that, it
40:13
would collect tens of billions
40:15
in revenue and solve the Tower
40:17
of Babel. Here,
40:19
a small AI
40:21
company is providing
40:24
that as an afterthought free
40:26
feature. Right. It's
40:29
pretty wild and you say, well, oh, how
40:31
are people going to adapt to free translation?
40:35
I don't think it's going to take them that long
40:37
to know, hey, I want to know what that guy
40:39
was saying. Yes, the quality
40:41
of that a year from now and
40:43
the coverage of, say, all
40:45
African languages will get
40:47
completed. The foundation's making sure
40:50
that even obscure languages,
40:52
languages that are not written,
40:54
we're in partnership
40:58
with others gathering the data for
41:00
those; the Indian government's doing that for
41:03
Indian languages. I
41:06
don't think saying, hey, calm down,
41:08
it takes a long time to figure
41:11
out how to utter the description
41:13
of the birthday card you want.
41:16
It'll take 10 years for the
41:19
lagging people to switch
41:21
their behavior. Well, we see,
41:23
I think Sam Altman said on your
41:25
podcast, Unconfuse Me, which I enjoy, that
41:27
they're seeing a productivity improvement of up
41:29
to 300%, I
41:32
think, among their developers. In
41:34
other sectors, I think we've seen reports of 25, 50%
41:37
increases in productivity. Just getting
41:40
that, the great Gibson line, the
41:42
future is here, it's just not evenly distributed. It
41:45
does feel like getting all companies to
41:48
fully benefit from that level of productivity
41:50
enhancement, it certainly will be
41:52
a process of some kind. I
41:54
was interested in your comment in the first chapter
41:56
of AI First, which is about productivity,
41:59
you said, Productivity isn't a
42:01
mere measure of output per hour.
42:03
It's about enhancing the quality and
42:05
creativity of our achievements.
42:09
What do you mean by that? Well, whenever
42:11
you have a productivity increase, you
42:14
can take your X
42:16
percent increase and increase the
42:18
quantity output. You can improve the quality of
42:20
the output, or you can
42:23
reduce the human labor hours that
42:25
go in as input. And so you
42:27
always take those three things. You
42:30
know, there are some things when they get
42:32
more productive, like when the tire industry went
42:34
from non-radial tires
42:37
to radial tires, even
42:39
though the cost per
42:42
year of tire usage went
42:44
down by a factor of four, people
42:46
didn't respond by saying, okay, I'm gonna drive
42:49
four times as much. So
42:51
the demand elasticity
42:54
for some things like computing or
42:57
the quality of a news
42:59
story, there's very high demand
43:02
elasticity. If you can do a better
43:04
job, you just leave the human labor
43:07
hours alone and take most
43:09
of it in the quality dimension. And
43:11
then you have a lot of things where
43:13
that's not the case at all. The
43:16
appetite for miles driven
43:19
did not change. The society is full
43:21
of many things that are
43:23
across that spectrum. And
43:26
so whenever you have rapid productivity increases,
43:29
you know, there was a memo inside Microsoft
43:31
about how we were gonna make databases so
43:33
efficient that it would become
43:35
a zero-sized market. Now
43:38
in that case, we're still in the
43:40
part of the curve where
43:42
you have demand elasticity, but you know,
43:45
someday even in that domain
43:47
we'll get past incremental
43:50
demand. If you were
43:53
making a guess right now, and
43:55
you mentioned healthcare and education, how
43:59
would you respond to the question about
44:02
what do you think the first big,
44:06
I'll call it breakthrough application
44:08
will be? Like for example, like one of
44:10
the podcasts that Andy and
44:12
I like to listen to, they were talking this
44:14
weekend, they keep saying, oh, we haven't seen the
44:17
big breakthrough application.
44:19
And I'm, which
44:22
is interesting because I'm not
44:24
sure that's true, but let's just take it
44:26
at face value that we're still in the sort
44:28
of, I'll call it experimentation
44:30
phase or whatever, which is what they
44:32
were trying to say. I'm curious to
44:34
get your, what's your thought? Like where
44:36
do we see the first big, the
44:39
Uber? Like, with location services and
44:41
mobile cloud, the first big app was
44:43
kind of Uber and everyone talked about
44:45
Uber being an example of that. And
44:47
then it was probably before that, it
44:50
was probably Google Maps, right? It was probably
44:52
map technology. That's right, that's right. So
44:54
we have, Bill, when you just think
44:56
out, do you go right to education,
44:58
healthcare? Where
45:00
does your head go when you think, oh, I'll
45:02
bet you the first big breakthrough app, consumer
45:05
app, or even industrial app will be what?
45:08
Well, I guess the naysayers are pretty creative
45:10
to be able to say something hasn't happened.
45:16
I mean, they don't think, memorizing
45:19
meetings or doing translation
45:22
or making programmers
45:24
more productive. I mean,
45:27
it's mind blowing. This
45:29
is white collar capability with
45:32
a footnote that in
45:34
many open-ended scenarios, it's not as
45:36
reliable as humans are. And
45:39
you can hire humans and they can go haywire
45:42
and so you have some monitoring. But
45:44
these things, when
45:46
put into new territory, are
45:51
somewhat less predictable, though
45:53
there are some domains where we can
45:55
bound what goes on, like
45:59
support calls or telesales calls
46:01
where you're not pushing off the
46:03
edge at
46:05
all. So I
46:08
don't know, I just can't imagine
46:10
what they're talking about. Yeah. Let
46:14
me try and I think it's the
46:16
comment when people say that, notwithstanding
46:22
what you just said, Bill, they're
46:25
creative in their naysaying capabilities. Because
46:27
I think your response is accurate
46:30
for sure. It's the
46:32
second order effect. When the car was developed, it could
46:34
get you from point A to point B. And
46:37
you might even be able to predict
46:39
the development of roads and highways, etc.
46:41
But you might not be able to
46:43
predict Los Angeles
46:46
or suburbs, drive-in
46:48
movie theaters. I
46:50
think, in a more modern instance,
46:53
the World Wide Web came along and
46:55
there were lots of brochure ware and
46:58
there were travel agencies, Expedia came
47:00
along. And that was all sort
47:03
of like run-of-the-mill first order effect.
47:05
But people point at Uber
47:07
as a second order effect
47:09
on the technology that was like,
47:11
you couldn't have predicted that. Now,
47:14
maybe you could, maybe you couldn't.
47:16
But that's what Adam's question I
47:18
think is going for. When you
47:20
look at AI, in many ways,
47:23
the game of search has already
47:25
changed, which is ubiquitous consumer
47:27
activity. And certainly, ChatGPT
47:30
was monumental, the
47:33
fastest-growing adopted technology
47:36
ever. So I'm not minimizing or
47:38
giving credence to the naysayers, but
47:40
it's really about the second order
47:43
effects. GPT-3 was not
47:45
that interesting. I mean, it was
47:47
interesting enough that a few people at OpenAI felt
47:50
the scaling effect would cross a
47:52
threshold. And I didn't
47:56
predict that and very few
47:58
people did. And we all know we only
48:00
crossed that threshold less
48:02
than two years ago, a
48:05
year and a half in terms
48:07
of general availability. So we are
48:09
very much in the phase where the people
48:11
who are open-minded,
48:14
and are willing to try out new things
48:17
are the ones using
48:19
it. But you just
48:21
demo, okay, here's image editing and
48:24
no, I'm not teaching you 59 menus
48:27
and dialogues in Photoshop to
48:30
do editing. I'm telling you type, get
48:33
rid of that green sweater, and people are like, oh,
48:35
I don't know if I could do that. I mean,
48:37
that sounds very hard. When
48:40
you show people that, it's like, what?
48:43
Make that photo bigger. I didn't take
48:45
a shot that was bigger, but I'd
48:47
like the photo to be
48:49
bigger. So fill in the missing piece to make
48:51
it bigger. It's like, what? Or
48:54
patient follow-up, where it calls you up
48:57
and talks to you about, did you fill
48:59
your prescription, how are you feeling?
49:02
What are you doing? I mean, people may
49:04
get saturated if they really try and
49:07
expose themselves to the various
49:11
examples. I
49:13
do think they'd be saturated though, my
49:15
God, this is a lot
49:18
of extremely concrete capability.
49:21
Then you think, okay, when
49:23
I call up to ask about my taxes,
49:25
when I want my medical bill explained, that
49:28
white collar worker is
49:32
almost free type mentality,
49:35
is the best way
49:37
to predict what this
49:39
thing suffuses to, even though I fully
49:41
admit there's a footnote there that it's,
49:44
in some ways, still a little bit of
49:46
a crazy white collar worker. We're
49:51
going to get rid of that footnote
49:53
over a period of years. I
49:56
know one of those crazy white collar workers who's the
49:58
CEO of a company that's growing very quickly, who
50:00
asked his top salespeople, what
50:03
takes you the most time during
50:05
this day? And they said, drafting
50:08
follow-up emails following sales calls. And
50:10
he created an instance of GPT
50:13
that, you know, pulled in all their
50:15
best practices, best communications, automatically
50:18
transcribes every phone call and automatically
50:21
generates the follow-up email. And
50:24
he's laying off half of his sales team
50:26
so that the best half of his sales
50:28
team can now work twice as efficiently. So
50:30
there we have both a success
50:33
story in the sense that it's a
50:35
highly efficient and wildly
50:37
impressive implementation of the technology. But
50:41
for the other half of the sales team, it's not quite as
50:43
exciting unless they can use new
50:46
AI technologies to build a competing
50:48
company or to do something else, which I guess,
50:50
you know, gets to this broader question of like,
50:52
to what extent do we think this empowers the
50:55
little guy versus the big
50:57
guy? I mean, we're seeing that just a
50:59
few big companies seem to
51:01
be the dominant players in the development of
51:03
the technology. But on the
51:05
other hand, it does seem that everyone has
51:08
access to GPT-4 Omni
51:10
now for free. So there's also
51:12
an equalizing element. Well,
51:16
it's important to distinguish two
51:19
parts of economic activity. One
51:21
is the economic activity of building
51:23
AI products,
51:27
both base level AI products and
51:29
then vertical AI products.
51:33
And we can say for sure that
51:36
the barriers to entry are uniquely
51:38
low in that we're in
51:40
this mania period where, you know,
51:43
somebody literally raised $6
51:45
billion in cash for
51:48
a company and many others raised
51:51
hundreds of millions. And,
51:53
you know, so the idea
51:56
that there's, you know, there's
51:58
never been as much capital going
52:01
into a new category. You could even say
52:03
a new mania category. I mean, this makes
52:06
the internet or the early auto
52:08
industry mania look quite
52:10
small in terms of the percentage of IQ
52:14
and the valuations that
52:17
come out of this. I mean, there
52:19
was no company before the turn of
52:21
the century that had ever been worth
52:23
a trillion dollars. Here we have
52:25
one chip company who doesn't make chips. It's
52:28
a chip design company that
52:30
in six months adds a
52:33
trillion dollars of value. And so
52:35
the dynamics within the AI
52:38
space is both
52:40
hyper-competitive, but with lots of entry.
52:43
And yes, Google and Microsoft have
52:46
the most capital, but that's
52:48
not really stopping people
52:50
either in the base capabilities
52:52
or in those verticals.
52:55
Once you leave the AI tools domain,
52:57
which as big as it is,
53:00
is a modest part of the
53:02
global economy, how that gets applied
53:05
to, okay, I'm a
53:07
small hospital chain versus a big
53:09
hospital chain. Now, when
53:11
I have these tools, does that level
53:14
the playing field or not?
53:17
you would hope that it would, and that you
53:19
can offer for the
53:21
same price or less a far
53:24
better level of service. All
53:26
of these things are in the furtherance
53:28
of getting the
53:30
value down to the customer. And
53:33
figuring out early in an industry where the
53:36
barriers are so that
53:38
some of the improvements stick
53:41
with companies versus perfect
53:43
competition where it all goes to
53:45
the end users. That's very hard
53:48
to think through. Like
53:50
picks and shovels is saying, okay,
53:52
look to the side industries, as
53:56
well as to the primary industry. Savings and
53:58
loans did better than home builders because
54:02
there was a more scarce capability
54:06
there that a few did
54:09
better than others. It's asking
54:11
a lot, but
54:13
people are being forced to think about the
54:16
competitive dynamics in these other
54:18
businesses. When you free
54:21
up labor, society is
54:23
essentially richer, in that through
54:26
your tax system, you can take that labor and put
54:28
it into smaller class
54:30
sizes or helping
54:33
the elderly better, and you're
54:35
net better off. Now, for the person involved,
54:37
they may like
54:39
that transition or not, and it
54:41
requires some political capacity to do
54:43
that redirection, and you can have
54:45
a view of our current
54:48
trust in our political capacity to
54:51
reach consensus and create
54:55
effective programs. But
54:59
the frontier of possibilities is
55:02
improved by increased productivity. You'd never
55:04
want to run the clock backwards
55:06
and say, thank God we were
55:08
less productive 20 years ago. We
55:10
were talking earlier about the impossibility
55:12
of slowing down or the great
55:15
difficulty of slowing down the current
55:17
pace of AI development. Do
55:19
you think AI companies should be
55:21
governed, and if so, by whom?
55:24
By boards, by government,
55:26
by all of the above? Well,
55:28
government is the only
55:30
place where the overall well-being of society
55:33
as a whole is represented, including
56:37
defense against attack
55:39
and a judicial
55:41
system that's fair and creating
55:45
educational opportunities. So you
55:48
can't expect the private
55:50
sector to walk
55:53
away from market-driven
55:57
opportunity unless the government
55:59
decides. what the rules
56:01
are. So this is, although the
56:03
private sector should help educate government work
56:05
with government, the governments
56:09
will have to play a big role here,
56:12
you know, so that's a dialogue
56:14
that people are investing in. Now
56:16
governments will take the things that
56:19
are most concrete, like what
56:21
are the copyright rules or what are
56:23
the abuses of deep fakes or, you
56:26
know, in some applications does the
56:28
liability, say, of health
56:30
diagnosis or hiring
56:33
decisions, you
56:35
know, mean that you ought to move
56:37
more slowly or create some liability
56:40
for those things. They'll
56:42
tend to focus in on those short-term issues,
56:44
which, you know, that's fine, but, you
56:47
know, the biggest issue has to do with the
56:51
adjustments to productivity
56:53
that overall,
56:55
you know, should be a
56:58
phenomenal opportunity if political
57:01
capacity and the speed at which
57:03
this was coming were paired
57:05
very well. Our environment
57:07
of polarization doesn't
57:09
help the effectiveness of our
57:11
government and I think
57:13
you mentioned on your podcast that in
57:16
a worst-case scenario we could imagine polarization,
57:18
you know, breaking our democracy. Do you
57:20
think AI can help us all get along and
57:24
if so how would it do that? Well
57:26
it's such a powerful tool that
57:28
at least we ought to consider
57:30
it for all our tough problems
57:33
where it can be beneficial or
57:35
where it can exacerbate things. So
57:37
certainly if somebody wants to
57:40
understand okay where
57:43
did this come from this article
57:45
or this video,
57:47
you know, what is
57:49
the provenance, you know, is that
57:52
provably a reliable source or
57:54
is this information accurate or, you
57:56
know, in general in my newsfeed,
57:58
you know, what am I seeing, and
58:01
somebody who's voting for
58:03
the other side, what did they see? And
58:06
try to explain to me what
58:11
has pushed them in that direction. You'd
58:14
hope that, again,
58:16
going back to the paradigm of
58:19
white collar capability being
58:22
almost free, that well-intended
58:26
people who want to bridge those
58:29
misunderstandings would have
58:32
the tools of AI to
58:34
highlight misinformation for them or
58:36
highlight bias for them or
58:39
help them be in the mindset
58:41
and understand, okay, how
58:44
do we bridge the different
58:46
views of the world that
58:48
we have? So, yes,
58:51
although it sounds outlandish, it's like
58:54
when people say, oh, let's use geoengineering for
58:56
climate, they're like, oh, no, you always
58:59
think technology might be
59:01
the answer. And, you
59:03
know, okay, I'm somewhat guilty of that.
59:05
But here, the AIs
59:07
are going to be both part
59:10
of the solution, while
59:12
if we're not careful, also
59:16
potentially exacerbating these things.
59:19
And you can almost say it's good that the
59:21
blue collar job
59:23
substitution stuff is more delayed than
59:25
the white collar stuff. So, you
59:27
know, it's not just any one
59:29
sector and actually it's the
59:32
more educated sector that's
59:34
seeing these changes first. I
59:37
hadn't thought of that. Okay, last question.
59:40
You've said that a possible future
59:42
problem that befuddles you is how
59:45
to think about our purpose as humans in
59:47
a world in which machines can solve problems
59:49
better than we can. Is
59:52
this a nagging concern that you continue to wrestle
59:54
with? How do you think about it now? Well,
59:57
I don't think somebody who spent 68
1:00:01
years in a world of shortage, I
1:00:03
doubt that either at that
1:00:06
absolute age or having been immersed
1:00:08
in such an utterly different environment,
1:00:12
that the ability to imagine this
1:00:16
post shortage type
1:00:18
world will come
1:00:20
from anyone near my
1:00:22
age. So I view
1:00:25
it as a very important problem that
1:00:29
people should contemplate, but
1:00:31
no, that's not one that
1:00:35
I have the solution or would expect
1:00:38
to have. Although
1:00:40
you have some experience with living in a
1:00:42
post scarcity world in the sense that you
1:00:45
haven't had scarcity in your own personal
1:00:47
life for a few years now. I
1:00:49
haven't had financial scarcity, but
1:00:52
somebody who's had the
1:00:54
enjoyment of being successful and
1:00:56
sees problems out there like
1:00:58
malaria or polio or measles,
1:01:01
the satisfaction that, okay, the
1:01:04
number of people who work on this, the
1:01:06
amount of research money for this is very,
1:01:08
very scarce. And so I feel
1:01:10
a unique value added in taking
1:01:13
my own resources and working with governments
1:01:15
to orchestrate, okay, let's not have any
1:01:17
kids die of malaria, let's not have
1:01:19
any kids die of measles. So you're
1:01:22
right financially that, what
1:01:26
I do for fun is
1:01:28
a potential kind of thing
1:01:30
that people can do, play pickleball,
1:01:32
because the fact that the machines
1:01:34
will be good at pickleball, that
1:01:38
won't bother us, we'll still enjoy
1:01:41
that as a human thing. But
1:01:46
the satisfaction of helping
1:01:50
reduce scarcity, which
1:01:52
is the thing that motivates me, that
1:01:55
also goes away. Yeah,
1:01:58
yeah, yeah, yeah. So the true last
1:02:01
question, rumor has it you're working on a memoir.
1:02:04
Can you tell us anything about that? Yeah,
1:02:06
we announced that
1:02:09
next February, sort of
1:02:12
a first volume that covers my life
1:02:15
up till the first two or
1:02:17
three years of Microsoft, about age 25
1:02:19
or so, called Source Code will come
1:02:22
out. So I'm
1:02:24
working on editing that
1:02:27
right now since
1:02:30
we're about to hit deadlines. But
1:02:32
yeah, we got a good reception to
1:02:35
the pre-announcement of that
1:02:37
first volume. Is GPT helping you out
1:02:39
with that? Actually
1:02:43
no, not because I'm against it or anything.
1:02:45
I suppose in the end we maybe
1:02:48
we should, but no, it's still
1:02:51
we're being a little traditional in terms
1:02:53
of how we're both writing and editing.
1:02:55
Well, there'll be two volumes or three
1:02:57
volumes, do you think? Three.
1:03:01
So we'll probably wait three years before we
1:03:03
do a second one, but there's
1:03:05
kind of a period that's Microsoft-oriented
1:03:08
and a period that's sort
1:03:10
of giving all the money
1:03:12
away, focused. Well,
1:03:14
if you and Andy play enough pickleball, maybe
1:03:17
you'll live long enough to write a fourth
1:03:19
volume. That's the career of
1:03:21
so. Making AI good
1:03:23
will make that the fourth
1:03:26
volume. Exactly. Well, Bill,
1:03:28
thank you so much for joining us
1:03:30
today. Such an interesting conversation. Yeah,
1:03:33
fantastic. Thanks, Bill. Thanks, Bill. John
1:03:43
Lennon said, count your
1:03:45
age by friends, not years.
1:03:48
I've always liked this quote and I've tried
1:03:50
to apply it. Always
1:03:52
be building new friendships, expanding
1:03:54
communities. And I've
1:03:57
tried to apply the same approach
1:03:59
to the process of learning. Always
1:04:01
be learning, ingesting new ideas, testing
1:04:03
my assumptions. But where
1:04:05
can you find a flow of the
1:04:07
best new ideas vetted by experts? There
1:04:10
is so much noise out there. I'm
1:04:12
so glad you asked. This is
1:04:15
why we started the Next Big
1:04:17
Idea Club. We've partnered with
1:04:19
hundreds of the world's leading nonfiction authors
1:04:21
to create audio summaries of their books.
1:04:23
We call these summaries Book Bites, and
1:04:26
our app features a new one every
1:04:28
single day. You can listen to a
1:04:30
Book Bite in 12 minutes
1:04:33
or read it in five. There's no
1:04:35
other place on the planet where you
1:04:37
can listen to book summaries created by
1:04:39
authors themselves. And that's not
1:04:41
all we have waiting for you when you download the
1:04:43
Next Big Idea app. We also
1:04:45
have video and audio masterclasses,
1:04:48
ad-free versions of this podcast,
1:04:50
new original audio books, and tons
1:04:53
of other member benefits. So
1:04:55
what are you waiting for? Open
1:04:57
your app store, search for the
1:04:59
Next Big Idea. There is no
1:05:01
better way to get smart fast.
1:05:03
Download the Next Big Idea app
1:05:06
right now. Wow,
1:05:13
Adam and Andy, so interesting.
1:05:15
Let's unpack some of our favorite moments. Adam,
1:05:18
for me, there was when you said, some
1:05:21
people say we're waiting for the breakout application
1:05:23
for AI. What's it going to be?
1:05:26
And Bill said, the naysayers are
1:05:28
pretty creative to be able to
1:05:30
say that nothing transformative has happened.
1:05:32
What's happening is mind blowing. I
1:05:34
thought that was a great moment.
1:05:38
I mean, there's several, I'm sure we'll talk about them.
1:05:40
That was definitely my favorite because classic
1:05:44
Bill in the sense of he's just
1:05:46
got such a great and unique perspective
1:05:48
of the way he sees the world
1:05:50
and explains the world, and he's right.
1:05:52
Like, the killer app is here.
1:05:56
And, you know, he relates
1:05:58
to another moment where he said, look,
1:06:01
one of the holy grails for a
1:06:03
long time was like a perfect translator
1:06:06
app, like real time natural language. And
1:06:08
this is like a free afterthought
1:06:11
feature of the
1:06:13
foundational AI systems that are out there.
1:06:16
And so his comment
1:06:18
which I agree
1:06:20
with, about how it's
1:06:22
kind of interesting that people are saying
1:06:25
we're still waiting for the Uber
1:06:27
of AI and yet this
1:06:30
white collar intelligence as a service
1:06:32
at production level is
1:06:34
available. And he pointed out, he goes,
1:06:37
it's still got issues and it hallucinates
1:06:39
and it has problems and whatever, but
1:06:41
as it is today, it is
1:06:44
quite the killer app. Yeah, I
1:06:46
mean, I don't think that sentiment can
1:06:49
be emphasized enough, both
1:06:53
just how profound the technology
1:06:55
is today and the
1:06:57
fact that we take for granted that in
1:06:59
an instant, this podcast
1:07:02
could be translated into, I think 150 different
1:07:04
languages instantly. Both
1:07:10
the taking for granted of that technological
1:07:13
leap forward, as well as
1:07:15
the plethora of
1:07:18
other capabilities that
1:07:20
exist today, while we're both
1:07:23
looking for and
1:07:26
sort of scoffing at the expectation
1:07:28
of the next consumer app, like
1:07:30
Uber, it's just
1:07:32
completely underappreciating the moment that we
1:07:34
are in. Yeah, and it
1:07:37
relates to another point Bill was
1:06:39
making just now
1:07:41
about how it's not
1:07:43
like it's doing all this and you need to
1:07:45
like go to school on how to use it.
1:07:47
He said it's the software meeting the human, you
1:07:50
just need to say what you want it to do.
1:07:53
And to the extent it can do it, it just does it.
1:07:55
And that's unlike any other software
1:07:57
we've ever experienced. So it's universally demonstrated,
1:08:00
democratically accessible, both in terms of
1:08:02
its ease of use and in terms of its
1:08:04
ability to show up at production scale
1:08:06
on a smartphone with its capability
1:08:09
set. I thought that was a really poignant
1:08:11
moment. Well, and then he made the point
1:08:13
about the acceleration
1:08:15
of just the capital and
1:08:18
of the businesses, and Bill's not someone who's
1:08:20
easily impressed by, you know, business
1:08:22
growth, right? But he pointed out that there was no
1:08:24
company in the world before 2000 that was worth a
1:08:26
trillion dollars. We just had
1:08:29
one chip design company add a trillion
1:08:31
dollars of value in six months, obviously
1:08:33
referring to Nvidia. Right. Someone
1:08:35
just raised six billion dollars for an AI company.
1:08:37
I think he was referring to Elon Musk. Right,
1:08:40
but clearly Bill Gates himself
1:08:42
is kind of wide-eyed about the
1:08:44
pace of this investment and
1:08:47
acceleration of business value. Yeah, I thought
1:08:49
another interesting moment. Tell me what
1:08:51
you guys think was when
1:08:54
we asked him about where this is going
1:08:56
and whether the scaling laws continue to
1:08:58
apply, and I thought, you know, he
1:09:00
gave a pretty specific answer which I learned
1:09:02
from, like he was saying we get two
1:09:04
more turns of the crank on scaling, literally in
1:09:07
terms of like how much more data we
1:09:09
can feed to it. And my guess is
1:09:11
we get quite a few more turns of the
1:09:13
crank when it comes to compute. And we'll
1:09:15
see how much of the scaling relates to
1:09:17
compute versus data. But his point was like
1:09:20
it's not about that as much as it's about metacognition,
1:09:22
I think was the word he used, and
1:09:24
this idea of how do you get the systems
1:09:26
to think deeper, at a new
1:09:28
level of thinking, etc. That was a great answer.
1:09:31
And I thought it was a new
1:09:33
way, I don't know about you
1:09:35
guys, of thinking about the scaling laws and
1:09:37
the progress these are making. Yeah,
1:09:40
yeah, I mean what's astonishing is we have
1:09:42
this kind of, you know, GPT-4 Omni
1:09:44
level intelligence
1:09:47
when the systems are really highly inefficient, as
1:09:49
I understand it, and we're
1:09:51
in the process
1:09:53
of building in much more intentional and
1:09:56
efficient storage of information and ways
1:09:58
of thinking. And then
1:10:00
of course, I have a
1:10:02
geeky obsession with human consciousness and
1:10:05
the question of whether it may become possible
1:10:07
to build some version of consciousness on silicon.
1:10:09
So I was pretty interested
1:10:12
in his comment that, yeah, metacognition is
1:10:14
the next capability we need to build
1:10:16
into AI. And yes, consciousness may be
1:10:18
related to metacognition. He did
1:10:20
say computers are unlikely to mirror humans
1:10:22
in this way of being conscious, but
1:10:25
unlikely doesn't mean it won't happen. What
1:10:28
was his point, where he
1:10:30
was being humorous, I think, that
1:10:35
thank goodness it's the white collar knowledge
1:10:37
workers that AI
1:10:39
is coming for. What was
1:10:41
his point at that juncture?
1:10:43
I took it to mean,
1:10:45
because we were talking about
1:10:47
the societal implications and
1:10:49
the inference, which he didn't say outright, was things
1:10:52
like, are we gonna need universal
1:10:54
basic income? And what happens
1:10:56
if you're displaced from
1:10:58
your current job or
1:11:01
need to be retrained into a new job? I think
1:11:03
his point was that white collar workers, I think he
1:11:05
literally said, tend to be more
1:11:07
college educated. And therefore, in theory,
1:11:11
are probably more malleable to
1:11:13
being retrained into another white
1:11:16
collar job to learn how to use
1:11:18
these systems. Whereas,
1:11:20
I think he was saying, and I don't know
1:11:22
this to be true, that it may be harder to retrain
1:11:24
a blue collar job than a white collar job. But I
1:11:26
think that was his point, whether it's true or not. I
1:11:30
took it as, maybe
1:11:32
there's more of a safety net for white collar workers,
1:11:34
that kind of stuff. Well,
1:11:36
and if you think of how destabilizing
1:11:39
it would be for society to suddenly
1:11:41
have every truck
1:11:44
and taxi driver in
1:11:46
the world out of a job. And I mean, that's
1:11:48
what we all thought was gonna happen 10 years ago,
1:11:50
right? And it
1:11:52
was a great kind of
1:11:55
nuance that I had not thought
1:11:57
about, that actually it's good for social stability that
1:12:00
we're gonna have a whole bunch of attorneys among
1:12:03
other people who are losing their jobs. And you
1:12:05
know what, they're gonna be okay, right? Yeah, they're
1:12:07
probably more, I mean, this is true and I hadn't thought
1:12:09
of it, because Bill does such a good job
1:12:11
of thinking macro, like to his point about the
1:12:14
work he's done to reduce, you know, child
1:12:16
mortality and all that kind of stuff. But,
1:12:19
you know, white collar workers, college educated
1:12:22
people, I'm guessing statistically, I don't know
1:12:24
this, are probably more likely to have
1:12:26
a higher percentage of home equity, ownership
1:12:28
of a 401k. Like
1:12:31
I don't know that, but I bet you there's more of
1:12:33
a safety net in general that has been built
1:12:35
up under that group. So yeah, I think that
1:12:37
was his point, Andy, that like,
1:12:39
and it's weird, remember Sam Altman mentioned that to us when
1:12:41
we met with him. He actually said to Andy and I, I
1:12:43
don't know if it made its way into the book, but I'll
1:12:46
give you kind of behind the scenes, he said, I thought
1:12:49
the thing it would be worst at would
1:12:51
be creative thinking, like creativity.
1:12:54
Right, yes. So he wasn't talking
1:12:56
about white collar versus blue collar,
1:12:58
but it's similar. Like he was
1:13:00
saying, I thought it would come
1:13:02
after, like it
1:13:04
would be better at like, I'll call it like, you know, rote
1:13:08
summarization and data analysis. And he was shocked
1:13:10
at how creative it could be, like it
1:13:12
could produce, I mean, the diffusion models can
1:13:14
produce an image, can produce a video, but
1:13:16
it can be creative in its thinking, in its
1:13:19
strategic thinking, which is why we
1:13:21
write about, we really emphasize to our
1:13:23
business clients, you really need to
1:13:25
be inviting AI to the table all the time, because
1:13:28
people don't think of it as a creative
1:13:30
tool, for creative thinking and
1:13:32
helping you come up with, you
1:13:34
know, solutions to your thorny
1:13:37
problems as a white collar worker. And it's actually quite
1:13:39
good at that. You know, Adam
1:13:41
Grant made the point, I think it was
1:13:43
in his book Originals, that
1:13:46
creative success is highly correlated
1:13:48
with the quantity of ideas
1:13:50
that are generated. So you
1:13:52
look at like Picasso, the
1:13:54
quantity of drawings
1:13:56
and paintings you generated, and Buzzfeed
1:13:58
famously used to... generate like 20
1:14:01
headlines for every article, pick the
1:14:03
best one to create this incredible
1:14:05
clickbait. And
1:14:07
it strikes me that having AI as
1:14:09
a creative partner will make it easier
1:14:11
for people in business to
1:14:13
be able to generate not just one or
1:14:16
two or three ideas for
1:14:18
a given angle on a marketing
1:14:20
campaign or a communication, but
1:14:23
a dozen or several dozen. And
1:14:26
it will still, at least for some time,
1:14:28
be the human that's doing the critical sort
1:14:31
of editorial selection process. You know, what's actually
1:14:33
interesting about that point, Rufus, is that one
1:14:35
of the things we've learned is
1:14:38
that in the best practice of prompting, if you
1:14:40
want to be a really good prompter, there's
1:14:42
a couple different techniques that work really well.
1:14:44
One of them is called chain of thought
1:14:46
prompting, which is where you're actually
1:14:49
forcing the AI to go
1:14:51
through its steps and show its reasoning, just like
1:14:53
a human would, as opposed to just trying to
1:14:56
skip to the answer. Related
1:14:58
to chain of thought like that is you
1:15:01
ask the AI to actually
1:15:03
produce 30 answers. So
1:15:05
for example, say it's a tagline. You actually tell it, I
1:15:07
want you to produce 30, and
1:15:10
then I want you, before you stop
1:15:12
in your prompt answer, to
1:15:15
rank the top five of the 30 you
1:15:17
produced and tell me why. And so all
1:15:19
of a sudden, you get an answer that's
1:15:21
so much better than if you just said,
1:15:24
give me a tagline.
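For readers who want to try those two techniques, here is a minimal sketch using the OpenAI Python client; the model name, the exact prompt wording, and the 30-candidates-rank-5 counts are illustrative assumptions rather than anything prescribed in the conversation, and any chat-completion API would work the same way.

```python
# A quick sketch of the two prompting techniques described above:
# (1) chain-of-thought prompting: ask the model to show its reasoning step by step,
# (2) generate-and-rank: ask for many candidates, then have it rank its best few.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and exact wording here are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chain_of_thought(question: str) -> str:
    """Ask the model to walk through its steps before giving a final answer."""
    prompt = (
        f"{question}\n\n"
        "Think through this step by step and show your reasoning, "
        "then give your final answer on the last line."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def generate_and_rank(brief: str, n: int = 30, top: int = 5) -> str:
    """Ask for n candidate taglines, then have the model rank its top picks."""
    prompt = (
        f"Write {n} candidate taglines for: {brief}\n"
        f"Then, before you finish, rank the top {top} of the {n} you produced "
        "and explain briefly why each one works."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_and_rank("a neighborhood coffee shop that roasts its own beans"))
```

The design point is the one the hosts describe: the wide generation step does the brainstorming, and the explicit ranking step forces the kind of editorial selection that would otherwise fall to the human.
Well, getting to the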
1:15:26
AI risk topic, I was interested to hear
1:15:28
Bill say that, yes,
1:15:31
if there was actually a way to
1:15:33
slow down, in response to, I think
1:15:35
it was your question, Andy, if there
1:15:38
was a way to slow down AI
1:15:40
development, a lot
1:15:42
of people leading companies would
1:15:45
probably choose to do so. I thought that
1:15:47
was his subtle way of saying, yes, if
1:15:49
we could slow down AI development now, that
1:15:51
would be a good idea. He didn't say that outright, but I
1:15:53
think that was the implication. But
1:15:56
then he sort of went on to the
1:15:58
practical matter that... Which is...
1:16:00
more capital and it's charging ahead.
1:16:02
Yeah, it's charging ahead. And incentives.
1:16:05
Incentives, and it's a global environment.
1:16:08
And we have to, you know, we have to, the good guys
1:16:10
have to have better technology than the bad guys. I
1:16:13
thought it was also interesting how he mentioned government
1:16:15
regulation and he did, if I
1:16:17
heard what he just said correctly, if I
1:16:19
interpreted correctly, he was saying, yeah, like it's
1:16:22
the only way, like it's the only way
1:16:24
that we have a chance. Yeah, it's the
1:16:26
only party. Yeah. Right. And
1:16:29
so the United States is just
1:16:31
so far behind on a regulatory
1:16:33
basis, policing privacy
1:16:36
in particular than say Europe.
1:16:40
Like they just have so many more protections. And
1:16:42
do I think that either the US
1:16:44
or Europe are going to get regulation
1:16:47
right for... it's really a tricky,
1:16:49
it's a very tricky topic. And
1:16:52
if anyone would have a negative association with
1:16:54
government regulation, it would be Bill Gates, right?
1:16:56
Yeah, you would think, yeah. I mean,
1:16:59
he had a, right. The antitrust
1:17:01
stuff that Bill and Microsoft went
1:17:03
through was extremely painful. So
1:17:05
the fact that he's saying, and we've heard
1:17:07
Sam Altman say this too, you
1:17:09
know, please regulate this sector. It's
1:17:11
important. I mean, those
1:17:14
weren't his exact words, but
1:17:16
clearly everybody agrees it's
1:17:18
important. Well, Andy and Adam,
1:17:20
I'd love to pose a question to you that we posed
1:17:22
to Bill, which is what's your
1:17:25
advice for your kids when
1:17:28
it comes to how to respond to
1:17:30
this AI journey
1:17:32
of ours, this AI transformation?
1:17:35
Is it jump in with two feet, learn
1:17:38
how to deploy and engage with
1:17:40
AI as fast as you can?
1:17:43
Yes. I mean, I think
1:17:47
I'm reminded of at different
1:17:49
points. I mean, I remember seeing my
1:17:51
first browser, the Mosaic browser, back
1:17:53
in 1994. I
1:17:56
think when it comes to technology, it's
1:17:59
a tool. It
1:18:01
can be really, really useful and powerful, and
1:18:03
I've been fortunate enough to be a career
1:18:07
technologist. I've enjoyed
1:18:10
the career, but I think AI is
1:18:13
as significant if not more significant than
1:18:15
the browser. So I've encouraged
1:18:18
both my kids, who are in
1:18:20
their 20s, to certainly
1:18:23
dive in and be aware and use it
1:18:25
for their professional and personal enjoyment
1:18:28
and advancement. It's interesting. I would
1:18:30
say to my
1:18:32
daughter, the same thing
1:18:34
I would say to an adult
1:18:37
right now, which is what
1:18:40
AI doesn't change is the
1:18:42
fact that you still, to be successful
1:18:45
in life, in my opinion, need to
1:18:47
demonstrate a growth mindset, intellectual
1:18:49
curiosity, and most important, passion
1:18:52
towards something. A
1:18:55
meta point here is that Andy and I
1:18:57
are passionate about connecting
1:19:00
the dots between technology and business, and
1:19:03
brands, and experiences, and
1:19:05
we've made a career out of it. But
1:19:08
to be honest, I would do
1:19:10
what I'm doing with Andy for free. Don't tell
1:19:12
Andy that, but if I could pay my bills
1:19:14
some other way, I'd do it for free. The
1:19:16
truth is that I mean
1:19:18
that. I love what I do, and
1:19:20
so it's cliche, but how does
1:19:22
that relate to your question? Well, I
1:19:25
was talking to someone else whose kid
1:19:27
is more like Andy's kids' age, in
1:19:29
law school, and I was like,
1:19:31
hey, and they were worried, oh my God, because of AI,
1:19:35
they're not going to be lawyers and
1:19:37
accountants. I'm like, I can tell you
1:19:39
this much, if they love law or
1:19:41
accounting, or they love the craft and
1:19:43
the profession, then,
1:19:46
like we say this to all of our clients, the leading
1:19:50
law firms are going to be the
1:19:52
best at using AI to further what
1:19:54
they do. My advice would be
1:19:56
yes, definitely like be
1:19:59
literate and be proficient
1:20:01
and experiment with these platforms as
1:20:04
much as you can because whatever, but that's
1:20:06
not going to be what makes you successful.
1:20:08
But if you don't do that, whatever your
1:20:10
passion is, if you don't have that tool
1:20:12
in your tool belt, you're just
1:20:15
gonna feel like you can't succeed as
1:20:17
well because you don't have that AI
1:20:19
literacy. That
1:20:23
was Adam Brotman and Andy Sack.
1:20:26
To read their new book, AI First, and learn more
1:20:28
about what they're doing at Forum 3, follow
1:20:31
the link in the episode notes. Today's
1:20:33
episode would not have been possible without the
1:20:36
help of many, many people. Special
1:20:38
thanks to Joanna, Jen, Andy,
1:20:41
and of course, Bill Gates. Today's
1:20:43
episode was produced by Caleb Bissinger,
1:20:46
sound design by Mike Toda. I'm
1:20:48
your host, Rufus Griscom. Next
1:20:50
week, we'll be celebrating Independence Day,
1:20:53
not with fireworks, but with a
1:20:55
conversation about my favorite founding father.
1:20:58
And I'm like, huh, Ben Franklin. Hadn't
1:21:01
given him much thought, any more than most
1:21:03
Americans. See him on the hundred dollar bill
1:21:05
occasionally when I have them.