Episode Transcript
0:00
Anatomy of an ad. Subconsciously trigger
0:02
emotions through music. Perfect.
0:06
Define an opportunity. Imagine talking to millions of
0:08
people across the U.S. like I am now.
0:11
Identify a problem. Creating an
0:13
audio ad is time-consuming. Offer
0:15
a solution. Utilize cutting-edge AI.
0:18
Imagine creating all that in under 30 seconds.
0:21
Well, we did. To create this ad.
0:24
To learn more about AI in the audio industry, download
0:26
the white paper from audiostack.ai. Hello,
0:42
everyone, and welcome to Talk Nerdy. Today is
0:44
Monday, June 10, 2024, and
0:49
I'm the host of the show, Dr. Cara
0:51
Santa Maria. And as always,
0:53
before we dive into this week's episode, I
0:55
do want to thank those of you who
0:57
make Talk Nerdy possible. Remember,
1:00
Talk Nerdy is and will always be 100% free
1:02
to download. And
1:04
the way that we keep this train on the
1:06
tracks, the way that I pay the edit team,
1:10
and the way that I am able to
1:12
continue to include
1:15
my incredible producer on
1:17
the show, Noel Dilworth, is through the
1:20
support of individuals just like you. So
1:22
the model that we use for the
1:24
show is the Patreon model, which
1:27
is very similar to kind of an NPR or PBS
1:29
model, where
1:32
individuals who can afford to support the show and
1:34
who choose to support the show can make
1:37
a per-episode donation by
1:39
visiting patreon.com/Talk
1:41
Nerdy. As
1:43
large or as small as you want, that
1:46
really does help offset everything, and it makes
1:48
it so that individuals who are not in
1:50
a position to pay don't have
1:53
to pay but can fully appreciate every
1:55
single episode. And now, our back catalog
1:57
of, gosh, 500... episodes, is
2:00
100% for
2:03
free. This week's
2:06
top patrons include,
2:09
let's see, Anu
2:12
Baravaj, Daniel Lang, David
2:14
J.E. Smith, Mary Niva,
2:16
Brian Holden, David Compton,
2:18
Gabrielle F. Aramillo, Joel
2:20
Wilkinson, Pasquale Gelati, and
2:23
Ulrika Hagman. Thank you
2:25
all so much. So
2:28
let's get into it. This week
2:30
I had the opportunity to speak
2:32
with Dr. Leila Takayama. She
2:34
is a human-robot interaction specialist
2:37
with a long
2:39
history of training and
2:41
publication in social
2:43
science and design across robotic
2:46
and, actually, AI systems as
2:48
well. So she earned her
2:51
PhD from
2:53
Stanford University and
2:56
she has worked in a lot
2:58
of different multidisciplinary organizations. Currently, she
3:00
is the vice president of design
3:03
and human-robot interaction for Robust AI
3:05
and she's going to tell us
3:08
a little bit more during the
3:10
show about the really cool work
3:12
that Robust AI is doing. So
3:15
without any further ado, here she is,
3:17
Dr. Leila
3:20
Takayama. Well,
3:22
Leila, thank you so much for joining me
3:24
today. Sure. Thanks for having me. So
3:28
I am excited to talk
3:30
all about your interesting work
3:32
in AI and robotics and
3:35
kind of human-machine relationships
3:38
and design. But before we get into
3:40
the work that you're actively doing, I'm
3:43
super curious. I think
3:45
my listeners probably know this by now because I tend
3:47
to follow a bit of a formula, but
3:49
I'm super curious how you got to where
3:51
you are. So I'd love to learn a
3:54
little bit more about your background and your
3:56
education. Sure. So my
3:59
background is actually closer to
4:01
yours. So I have a background in
4:03
psychology. And I started
4:05
out in cognitive science, which
4:07
is this interdisciplinary field where
4:09
you study neuroscience and computer
4:11
science and psychology, philosophy, linguistics.
4:15
And so I kind of stumbled
4:17
into looking at
4:19
human-computer interaction first because
4:21
computers are so poorly designed to
4:23
work for people. And
4:26
then I stumbled into human-robot interaction next
4:28
because they're even worse, which
4:32
is kind of a sorry state of
4:34
the world. But it felt to me like
4:37
robotics has all this promise, supposedly, right?
4:39
Sci-fi tells us that it's gonna be
4:41
amazing. And then you actually
4:43
interact with robots and it's just awful. And
4:46
so I feel like there's a big disconnect
4:49
there that could actually be addressed by folks
4:51
who understand what people can
4:53
do and what they want to do. And
4:56
how do we amplify human skills as opposed
4:58
to trying to just do exactly what humans
5:00
do already? So it's
5:03
interesting thinking that, coming from cognitive science,
5:06
the study of the human mind and brain and
5:08
how it relates to all of these things is
5:12
so fundamental. Whether we're
5:14
talking about computing or robots,
5:18
we are the users and we
5:20
are the ones who engage. Yet at
5:22
the same time, it's not
5:24
always the case that that, I
5:27
guess, layer is included from
5:29
the outset, is
5:32
it? Like oftentimes we're talking like
5:34
engineers, physicists, you know, people
5:37
who aren't thinking about the human aspect. Absolutely.
5:42
As much as I love my engineering friends, they
5:45
often design things for themselves or
5:48
for their best friends, right? And so I think
5:51
when you do that, you're limited in what you're
5:53
gonna actually come up with and you're limited in
5:55
who's gonna be able to use your stuff. And
5:58
so, a lot
6:00
of the folks that I work with today are more folks
6:03
who have had that perspective and may have
6:05
fallen flat on their faces and been frustrated
6:07
that no one was smart enough to use
6:10
their robot or use their computer. And
6:13
then they want more. The ones
6:15
who want their wonderful tools
6:18
to get put in the hands
6:20
of a broader population of people
6:22
are the ones who want to
6:24
engage more with human-centered design. And
6:26
so they tend to be the ones that I gravitate towards
6:29
because they care about people who are
6:31
not exactly like themselves. Right.
6:34
And that's such an important point
6:36
is when we talk about human-centered design, there is
6:38
no one way to be human. It's
6:41
so variable. So we have to sort of
6:43
be creative in making sure that
6:45
we reach the human
6:48
experience as the kind of royal
6:50
we. Yes.
6:52
I think for me, one
6:54
of the biggest red flags is
6:57
when somebody tells me, I made this
6:59
awesome robot. And I asked them, cool,
7:01
who's going to use it? And their
7:03
answer is, everyone. That's
7:06
when I know, OK, we got some work to
7:08
do here. Let's
7:10
get a little more specific. Yeah.
7:14
I worked for quite
7:16
some time in television, and I used
7:18
to struggle with that as well when I would
7:21
read for a new show or we'd be talking
7:23
about how we're going to host this thing or
7:25
how we're going to approach it. And yeah, when
7:27
the network or the production company would be like,
7:29
well, OK, so who's the audience? They're everyone. It's
7:31
like, I can't talk to everyone. I
7:33
have to know who I'm talking to. I
7:36
don't talk to children the same way I talk
7:38
to adults. And I don't talk to experts the
7:40
same way I talk to people who don't have
7:42
expertise in these topics. Absolutely. Yeah. And it's the
7:45
same problem when you're designing a computational system. You
7:48
got to know, who are these people? What are
7:50
they trying to do? What do they know? What
7:52
are they capable of? And then you can figure
7:54
out, is this going to work for them or
7:56
not? And I think maybe
7:58
the
8:01
individual listener right now who doesn't
8:04
work in a tech field
8:08
may not, and I'm speaking
8:11
for myself here, may
8:13
not realize or often think
8:15
about how many different ways
8:18
that humans interact with machinery,
8:20
with computational machines, with robots.
8:23
I think we're all thinking about our phones and
8:25
our laptops. My
8:27
everyday interaction, I'm
8:30
the user and my phone needs to be
8:32
user-friendly for everybody. There are
8:34
so many different industries and so many
8:36
different applications, aren't there, where you
8:38
have to think really specifically? Yeah,
8:41
that's totally true. I think
8:43
maybe another, if we want to
8:45
go the robot direction, probably the most popular robot
8:49
in the world would be the Roomba.
8:52
The little vacuum cleaning robots that look like a
8:54
big hockey puck. Most
8:58
people who have pets that have fur tend
9:00
to have one of these robots too. I
9:03
think that's another common example of
9:05
a hardware software system that's
9:08
in our everyday lives and sometimes we don't
9:10
call it a robot, we just call it
9:12
the vacuum cleaner because it vacuums and
9:14
it's really great that it does that for
9:17
me. As
9:19
you're saying, not everyone orients towards
9:21
these the same way, not everyone
9:23
uses them the same way.
9:25
I think one of the beautiful things about
9:28
robots in particular is that
9:31
they tend to come with
9:33
some gender norms. A
9:36
really good friend of mine, Jodi Forlizzi,
9:38
who's at Carnegie Mellon, did this awesome
9:40
study where they
9:43
randomly assigned half of these families
9:46
a Roomba and
9:48
they randomly assigned the other half an
9:50
equally valuable vacuum cleaning stick that you
9:52
just hold in your
9:55
hand and push around. They
9:58
let them just use them or not. And
10:00
they came back and interviewed them a few times to see
10:02
how it was going. And,
10:05
you know, cleaning the house tends to
10:07
be gendered as female, right?
10:10
And so women tend to be
10:12
expected to take on that work.
10:14
But when you have a power
10:16
tool that's called a robot in
10:18
your house, guess who helps out more
10:21
with cleaning the house? Which is just
10:23
great, right? Like that wasn't an intended
10:26
outcome, but the men participated more
10:28
in cleaning the house because
10:31
it was a robot. But they didn't when it
10:33
was just a handheld vacuum stick, right?
10:36
That's so fascinating. I
10:39
love that. And, sorry, stay
10:41
with me. I mean,
10:43
that must also be
10:45
kind of frustrating sometimes when
10:48
you're wanting to design or
10:50
to approach these
10:53
problems based
10:55
on usability. And
10:57
at the same time, you probably
10:59
don't want to reinforce gender norms
11:01
or racist or sexist approaches. And
11:03
so what a difficult thing to
11:05
say: we want to overcome, but
11:07
we also want to make sure
11:09
that it fits within the zeitgeist.
11:11
Yeah. Yeah. And they seem to
11:14
have come up with a design that kind
11:16
of works across
11:18
the genders, right? Which is
11:20
kind of nice to see. It
11:23
doesn't always work that way, right? Right.
11:26
And it's interesting that you did mention
11:28
the kind of gendered nature because you're
11:30
right. I do think that
11:32
we often think of robots as being
11:36
male or as somehow relating
11:38
to male tasks more. Where
11:41
do you think that that comes from? If
11:43
I have to watch another robot deliver beer
11:45
to someone sitting on the couch, I'm going
11:48
to scream. For
11:50
some reason, everyone thinks that this is a
11:53
new idea, but grad students in research labs
11:55
have been doing this for decades now. And
11:58
you can guess what the majority of... those
12:00
grad students are in terms of gender. Right.
12:03
And is that why, do you
12:05
think? Because, like,
12:08
historically the field was dominated
12:10
by men, so the perspectives were
12:12
very, it was very male gaze.
12:15
That would be my guess, yes,
12:17
that just participation in imagining a
12:19
future with robots, right, has largely
12:22
been dominated by men. It
12:25
doesn't, that's not true everywhere. I
12:28
will say, like, in my fields, right, I
12:30
study human-robot interaction, we have
12:33
better representation of women in the field.
12:36
It's not 50-50, I don't think. But,
12:38
you know, when I go to the human-robot interaction
12:41
design conferences and you go to
12:43
the bathroom, there's actually a line in
12:46
the women's restroom. And we celebrate that, because
12:48
that's not true at the
12:50
straight up robotics conferences. Right.
12:53
And so there is, it's interesting that
12:55
the minute you add the human component,
12:57
more women feel comfortable entering
12:59
the field. Or we're super frustrated
13:02
with what they look like right now. And
13:04
so we're rolling up our sleeves, and we
13:06
want to fix it, right? Design
13:09
for inclusivity. Absolutely.
13:11
And I think, and this extends
13:13
sort of beyond hardware,
13:15
right, because we often think about
13:18
robots as a form of hardware.
13:20
But we're also talking about the
13:22
software, the why of it all,
13:27
the how of it all, and
13:29
actually what these different things are
13:31
doing, whether they have a physicality
13:33
or, like, with regard to
13:35
something like artificial intelligence,
13:39
that this is sort of this
13:41
thing that's happening in the background to
13:46
accomplish some sort of task or
13:48
some sort of goal. But those
13:50
gender concerns exist regardless
13:52
of whether we're talking hardware or not, right? Yeah, that's
13:54
right. I mean, if you look at, I don't know,
13:56
like a voice agent, right? So there's been a lot
13:58
of talk lately about that. her,
14:01
the film, and using
14:03
voice casting. Who are you going to
14:05
voice cast and why? What
14:08
is the character that you're trying to show
14:11
or present to your end users? That's
14:13
a very big and important design decision.
14:17
Whether we like it or
14:19
not, we do use stereotypes
14:21
because they're shortcuts. They're heuristics
14:23
that people use cognitively because
14:25
we're cognitive misers. We're kind
14:27
of lazy when it comes
14:29
to thinking through things. If
14:31
you, for example, presented a
14:33
bunch of information about products
14:35
that are stereotypically feminine, so
14:38
it's cleaning products or fashion, people
14:41
will actually believe you more and buy more stuff
14:43
if you use a female voice agent than a
14:45
male one. Whereas if you're selling
14:47
things that are stereotypically male, say, I
14:49
don't know, power tools or sports equipment,
14:51
people who hear it from a male
14:54
voice agent are more likely to
14:56
believe them and buy more stuff
14:58
from them than if it's female.
15:00
We will deny that we're doing
15:02
that because we know that that's
15:05
not right. But
15:07
then if you look at consumer behavior, it follows
15:10
those patterns. We're falling back on those social
15:12
norms that are sort of deeply ingrained in
15:15
us because that's the world that we live in and
15:17
that's the experiences that we've had and those patterns that
15:19
we see. I think one of the
15:23
frustrating things, but also a
15:25
good thing to know about because it could drive
15:28
the way that we choose to design
15:30
things, is picking the perceived gender or
15:33
the perceived age or the perceived geographic
15:35
origin of even just
15:37
voice agents, not robots. Because
15:40
we use those stereotypes to make
15:42
sense of these agents that we're interacting
15:44
with today. It's
15:47
interesting the example that you use there. It's
15:49
such a, I think, clean example
15:52
of, you know, I'm thinking about
15:54
my own consumer behavior and I'm like, yeah, totally. I
15:56
don't want like a man trying to sell me like
15:58
makeup or something like that. But then,
16:01
right, exactly. But then
16:03
you move into this, what
16:06
I find to be a bit more pernicious,
16:09
utilization of the
16:11
female voice
16:14
to encapsulate helper roles. So like,
16:16
oh, my calendar, my assistant, because
16:18
it's the man who does the
16:20
real work, and then it's the
16:22
woman who supports the man. I
16:24
think that it's so
16:26
sad because we
16:28
were supposed to have moved past that. I know
16:30
we're not past it. Of course, we're not past
16:32
it. But we're not in the era of the
16:35
Mad Men executives in their offices and
16:38
all the female secretaries sitting out
16:40
in the bullpen. We're not
16:43
in that era anymore. I remember going to Caltech
16:45
and being like, why
16:47
are there only men's restrooms
16:49
on this floor? Why are the restrooms
16:51
on separate floors? They're like, oh,
16:53
because back in the day, this
16:55
is where the scientists were, and
16:57
all the women were answering phones.
16:59
Like Jesus. But now we're just
17:01
reinforcing that again. Yeah. As
17:03
designers of these systems, we don't have to. I
17:06
think the beauty of technology is you can
17:08
decouple these things that used to be coupled.
17:11
I think that's what my old grad school
17:13
advisor, Cliff Nass, used to say, wouldn't it
17:15
be cool if the
17:17
physics teachers that are virtual
17:20
were all female? So kids
17:22
grew up expecting their physics instructors to be
17:24
female. That's fine.
17:26
That's super doable. And
17:29
it's just a matter of voice casting. And
17:31
so we could actually change what people
17:33
are exposed to in
17:36
order to start to change those
17:38
patterns that people are noticing in society, even if
17:40
they face different ones later. Yeah.
17:43
And I think that's such an important
17:45
point. There's nothing essentialist about this. There's
17:47
no fundamental difference. These are
17:49
all norms that have been shaped over
17:52
millennia of very often people in
17:54
power maintaining that power. Yeah. And
17:56
we can shake them up. Right.
17:58
I love that. It
18:00
takes me to this other hot topic
18:04
that I see a lot when discussing things
18:07
like AI, especially large
18:09
language models. This idea
18:11
of I
18:15
don't want the robots or I don't want the
18:17
AI to be making art and
18:20
music and poetry. I want the AI to
18:22
be doing my dishes. Yes. I saw that
18:24
tweet and I love it. I
18:27
think this is an important conversation that's
18:29
been happening quite a bit. I'm curious
18:31
your take on that because I'm sure
18:33
that that's front and center for you and
18:35
your work. Oh, absolutely. I think we've talked
18:37
earlier about who is this for. I think
18:39
the very next question is, what is it
18:41
for? What's it going to do? Why do
18:44
we want that to do that thing? A
18:48
nice rule of thumb that is
18:50
from robotics is that robots should do things
18:52
that are dirty, dangerous and dull. I
18:56
would add to that things that could be damaging to
18:58
people, but we call them the
19:00
three D's. I think we don't have
19:02
that yet for AI. We have people
19:04
working on AI safety and thinking about
19:07
ethics and policy, but
19:10
I think we don't have a nice
19:12
framework yet for thinking about how are
19:15
you really going to decide which tasks
19:17
are worth tackling first. Coming
19:20
from a human-centered design perspective, usually what
19:23
I do is figure out, who
19:25
are we working with? What do they
19:27
care about? What do they hate? What are their
19:29
pain points and how do we work on those first?
19:33
Because if you go and take away the thing
19:35
that they love about their job first, guess what's
19:37
going to happen? That robot's going to get hijacked.
19:41
Someone's going to hit that big red button and
19:43
shove it aside. But if you
19:45
take on the task that they really wish weren't part of
19:47
their job and you give them more
19:49
time to do the things that they do love, I
19:51
think that can make a really big difference in terms
19:53
of adoption and long term use. Absolutely.
19:57
It's funny, I'm starting to see it in
19:59
the medical... fields more and more where at
20:02
least I'm being served ads right now. I
20:04
love spending time with my patients. I don't
20:07
love charting. I don't want to write notes.
20:10
That is a perfect example. I don't
20:12
want an AI to replace me in
20:15
face-to-face with my patients, but I also
20:17
do want technology
20:19
to help me with the menial and massive
20:21
time-suck parts of my job. Yeah. There are
20:23
super tedious parts of our jobs that everyone
20:26
would love to get rid of. Or maybe
20:28
the backbreaking part of your job. Maybe you
20:30
want to work on the part that actually
20:32
makes use of your special
20:34
skills, like having
20:37
a good bedside manner with
20:39
the patient. Those are
20:41
things that I think in my most
20:45
optimistic future of these systems, we
20:48
would be building AI systems or
20:50
robotic systems that are complementary to
20:52
us because we already know
20:54
how to make things that are just like us. Making
20:57
babies. I
20:59
think if we can build things that complement
21:01
our skills so that we can do better,
21:04
that's a more powerful way forward than
21:06
just trying to replicate what we've already
21:08
got. I find, and you know this, there's
21:12
a lot of people talking about artificial
21:14
general intelligence or AGI. My
21:17
question is, why do we
21:19
want to make it smart the same way
21:21
we are? We have so many limitations. What
21:25
I want is something that's better than me in other ways.
21:29
My memory is not great, but
21:32
computer memory can be pretty awesome compared
21:34
to that. Why aren't we excited about
21:37
leveraging that and working together better? It's
21:41
interesting. You mentioned something there
21:43
that struck me. One
21:47
of the topics that I remember speaking
21:49
about on the other podcast
21:51
that I work on, on the Skeptic's Guide
21:53
to the Universe, and the host, Stephen Novella,
21:56
talking about this tendency of human beings. There
21:58
may even be a name
22:00
for it, like a cognitive bias or something.
22:02
But this tendency of human beings, when
22:05
they are first innovating to try to
22:07
do exactly what you mentioned, it's like
22:09
reinvent the thing that we already have.
22:11
So when you look at the first
22:13
cars, they look like
22:16
horse carriages. Yeah. Like
22:18
they look like that because that's how you would
22:20
get around. Of course, the car is going to
22:22
look like that. It took time to iterate and
22:24
go, oh wait, that's actually not the most ideal
22:26
shape of a car. We
22:29
can make it more streamlined. Do
22:31
you think we're in that era right now
22:33
with robotics and AI, or do you think
22:35
we're still making horse buggies? I
22:38
think we're still making horse buggies, or at least
22:40
a lot of folks are still making horse buggies.
22:43
Yeah, that's just the stage of technological
22:45
development. I think we've still got
22:47
a lot of learning to do, and I don't mean machine
22:49
learning, I mean human learning. Once
22:52
you actually deploy these things and put them in
22:54
the hands of end users and see what really
22:57
happens, that's where I think
22:59
we can start to learn. Maybe having better,
23:07
more streamlined designs of those cars can
23:09
help because they're moving faster. What
23:11
does that look like for a chatbot?
23:13
What does that look like for a floor
23:16
cleaning robot? We're still
23:18
in the very early days of
23:20
trying to stumble our way there. Can
23:23
I ask, as somebody who's done a
23:25
lot of academic work, who works with
23:28
students, who experiences also maybe from a
23:31
consulting perspective, the
23:34
inner workings of corporate America and
23:36
how they think. Bear
23:38
with me on this question, hopefully I can get
23:40
to the root of it. I think about the
23:42
capitalist ethos of grow,
23:48
grow, grow, improve, improve, improve, progress,
23:51
progress, progress, right? I
23:53
think about how those
23:55
pressures on industries often
23:57
induce a need,
24:00
an urge, a compulsion almost
24:03
to solve problems that don't
24:05
even exist. How
24:07
often is that something that you've grappled
24:10
with in your work? Seeing
24:12
people coming up with these newfangled
24:14
ideas where you're like, what is
24:16
that solving? What problem are
24:18
you solving with that? We don't need
24:21
that just for the sake of making it.
24:24
Oh, I see that a lot. I
24:27
would call that the demo-or-die
24:29
culture. You got to do
24:32
something flashy because the executives are coming or
24:34
because the investors are coming. That
24:38
cycle of just demoing and demoing
24:40
and demoing and doing whatever is
24:42
shiny will get you somewhere, but
24:46
it's not necessarily going to get you to
24:48
the point of providing real value
24:52
to a set of customers who might
24:54
actually want that thing in their homes
24:56
or want that thing in their workplace.
24:59
The equivalent of a one-hit wonder product
25:01
or something. Oh, totally. Yeah,
25:03
it's on the shelf and then people get
25:05
tired of it really fast because it's not
25:07
actually solving the problem.
25:10
Yeah. You may have
25:12
seen, especially in hospitals, there's been quite a
25:14
few people trying to build robots for hospitals.
25:16
There are robots in the hospital where I
25:18
work 100%. They come in
25:21
and out of the elevator sometimes. Yes. Do
25:23
you ever get in the elevator with them? Yeah. I
25:26
think the ones that I've seen
25:28
so far are usually with a
25:30
handler still. Oh, okay. That's smart.
25:33
They look like ET. They're
25:35
like this size of ET, I would say.
25:38
Diminutive humans. They
25:41
have some of those
25:43
human-like faces with the
25:45
eyes and stuff, I think, to make them
25:47
more palatable and easier to interact with. I
25:50
don't know yet what they do, but I
25:52
do see them going in and out. Usually
25:55
there's somebody following them with a clipboard. Yeah.
25:58
They're in the learning stage. We're
26:00
trying to figure out what it's for. So that's
26:03
good. That's a good first
26:05
sign. Oh my goodness. Yeah, I mean,
26:07
you've also probably seen, right, there's often
26:09
places, including hospitals, where they just sort
26:11
of have robots on display. My
26:15
friend Matt Beane down at UC Santa Barbara
26:17
has done a ton of work on this
26:20
space where, you know, there is marketing value
26:22
for a hospital to show like, we have
26:24
robots. We are from the future, right? And
26:26
sometimes patients will, you know, ask for the
26:29
robot for their surgery, right, instead of
26:31
having human hands in their body. And
26:34
so there's certainly marketing value
26:36
that some companies are leaning into. I
26:40
think, you know, that's
26:42
one step forward, maybe. But
26:44
it makes me sad when I see robots that
26:47
were designed to perform a task instead
26:49
just being used for advertising. Yeah,
26:53
that is a bummer. Yeah. Yeah,
26:55
I think about like, it's funny because I'm
26:58
sure there is a lot of value
27:01
in a robot-assisted laparoscopy. But when I
27:03
had my, and, you know,
27:05
fans of the show know about it because I talked
27:07
about it on air, but when I had cancer now
27:09
almost two years ago and I had a laparoscopic
27:14
hysterectomy, it was comforting to know that my
27:16
surgeon, who had been my gynecologist for years
27:18
and I knew very well, was going to
27:20
be the one doing the surgery. And that's
27:23
not to say that in a robot-assisted surgery, it's
27:25
like there's no people in the room. Like,
27:28
of course there's still surgeons doing the
27:30
surgery. But I
27:32
think we're at the stage where having it be
27:34
a robot is not comforting to me yet. Yeah,
27:38
and if you get to see
27:40
those surgical robots right now, they're a little scary, right?
27:44
And so there, you know, you got to look at
27:46
the data, right? Like what are the patient outcomes? Like,
27:48
is it actually better? Is it helping with shortening
27:50
the recovery time? At
27:53
the end of the day, you're right.
27:55
Like especially in the U.S., right, those
27:57
robots for surgeries are teleoperated by a
27:59
surgeon, right? And
28:01
there's a lot of controls to make sure that
28:03
they don't do the wrong thing. Yeah. And
28:06
this is a great thing if it
28:09
means, you know, increasing equity.
28:11
If it means more people being able
28:13
to have the service, you
28:15
know, available to them or to do it
28:18
cheaper or having a surgeon who's, you know,
28:20
not local, but who's very good at this
28:22
thing, now maybe they can perform it across
28:24
the country. I mean, that's amazing. But
28:27
I'm still really wary. Yeah, especially
28:30
with, you know, the internet being as
28:32
reliable as it is. Seriously, yeah. The
28:34
internet just hiccuped while I was on
28:36
with a client earlier. I was
28:38
doing therapy and the internet froze. Like, oh, you're
28:40
frozen. Oh, God. You know, like, this is, we're
28:43
not living in a
28:45
world yet where the
28:47
infrastructure always supports these
28:49
things. Yeah, absolutely. We
28:51
have a long way to go there. And, you
28:53
know, it's like setting up audio visual connections,
28:56
as you know well, right? Like, it's hard to
28:59
get it to work. It's hard to get
29:01
the HVAC system in the building to work too.
29:03
So, you know, everyone's comfortable at their temperature.
29:06
Right. So reliability,
29:09
and I think reliability of a machine,
29:12
you know, whether
29:14
we're talking about a robot or like
29:16
an AI, reliability is directly linked to
29:19
trust. And
29:22
so how do you, how do you, you
29:25
know, kind of factor trust into the
29:27
work that you do? Oh my gosh.
29:29
So much, right? There are so many
29:31
ways that robots can break our trust,
29:33
right? And I think one of the
29:36
very common ways that they do that
29:38
is that we overpromise on what
29:40
they can do. People
29:43
have watched those, you know, fancy demo
29:45
or die videos, and they get really
29:47
excited about them because they think the
29:49
robots can actually do more than they
29:51
actually can. And so I
29:53
think one of the big challenges that we face with
29:55
the AI systems too, right? You know, they're, they're trained
29:57
on a certain data set. And then
29:59
when they're deployed, it may be in a different kind of
30:02
setting. And you cross your fingers and just hope that
30:04
it's going to work OK. Sometimes it
30:06
does, and sometimes it doesn't. And
30:09
so I think setting expectations with
30:11
end users is super important for
30:14
earning the trust and
30:16
being honest about what is possible and what
30:18
we're not so sure about yet, letting
30:20
them know when they're getting into the territory that
30:24
maybe you should be ready to catch it in case it
30:26
falls if you're going to use it over
30:28
there. But if you use it over here, it's going to be fine. And
30:32
it's not just education. It's
30:34
also just figuring out how to be
30:36
more transparent at
30:38
the right time, at the right place with those
30:40
end users to let them know when they're at
30:42
the edges of what we
30:44
know works. Right.
30:47
I think there is sometimes a struggle. I
30:50
don't know if this is a Western thing. I don't know
30:52
if this is an American thing. There's
30:54
this struggle with
30:56
reconciling reality
30:58
with sort of, I mean, I live
31:01
in Los Angeles, right?
31:03
And the
31:05
streets of Hollywood don't look like what it
31:07
looks like in the movies. And
31:09
I think we often... Because that's not a set. Exactly.
31:13
But we have been fed a visual
31:17
story on a set for so
31:20
long that we expect that. Yeah,
31:22
pristine sidewalks. Yeah. And
31:24
so when it comes to AI, when it comes to
31:26
robotics, I think that
31:28
maybe there is some cynicism that we see.
31:31
But I think overwhelmingly,
31:34
there's also
31:36
this frustration
31:38
that you often see with individuals
31:40
that things don't always work
31:43
the way that they expect them to, or that
31:45
we haven't sort of made more progress without really
31:47
taking the time to appreciate
31:49
the incredible amount of
31:51
progress that we have made. Yeah.
31:54
I've actually been talking with more
31:56
than a few academic friends about this. Because just
31:58
like
32:01
there's the pressure to demo or
32:03
die, right, in the corporate culture,
32:05
in academia there's this pressure to
32:07
publish or perish. We
32:09
love the alliteration. And with publishing,
32:12
you better have a result, right? And
32:16
if you don't have a result, or your system
32:18
doesn't perform as well as the last one, right?
32:20
You don't get to tell anyone, which
32:22
is kind of dumb. Right, yeah. The
32:24
negative results just, like, die. Yeah, right. And there are
32:26
folks who are fighting that trend. But
32:29
it's going to be a long and uphill battle.
32:32
The idea that we've been floating around for
32:34
robotics that is starting to get some traction
32:37
is that when you go to a robotics
32:39
conference, there's usually all these videos of the
32:41
cool thing that this lab got the robot
32:43
to do. We want to
32:45
open another track in which we do the blooper
32:48
reels, where we show all
32:50
the failed attempts. Yeah.
32:52
Because we can learn from that, right? And
32:55
also, it'll help people understand robotics
32:57
is hard, AI is hard, right?
33:00
Then in order to see the progress, you got to
33:02
see the failures too. And I
33:04
think making more of a
33:06
fuss about how hard it is and
33:09
when the failures happen would help with
33:12
being more transparent with the rest of
33:14
the world about where the technology actually
33:16
stands. Yeah, and I think
33:18
that that even spreads out. It's funny,
33:20
but I'm always trying to make these
33:23
parallels. Sometimes they don't, they're
33:25
not as parallel as they are in my mind.
33:27
But I think that's the same thing when we
33:29
talk about the scientific method. And really, in a
33:31
lot of ways, it's the same thing when we
33:33
talk about a humanistic approach
33:35
to politics and governance. It's
33:37
like if we can normalize
33:40
making mistakes, learning from mistakes, and improving
33:42
based on the knowledge we got when
33:45
we made that mistake, I think that
33:47
we will be less of a culture
33:49
of, you changed your mind, therefore I
33:51
don't trust you. It's
33:55
like this weird thing where we want our
33:57
leaders to be unwavering. It's
34:00
like, why is that a good thing?
34:03
Don't we want people who grow? Right.
34:05
Yeah. I think this idea of having
34:07
a learning culture is super
34:10
important. In Silicon Valley where I am right
34:12
now, there's this thing people say, that there's
34:14
a culture of failing fast and early. But
34:18
we forget the step of like, and then you learn. You
34:21
learn from the mistake. Ideally, you
34:23
share the lessons that you learned with others who
34:25
are working in a similar space so that you
34:27
save them from bashing their head against that same
34:30
wall later. You
34:33
prevent the Elizabeth Holmes of
34:35
the world because it's the
34:37
culture that creates this compulsion
34:40
to cheat and lie your
34:42
way into success. Because
34:46
of course, like if you're- All the pressure is
34:48
to lie. Yeah. Yeah.
34:52
I mean, obviously, when
34:55
somebody does something like that and commits fraud,
34:57
it is their fault that they committed fraud.
34:59
But they're not doing it in a vacuum.
35:01
They're doing it in a culture that fully
35:03
was asking them to do that. Totally. Totally.
35:05
Forced those decisions. Yeah. You
35:07
got to walk that fine line as someone
35:10
who's raising money for your startup. You're going
35:12
to tell your potential investors about things that
35:14
you're excited about for the future. But
35:17
in practice, it's really hard to promise that you're going
35:19
to get it done. Because you're doing something new. You're
35:22
actually doing research, quite
35:24
frankly. We don't know
35:26
what's going to happen, and that's why it's exciting, and
35:29
you want to bring them along for the ride. But when
35:32
you're in a culture where you need to make
35:34
promises so that investors think they're going to get
35:36
a return on their investment, you
35:38
might make those claims too strongly. Yeah.
35:42
It's so much to navigate. I
35:45
think we're living in a society now
35:47
where, I mean, thinking about you
35:50
and your role and your position,
35:53
we can no longer just do one thing
35:55
or be one thing. I think gone are
35:57
the days of our parents or our grandparents
35:59
who made widgets in the factory
36:01
and put the left side
36:03
of the widget on all day every day
36:06
and were the left-side expert, and then they
36:08
retired. Now
36:10
you have to know something about the marketing and you
36:12
have to know something about the sales and
36:15
you have to – it's very,
36:17
very difficult to not be interdisciplinary.
36:20
Yeah, I remember actually majoring in cognitive science.
36:22
My parents would often ask me like, what
36:25
is that? Is that really a thing? I
36:27
don't know, maybe you should pick a real major. You're
36:30
like, it's all the things. It's really
36:32
a thing, I swear. But
36:35
I do feel like maybe that was good practice
36:37
for this time when
36:39
like, yeah, you have to know more
36:41
than one thing. We don't get to only
36:43
be deep in one area because that just doesn't work. And
36:47
we can't put all our eggs in one basket in
36:49
that way either. I even think about
36:51
– I worked as a science journalist for years and years and
36:53
years. I finally decided to go back to school. I just finished
36:55
my PhD last year at 39 years old. And
37:00
when people – oh, sorry, what?
37:02
Oh, congrats. That's awesome. Oh, thanks.
37:04
That is quite the hurdle at
37:06
any age. I'm going to
37:09
stop and appreciate that. Yes,
37:11
you should. You know what it's like because you did it too.
37:14
And so, when
37:17
people go, why PhD and not MFT or
37:19
why PhD and not PsyD or why did
37:21
you choose the path that
37:23
you chose? And it's like because I want options.
37:26
I don't really know what I want to be when I grow
37:28
up. And so, if I want to do private practice, I can.
37:30
If I want to work in the hospital, I can. If I
37:32
want to get an academic position, I can. And
37:34
I think that it's the same sort of reasoning
37:38
with having a broad
37:40
skill set, also having
37:44
a focused area of expertise, but knowing that
37:46
you need to be somewhat light
37:48
on your feet, I think allows
37:51
you to do what it
37:54
is that you're doing in this field,
37:56
which is maybe –
37:59
I almost want to say it's like – bringing empathy
38:01
to the table. Yeah.
38:04
Is that a weird way to put it? That is a great way to
38:06
put it. You bring empathy to this field. Yeah,
38:08
I've actually, so I used to
38:10
teach human-computer interaction at the university.
38:13
And I remember I had one student one day
38:15
who came to me, and we were gathering feedback
38:18
on how's the class going for you. And the student
38:20
wrote to me and said, so
38:22
you're telling me that we need to have
38:24
empathy for our end users in order to
38:27
pursue this career path of user-centered design? And
38:29
I was like, yeah. And they're
38:31
like, I don't do that. So are you
38:33
telling me that I can't do this job? And I was like,
38:37
yeah, maybe not. You probably
38:39
won't do well. Yeah. You might want
38:41
to explore other career options, honestly.
38:43
Interesting thing to say. Yeah,
38:46
and I think if you're that self-aware, good for
38:48
you. But
38:51
you might not want to pursue a job
38:53
where it really, really helps to care about
38:55
what it's like to be another person and
38:57
try to understand their world and their worldview.
39:00
Wow. Yeah, it
39:02
matters a lot, right? Yeah. I
39:04
think I choose to work with
39:06
people who do have a lot of
39:08
empathy and are curious about other people
39:11
because I think that's how we do a better job of designing
39:13
things for other people. And
39:16
also, to be clear, I feel like it needs
39:18
to be said. Empathy is not a
39:20
gift. It's not something you are endowed with. It's
39:22
a skill. You learn it. You can be more
39:24
empathetic. You just got to practice. Yeah, you choose
39:26
to learn it. You choose to practice it. Yeah.
39:30
Oh, I love that. And I love that you not
39:33
just do it, but I think there's
39:36
a big difference between doing and making
39:38
the thing that you're doing very explicit.
39:41
So obviously, in your work,
39:43
you are bringing empathy to the table. But also,
39:46
when you are talking about your work,
39:48
when you are teaching, when you are
39:51
marketing, that is an explicit part of
39:53
the conversation, which I think
39:55
is necessary because I don't
39:57
think it's implied. I don't think people think about
39:59
that. Yeah, I think that's absolutely right. And
40:01
often, empathy gets pooh-poohed. It's like, oh, that's a
40:04
soft skill. You know, it'd be nice to have. I
40:06
don't know if you have to have it. But you have
40:08
to have it if you're in this career path. Otherwise,
40:11
you're not gonna make it. You're
40:13
gonna miss something important. You're gonna design the wrong
40:15
thing. And it's not
40:17
gonna end up well. Anatomy of an
40:20
ad. Subconsciously trigger emotions through
40:22
music. Perfect. Define
40:25
an opportunity. Imagine talking to millions
40:27
of people across the US, like I am
40:29
now. Identify a problem. Creating
40:31
an audio ad is time-consuming.
40:34
Offer a solution. Utilize cutting edge
40:36
AI. Imagine creating all that in
40:38
under 30 seconds. Well, we
40:41
did. To create this ad. To
40:43
learn more about AI in the audio
40:45
industry, download the white paper from audiostack.ai.
40:49
Welcome to Iowa, the best place to
40:51
start your next chapter. With
40:54
a campus that's right downtown, you're steps
40:56
away from all the best things in
40:58
life. Friends, food, music, Hawkeye
41:01
games, and so much more. Whether
41:04
you're in the audience or behind
41:06
the scenes, meeting a legend or
41:08
becoming one yourself, this
41:10
is where your story gets interesting.
41:14
This is Iowa. Learn
41:16
more at uiowa.edu. And
41:21
I think we can all think of examples
41:23
in our lives when we've experienced that with
41:25
technology. I'm facing one
41:27
right now in the setting where
41:29
I work, where they issued me
41:31
a computer. And I don't
41:34
like this computer because I was trained on
41:36
a different type of computer. And
41:39
the type of computer I was trained on
41:41
is very user friendly. And every day I'm
41:44
having to task switch between the two
41:46
and I want to throw it out the window. And
41:48
I don't want to start a war, so I'm not going to
41:50
name any brands. But I can guess. But
41:52
you can guess. And
41:55
that's such an interesting thing that
41:57
there's almost like a tribalism that
41:59
comes from certain user experiences, because
42:01
this is easy and that's complicated
42:03
and why would I do that
42:06
versus this? Relative, right. Right. It
42:08
depends on what you've got practice with, right? Mm-hmm.
42:12
Yeah. That is a
42:14
complicated one. That's fascinating. And also, why is
42:16
it in this field, and you mentioned it
42:18
at the beginning, that sometimes
42:21
you'll see, sometimes with engineers, but I
42:23
think you see this across the board,
42:25
you see it in all different aspects
42:27
of academic science as well, or even
42:29
just academics, that there's
42:31
a certain percentage of individuals
42:34
who relish the
42:37
idea of broadening the message and
42:39
saying, I want to make this
42:41
deeply accessible. And then there's other
42:43
people who feel somehow more special the
42:45
less accessible their experience is. Yes.
42:48
Like, I have my friends who are like, I only use Linux
42:51
machines. I'm like, okay. I have
42:53
heard those people, yes. Right. And
42:55
it's like, I'm very proud of you. Like, I don't... Right.
42:58
But cool, good for you. Like, this does not impress me. You
43:01
know? Yeah. Like, where does that
43:03
come from? I mean, I think tribalism is a great
43:05
way to describe that, right? You're trying to identify with
43:08
a group of people that you feel are special and
43:10
that you want to be a member of. And
43:13
part of tribalism is like, there's in-group and there's out-group,
43:15
so you got to have an out-group, right? And
43:18
it might as well be them, because they can't handle command
43:20
lines, right? Or they don't
43:22
like Ubuntu, my version, right? So
43:26
it makes us feel special, right? When we're
43:28
members of a club, whatever
43:30
lines you want to use to draw
43:32
them. Yeah. How
43:34
do you overcome that in your work, though? Because I
43:36
can imagine that sometimes you're actually facing
43:39
that in a corporate setting or you're
43:41
facing that in an engineering setting. And
43:44
like, it's your role, I would
43:47
guess, to try and convince individuals. Like,
43:49
that is not going to give us
43:51
the best bang for our buck. Like,
43:54
that's not going to reach people. Ah,
43:56
yeah. That's happened more than
43:59
a few times. I think,
44:01
so the best example I have of
44:03
where I think we made a little
44:05
bit of progress towards changing their minds
44:08
was back when I
44:10
was working at Willow Garage, we were working
44:12
on the Robot Operating System, which is ROS.
44:16
And there was this, in
44:19
open source software culture, there is
44:21
often the sense of like, well, if
44:23
you can't use it, then you're just too dumb and you
44:25
don't get to use it, right? It's
44:27
a little, you know, it's the tribe, right? And
44:30
you either belong to the tribe or you don't. And if
44:32
you can't, then we don't care, right? But
44:34
there's also some people on our team who
44:36
are very interested in, you know,
44:39
what they would call, say, democratizing robotics, right?
44:41
And like putting these capabilities in the hands
44:43
of more people. Those two
44:45
things are in conflict. And
44:47
so I started, you know, just talking to folks
44:49
walking around the parking lot, trying to understand where they're
44:52
coming from. And I
44:54
kept asking them like, okay, so who's this
44:56
for? Right? Just like we were talking about
44:58
earlier. And they, they would come around to
45:00
like, well, it'd be really cool if, you
45:03
know, the top PhD students, the top robotics
45:05
programs would use our stuff. But
45:08
they don't. And it's like, oh, that's interesting.
45:10
So it just so happens that
45:12
the university right next door to where we were was Stanford,
45:15
and they had one of the top, you know, PhD programs
45:17
in robotics. So I grabbed a bunch of friends I had
45:19
in that program and was like, hey, you want to come over
45:21
and try some new software? We have
45:23
cookies. And so they came over
45:25
for the cookies and stuck around
45:27
for the code too. And
45:30
we actually ran some user studies with them,
45:32
where we told them, okay, go ahead and
45:34
like, try to build a little robot model,
45:36
right and try to run it in the
45:38
simulator. And oh, my
45:40
gosh, it was so frustrating.
45:43
And our team who was working on the software
45:45
development for ROS were like, why can't they use
45:48
it? Like these guys are smart, these women are
45:50
smart, like, but they can't use our tool. And
45:52
so I think it
45:54
helped a little to say, like, okay, you're trying to get these people
45:57
to be able to do this thing. And no,
45:59
you don't get to stand next to them and show them how to do
46:01
it because you can't do that at scale. And
46:04
I think it helped us to build this practice
46:06
of like, let's bring in those people who we
46:08
think should be able to use it, let them
46:10
try it, get some feedback from them, and then
46:12
make it better for them over time, right, and
46:15
iterate. And that's just like basic,
46:17
you know, user-centered design 101.
46:21
But I think it was kind of newer for the
46:23
folks who were used to developing code for themselves, right,
46:25
and their friends. Yeah,
46:27
just bringing that to the table, I think
46:29
can start to make some headway in a
46:32
good direction. Absolutely. I mean, I'm reminded, and
46:34
I think I might have even mentioned this
46:36
on the last episode, but I just recently
46:38
watched this documentary about Reading Rainbow, which
46:42
I loved. I grew up on that.
46:44
And it's such
46:46
a lovely, like, you know, just,
46:48
yeah, it's a good documentary to
46:50
watch if you need a boost.
46:53
But there was a part in it where
46:55
the executive producers were talking about how like,
46:58
we're a bunch of adults trying to write
47:00
a show for children, and try and
47:02
come up with what children
47:04
would want, or what's too much for them, or
47:06
what's not enough for them. Like, you can't do
47:09
that unless you sit with the kids and you
47:11
focus group the show. Like,
47:14
we are adults writing for children,
47:16
we need the children to tell
47:18
us what's working. Totally play testing
47:20
it, right? It's such an
47:23
easy concept. But I think it
47:25
kind of goes back to this weird thing
47:27
where like, when you've busted your ass, and
47:29
you've got the expertise
47:32
and the accolades, then somehow you need to
47:34
prove to everybody else that like, well, I'm
47:36
smart enough to do this thing. And so
47:39
I want it to be like, only I'm the
47:41
key to this lock, like, only I'm
47:43
smart enough. But the truth of
47:45
the matter is, I don't want everything in
47:47
my daily life to require that I use
47:50
that level of cognition. I want most of
47:52
the things in my life to be really
47:54
easy. Right. You've got enough things to worry
47:56
about and think about and spend your cognitive
47:58
effort on. So it doesn't need to be.
48:00
Also, driving your car home shouldn't feel like
48:02
a massive effort at the end of the day. Right. Yeah.
48:05
Right. Why does my electronic
48:08
medical record software
48:12
make me feel like I'm taking an
48:14
exam every day? You are not alone
48:16
in that feeling. That is one almost
48:19
every doctor I know complains to me
48:21
about too. Yes,
48:23
there's a massive pain point in that industry. There
48:26
is. These are the
48:28
things, this kind of takes us, I love
48:30
this, like here towards the end of our
48:33
conversation, these are the
48:35
things that are fundamental to the work that
48:37
you do. I would love to hear
48:39
about what is robust
48:41
AI and what is it that you do
48:43
there? Sure. Robust AI started
48:45
as an AI company and we
48:48
kind of got forced into becoming
48:50
a robotics company because
48:53
we chose a problem to work on and
48:55
we chose a user group to care about.
48:57
And so those people are warehouse workers. When
49:00
you order stuff online, they're the people who
49:02
actually are physically going and finding the stuff
49:04
that you ordered, putting it on a cart,
49:06
loading it into a box and shipping it
49:08
out to you. And there
49:10
is a super high demand for that.
49:13
It's increasing, but nobody wants those jobs
49:15
and they're quitting and they're quitting at
49:17
ridiculous rates. These are
49:20
really, really hard jobs. It's rough, and
49:22
it can be rough on your body.
49:24
And to make it financially feasible, they're
49:26
probably not getting paid nearly enough. Right.
49:28
Especially when you've got supposedly free shipping.
49:32
So those margins are tiny, but
49:35
I think there's a really cool opportunity
49:37
there to make their lives better. And
49:40
the nice thing is that now their bosses
49:42
are very incentivized to make their lives better
49:45
because they're having such a hard time recruiting
49:47
people and they're having an even harder time
49:49
retaining them. And so
49:51
there's this beautiful alignment between like, well, the
49:53
bosses want people to be there and they
49:56
want them to stay. And
49:58
the users are really frustrated, but I think we can
50:00
actually make their lives better. And so at Robust,
50:03
we are working on a pushcart that
50:06
happens to have a robot inside. And
50:10
so this pushcart can drive itself around. It doesn't have
50:12
to be pushed around by a person. So you can
50:14
sort of, you know, it'll valet park itself if you
50:16
want it to. You can
50:18
call it over when you want to use it. But when
50:21
you actually want to show it what to do,
50:23
you grab it by its handlebars and push it
50:25
around. Right. And so it's a very familiar form
50:27
factor. And now,
50:30
you know, after chatting with you, now
50:32
I'm wondering, like, is this like the horse-drawn
50:34
cart? Maybe
50:37
we're at that stage, maybe we have more learning to do.
50:39
I'm sure we have more learning to do. Of course, right?
50:41
You can't skip ahead to the Cybertruck. Because then you're like,
50:43
what is that? I don't want that. Yeah,
50:46
yeah, that thing is sharp and pointy, man. Also, I
50:48
don't want to give it that much credit. That is
50:50
not skipping ahead. But yes. That
50:53
is a vision of the future. Exactly.
50:55
You take my point. Yes. This is
50:57
actually my vision. Yeah. So it's, I
51:00
mean, it's been super fun for me, like getting
51:02
to hang out with warehouse workers, getting to spend
51:04
time seeing how they actually do the problem solving.
51:06
And the coolest thing is, you
51:09
know, there's this sort of meme out there that's
51:11
like, we don't need no stinking humans. We're just
51:13
going to make a dark warehouse, right? And the
51:15
whole warehouse is a robot. And it turns out
51:17
that doesn't work very well. It's
51:21
the humans in the loop who make the whole stupid
51:23
thing run, right? And they figure out, like, well, actually,
51:25
you have to shim this one machine this way to
51:27
make it do its thing properly, right?
51:30
And those little tricks are
51:32
what makes the whole operation work or not
51:34
work. And so if you take
51:36
the opposite assumption of like, actually, we do need
51:39
those people, and we need to value
51:41
them, and we need to treat them better and
51:43
make more humane working conditions, I
51:45
think that's a better winning way
51:47
forward. And so we're sort of
51:49
betting on that hypothesis right now and trying to make
51:53
more capable push carts that make, you know, make
51:55
the load feel lighter, that make it easier to
51:57
find the thing you're looking for, and
52:00
make you walk fewer steps. Some
52:02
of these warehouse workers can walk like 30,000 steps or
52:04
more per day. And
52:07
like, you know, it's good to get some exercise, but
52:09
that's a lot. That's pretty good. Yeah, I
52:11
think the goal is what, 7,000 a day? Isn't
52:13
that what we're all looking for? Something like this. Yeah,
52:16
you don't need that much of a workout just at
52:18
work. And then it limits
52:20
who can actually participate in those jobs, right?
52:23
And so I think we have a pretty cool
52:26
chance to try to make these jobs better and
52:28
make them more accessible for
52:30
a broader range of people too, which would be
52:32
pretty cool. So the bar is very high for
52:34
design. There
52:37
are many different languages spoken in these warehouses,
52:39
and so we have to support all of
52:41
them, right? And I think, you
52:44
know, it's a, to me it's a
52:46
fun challenge to work on because if we don't
52:48
get the user-centered design right, then we've just failed
52:50
as a company. And yes,
52:53
the machine learning part of it matters.
52:55
Yes, the mechanical design really matters too.
52:58
And so this is a time when we
53:00
get to test our ability to work together
53:02
as an interdisciplinary team to work on a
53:04
real world problem, right? As opposed to, let's
53:07
make a robot and then figure out what it's for
53:09
later, right? Which is
53:12
kind of more typical in this industry.
53:15
You know, I'm curious, thinking
53:18
about the work that you're specifically doing
53:20
right now, I can
53:23
imagine that when you're talking to different
53:25
stakeholders, when you're talking to individuals who
53:27
are curious about this type
53:29
of progress, that a common question that
53:31
comes up is, okay, yes,
53:34
you're trying to make it so that the people
53:36
and the robots can work together seamlessly and blah,
53:38
blah, blah, blah, but isn't the goal ultimately to
53:40
do the dark warehouse? Like maybe we're just not
53:43
doing the dark warehouse because we're not there yet,
53:45
technically. Like how do you respond
53:48
to that question?
53:50
Yeah, I think of it as like,
53:52
you know, there's short term and long term. And right
53:54
now I think there's an obvious direction to
53:57
go for the robots in the short
54:00
term because if you look at
54:02
the fully autonomous systems, like, there's lots of startups
54:05
and other larger companies that are making the
54:07
dark warehouse bet, and
54:09
we're seeing how that's panning out, and it's
54:11
tough because
54:14
you need to be very sure that
54:16
the items that you're shipping are always going to be
54:18
the same items, right? If you build a big
54:20
warehouse robot and it's got totes that are
54:23
a certain size for carrying stuff, and then
54:25
suddenly now you're carrying bigger items
54:27
that don't fit in those totes, right? What
54:29
are you gonna do? And so they
54:31
tend to be very brittle solutions, and
54:34
then they end up having to, like, build yet another warehouse
54:36
where they handle all the other stuff that doesn't fit.
54:39
Right, so they're too narrow. Oh, that's
54:41
interesting. Yeah, like, it can work
54:43
if you're very sure that you're never going to change
54:45
what you're shipping, and you're pretty sure
54:47
you're never going to change your order profile.
54:49
So, you know, if you're in e-commerce,
54:51
like, holiday time is insane, right? They call
54:54
it peak season, and
54:56
being ready for that is really hard, right?
54:58
You've got to flex up and down
55:00
very quickly in response to whatever customers
55:02
are ordering, and that can
55:04
be hard. So I think it's really the
55:06
dynamic nature of that industry that makes it
55:08
super hard to have very,
55:10
like, brittle solutions. So
55:13
I would bet on, actually, like,
55:15
what happens now is people get it done, right?
55:19
And we just get it done better. And
55:21
those can be really great entry-level
55:23
jobs for folks who are entering a new
55:25
space, right? That they haven't worked in
55:27
before, or a country that they haven't worked
55:30
in before, and make opportunities for those people.
55:32
Yeah, yeah, yeah, we need that to have
55:34
a functioning economy and to have a functioning
55:36
social, you know, safety net. You know,
55:38
it's, I
55:40
guess here, sort of at the
55:42
end of the hour, this
55:44
then leads us to the big question
55:46
that's often kind of on the tip
55:48
of everybody's tongues. And not to get too
55:51
dystopian and not to get too dark, but, you
55:53
know, yes, you said that there
55:55
are those startups that are working on the dark warehouse, and
55:57
of course that's just a synecdoche, right? That's like an instance
56:00
of something larger. But often when
56:02
we talk about AI and robotics,
56:05
we go to that
56:07
extreme place of like
56:10
general intelligence or of
56:12
sort of the robots taking over,
56:14
you know, this like what happens
56:16
if and when. And you
56:19
have a realistic handle on
56:21
where we are now, kind of the,
56:23
I guess we
56:26
could say the way that the
56:28
future is progressing. I work
56:31
with a lot of people who either
56:33
consider themselves futurists or who
56:35
are just very like interested
56:37
and enamored with the
56:40
world of robotics and AI. And
56:42
I find a lot of them
56:44
to be like massive techno optimists.
56:48
I sometimes worry that I'm like the
56:50
wet blanket, like I'm a bit of
56:52
a cynic. And I want to really
56:54
net out in a place that
56:57
is authentic and realistic. And so I guess this
56:59
is my long way to ask kind of where
57:03
do you find your sensibilities
57:05
when it comes to what
57:07
a long picture might look like
57:09
and what the future, maybe
57:12
there's a difference between could hold versus
57:14
where you actually think the future is
57:16
headed. Yeah, I
57:19
think that got pounded into me in grad
57:21
school. But there's this notion of like, the
57:24
future isn't just this thing that's
57:26
coming at us, right? Especially with
57:28
technology, like we are literally deciding
57:31
what we want that future to
57:33
look like, because we're actively
57:35
participating in it. And
57:37
so I feel like, you
57:39
know, if that dystopian future comes to pass,
57:41
that's our fault, right? Because we're
57:44
choosing to take that path. And
57:46
I agree with you, there's a ton of techno
57:48
optimists out there. There's also techno pessimists, right? There's
57:51
lots of sci-fi that shows us that, you know,
57:53
if we push on this one dimension in the wrong
57:55
way, the world's going to end, right? And
57:58
I think those are good warnings. They're
58:01
good possible futures to consider.
58:03
But at the end of the
58:05
day, I feel like it's our
58:07
responsibility to be involved in inventing
58:10
the future that we actually want to live in. That's
58:13
my old mentor, Stu
58:16
Card, who I blame for saying
58:18
it, because it's stuck in my head forever. I
58:21
feel like if we don't take
58:24
action now to participate in the
58:26
design of that future, then
58:29
we only have ourselves to blame. Figuring
58:32
out how to make that participation more inclusive
58:34
of all the people who this is
58:36
going to impact is also
58:38
critical to a success
58:40
that is good for more
58:42
people than just the ones who invented
58:45
it at the beginning. Yeah. I think
58:47
that's
58:49
an important question in and of itself,
58:51
that this future is what
58:53
it is because of how we shape
58:56
it. There
58:59
are specifics here, and obviously I'm
59:02
not looking for some exhaustive list
59:04
here at the close of the
59:06
show. But obviously having more viewpoints,
59:08
more seats at the table, people
59:11
with different life experiences, really
59:14
having a lot of power in
59:16
the conversations is one fundamentally important
59:18
part of shaping that future. But
59:20
what are some other examples
59:23
of things that you think
59:25
are important to help shape
59:28
that future in a way
59:30
that is responsible and safe,
59:32
and I guess more humanistic?
59:35
Yeah. There is a
59:37
beautiful framework in one of
59:39
my favorite textbooks on human
59:41
factors and the design
59:43
of technologies. It's been updated,
59:45
thankfully. The original version of this was
59:48
called MABA-MABA, which stood for men
59:50
are better at and machines are better
59:52
at. Oh God. It's from the
59:55
70s. That has changed. Now, it's
59:57
like people are better at versus machines are
59:59
better at. Like, the things machines are better at
1:00:01
have changed too, right, in the last few decades. And
1:00:04
I think, you know, we're talking about, like, why are
1:00:07
we building things that are exactly like ourselves when we
1:00:09
know that we have so many limitations.
1:00:13
I think that a better
1:00:15
future could be one in
1:00:17
which we look for the complementary
1:00:19
skills and for the complementary strengths
1:00:22
that can help us to overcome
1:00:24
our limitations, right? So, you know, forgetting
1:00:26
things. Forgetting can be good sometimes,
1:00:29
but when I forget my keys at home,
1:00:31
that's not so great, right? And
1:00:34
so being able to figure out how
1:00:36
to design tools, including computational tools, right,
1:00:38
that help us to overcome our psychological
1:00:42
limitations, our physical limitations, right? We're seeing
1:00:44
robots being used for things like exploring
1:00:46
space and exploring the deep sea because,
1:00:48
quite frankly, our bodies couldn't handle that,
1:00:51
right? Exactly, yeah. And I think that
1:00:53
future to me is brighter
1:00:56
because it's first of all being self-reflective
1:00:59
about what we're bad at and
1:01:01
what we're good at, right, and then designing
1:01:03
for the rest of it, right? And
1:01:05
designing for the complementary skills, I
1:01:09
think, is a more productive way of
1:01:11
thinking about what's worth building, right?
1:01:13
I love that, and the future is
1:01:15
now, because we are already
1:01:17
doing this and we don't often stop to reflect on it.
1:01:20
Like, I do not know how
1:01:22
to get anywhere without my GPS, and I don't
1:01:24
know anybody's phone number because they're saved in my
1:01:26
phone. And that's a good thing because, yeah, that
1:01:28
is real estate I don't need taken
1:01:30
up. Yeah, you're free to
1:01:32
do other things now. Yeah, yeah,
1:01:35
I think that's such a
1:01:37
wonderful way to kind of close out this
1:01:40
discussion, but before we go, I guess I should
1:01:42
ask if there's anything that we didn't cover, or
1:01:44
if there's anything that you feel like, oh god,
1:01:47
no, we gotta make sure we... um...
1:01:50
Oh my gosh, there's so many things that we could talk about,
1:01:52
and I'd be happy to chat again anytime. Oh
1:01:56
gosh, that is a great question. I
1:02:00
think I would
1:02:02
not recommend that we do this right now. But
1:02:05
there is this question of, again,
1:02:10
from films like Her, this
1:02:12
question of our relationship with
1:02:14
these technologies and
1:02:17
how we make sense of them and what that means about
1:02:19
us. Right. And how
1:02:21
can we be humanistic even though that might
1:02:23
not be a human? Because, yeah, I do
1:02:26
worry about the sort of genocidal
1:02:28
playbook here, right? Like the
1:02:30
more that we treat something that is, quote,
1:02:32
unhuman or, quote, subhuman a certain
1:02:34
kind of way, the more that we can easily
1:02:37
translate that to our interactions with actual
1:02:39
humans. Oh, totally. A good friend of
1:02:41
mine named his daughter
1:02:44
Alexa just before Amazon
1:02:46
launched Alexa. And now she's in school
1:02:48
and that can be, you
1:02:53
know, it's unfortunate that
1:02:55
they picked the same name, right? Because now there's
1:02:57
all this baggage that comes with that name, right?
1:03:00
Like play music, Alexa, right? Like you're just commanding
1:03:02
and it doesn't need
1:03:04
to be that way. Right. So you're
1:03:06
right. How does that shape our understanding
1:03:08
of our interactions with
1:03:10
each other, right? And how we value
1:03:13
other people versus other kinds of
1:03:15
agents, right? That can feel social even
1:03:17
if deep down we rationally
1:03:19
know that they're not real
1:03:21
people, right? Right. Yeah.
1:03:24
And I guess my take is
1:03:26
I want to always aid on, or aid,
1:03:28
that's not the right word I'm looking
1:03:30
for. I want to err on the
1:03:32
side of kindness. Like even if it's
1:03:34
to a robot, because I feel like it's
1:03:37
like empathy, like we said, is a skill.
1:03:39
You just need to practice it all the
1:03:41
time. If I'm kind to my robots, I
1:03:43
know that I'm reinforcing just
1:03:45
being kind. Right. Right.
1:03:48
Yeah. And when I see people abusing robots, which does
1:03:50
happen pretty often, it kind of makes you wonder, like,
1:03:52
what are they like when they go home? Exactly.
1:03:55
Yeah. Yes.
1:03:58
Fascinating. Gosh. And like I said,
1:04:00
of course, there are so many more topics that
1:04:02
we could cover, but hopefully this was a good
1:04:05
kind of smorgasbord
1:04:07
of different things. And I'm hoping that
1:04:09
it could sort of whet some people's
1:04:11
appetites to learn more and to dig
1:04:13
a little bit deeper and just to
1:04:15
be a little bit maybe more mindful
1:04:19
in their engagements with technology.
1:04:21
I just can't thank you
1:04:23
enough, Leila, for A, the work that you do,
1:04:25
but B, for spending the time with us sharing
1:04:28
about it today. Thank you for spending the time
1:04:30
with me. These were amazing questions and
1:04:32
so many juicy topics to work on together. There's
1:04:34
a lot more that we still need to
1:04:36
do. Absolutely. And everyone listening, there's
1:04:38
more we need to do too. So I
1:04:41
just want to thank you for coming back
1:04:43
week after week. I'm really
1:04:45
looking forward to the next time we all get
1:04:47
together to talk nerdy. What
1:04:52
is the best university
1:04:55
ever? Welcome
1:04:57
to Iowa, where you can write your
1:04:59
own story. Choose from over 200 areas
1:05:01
of study, including
1:05:03
a dozen programs ranked in the top
1:05:06
10. Roll up your sleeves
1:05:08
and try something new. You never know
1:05:10
where it might take you. This
1:05:12
story is written, directed, and produced
1:05:14
by you. Learn
1:05:17
more at uiowa.edu. This
1:05:20
is the story of the one. As head of maintenance
1:05:22
at a concert hall, he knows
1:05:24
the show must always go on. That's
1:05:26
why he works behind the scenes, ensuring
1:05:28
every light is working, the HVAC
1:05:30
is humming, and his facility
1:05:32
shines. With Grainger's supplies
1:05:35
and solutions for every challenge he
1:05:37
faces, plus 24-7 customer support,
1:05:39
his venue never misses a
1:05:41
beat. Call, click Grainger.com, or just
1:05:43
stop by. Grainger, for the
1:05:45
ones who get it done.