Episode Transcript
0:06
Welcome to Embedded. I'm Elecia
0:08
White, alongside Christopher White. Our
0:10
guest this week is Marian Petre.
0:13
We're going to talk about how you
0:15
go from being an expert
0:17
to being more of an expert, or
0:20
maybe how to go from being a novice to being
0:22
an expert. That might be more useful to
0:23
more people. Hi, Marian. Welcome.
0:26
Hi. Thank you for inviting me. Could
0:30
you tell us about yourself as if we
0:32
met at, I don't know, a Strange
0:34
Loop conference?
0:36
Okay. Hi, my name is Marian.
0:38
I'm from the Open University. I
0:41
pick the brains of experts to try to figure
0:43
out what makes them expert.
0:46
That's a good elevator speech.
0:50
Do the brains get to stay in the experts? Yes.
0:56
Sadly, the brain picking is actually quite indirect.
1:00
I like how she prefaced that with sadly.
1:05
We're going to do lightning round where we ask
1:07
you short questions and we want short answers
1:09
and if we're behaving ourselves, we won't ask like
1:12
why and how and are you sure, at
1:14
least until we're done with lightning round. Are you ready?
1:16
I'm ready. What
1:18
do you think is the best first programming language
1:21
to learn? Any of them.
1:24
One that doesn't put you off. There
1:28
are languages that are designed for people
1:30
to walk in without having to get
1:31
crazy. So one is
1:34
Scratch by Mark
1:37
Guzdial and others. But
1:40
I think our entry level courses are in Python. I
1:43
learned Basic and
1:45
then I learned Machine Language. The
1:49
answer is as long as your first language
1:51
isn't your last language, it
1:54
doesn't necessarily do permanent damage.
1:58
Which is the best programming language? Overall?
2:01
Oh, come on. I know. That's
2:03
such a terrible question. I
2:06
decline to answer that one. I'll
2:08
give you an alternative, however, which
2:10
is that, empirically, one of
2:12
the single best indicators
2:13
of programming performance is how many
2:15
programming languages the developer
2:18
has touched. And
2:20
so the key is not one language, but many
2:22
languages, and to learn lessons from all of them.
2:26
And finally, I'm glad I learned AWK.
2:28
I have touched many, many programming languages, and
2:30
the lesson I've learned is I don't like programming.
2:33
No, no. Next
2:36
question. Let's
2:38
see. Do you
2:41
like to complete one project or start a dozen?
2:43
Yes.
2:47
I think the answer is both in balance.
2:50
I try to be
2:52
sort of actively focused on two projects at a time
2:55
so that I can play between them
2:57
and then keep a backlog
2:58
of all the other stuff that's in my head so I don't
3:00
lose it. Are there any
3:03
journals or magazines or YouTube
3:05
channels or conferences that
3:08
normal, everyday software developers should
3:10
follow to understand best practices?
3:12
I don't have a good answer to that. The
3:17
two industry conferences I've been to recently
3:19
that have been an absolute
3:20
joy from a developer's perspective
3:23
have been Strange Loop, which sadly
3:25
has just had its last episode,
3:29
and Joy of Coding in the Netherlands. I
3:32
would recommend both of them.
3:35
What
3:36
is your favorite fictiony robot?
3:40
That's hard.
3:43
Robby the Robot from Forbidden
3:45
Planet was a sort of meme from childhood.
3:47
We used to call him Blobby Robby.
3:49
More
3:52
recently, Mead, a sort
3:55
of robot spaceship cross, was entertaining
3:58
partly because
3:59
there's an implied ethos
4:02
in Mead.
4:05
But
4:06
yeah, I think Robby
4:09
is kind of the
4:11
iconic
4:13
robot image that everybody has.
4:16
What was Mead from? It's
4:18
a film.
4:19
Oh, okay. Let's go look for
4:21
that. I hadn't heard of
4:23
a robot film we haven't seen. Do
4:26
you have a tip everyone should know?
4:28
I have one from my very wise
4:31
mother. Never make a statement when you could ask
4:33
a question. It's a
4:35
piece of advice that has stood me in good stead over 30
4:37
years. Well, more
4:39
than 30 years. I'm
4:41
kind of surprised that wasn't in the form of a question.
4:47
I know, I know. There's a certain
4:49
irony to that. The Jeopardy
4:51
rule of life.
4:57
I saw your strange loop talk
4:59
on Greg Wilson's It Will Never Work in Theory site.
5:02
And this was the small version, although I have seen
5:04
the larger one now. And
5:06
it was about how experts think about errors.
5:09
Could you tell us a little bit about that?
5:13
I'm not quite sure what you
5:15
want to know about it. So, that
5:17
talk is one small slice
5:20
out of decades of research
5:23
on what makes experts expert.
5:27
Greg Wilson and
5:29
Mike Hoye's vision for
5:31
those
5:33
talks was a 10-minute
5:35
talk that would deliver something actionable
5:38
from research to
5:39
developers. And for me,
5:42
the attitude to error was a really nice nugget
5:45
to hand across that boundary.
5:50
It's also incredibly rich.
5:52
So, the whole notion is that
5:55
experts have a very different approach to
5:58
error when it arises than, say, people
6:00
in software factories. So instead
6:02
of, oh my god, there's a bug, swat it, get
6:05
rid of it. They pause and
6:07
they look at the error and they say, what's that about?
6:09
Is that as trivial as it seems or is it
6:11
part of an ecosystem, a collection of other things?
6:14
Is there something else going on that we haven't thought about? And
6:16
very often really important
6:19
insights about the software
6:21
come from paying attention to errors. And
6:25
in a way that fixes the
6:27
error, not fixes the blame. So
6:29
it's a very, very open attitude. And
6:32
that kind of embracing error
6:34
as opportunity is a
6:37
really, really useful part
6:39
of that expert mindset.
6:42
I like that a lot. And
6:46
figuring out how to help people become experts
6:48
is something I've been thinking a lot about lately.
6:52
How do you take people who
6:54
are excited and willing to
6:57
do more and to take classes
7:00
and help them get
7:03
over the hurdle of not even beginner
7:06
to novice or novice
7:09
to junior, but junior
7:12
to engineer and
7:13
engineer to senior. How
7:16
do you help
7:18
them become experts?
7:20
Well, I'll relate things that I've seen
7:22
in terms of the way the high performing teams bring
7:24
people on side.
7:27
Actually, first I'll tell a story about
7:30
one of the designers
7:30
that we studied. So I was doing work
7:32
with my colleague André van der Hoek at the University
7:35
of California, Irvine. And as
7:37
part of that, we recorded, or
7:40
he and one of his PhD students
7:42
at the time, recorded pairs of designers
7:44
working together on a design task.
7:49
And in all of the companies they went into for
7:51
these recordings, they asked for people who were really
7:53
their best designers so that we could get sample
7:56
material for researchers
7:58
to look at to try to understand what was in those dialogues.
8:01
And in one of the cases, one of the designers
8:04
was incredibly young. He wasn't the sort
8:06
of person that you'd expect them to have delivered
8:08
to us as their, you know, really
8:11
high performing designer. And so they
8:13
stopped afterward and spoke to him and
8:15
said, how did you get to
8:17
be here? And
8:20
his whole story was a story
8:22
of
8:26
asking questions. Every time there
8:28
was
8:28
something to do, he would pick the
8:30
problem he didn't know how to solve. He would find
8:32
something he hadn't done before.
8:38
He would navigate the design space, the
8:41
problem space in a different way because he wanted to be surprised.
8:44
So every project he worked on,
8:46
he focused on
8:48
whatever the design
8:50
component was that was most crucial.
8:54
He was trying to sort out
8:56
the shape of the solution before he started
8:58
engaging with coding. He
9:01
made lots of mistakes and he learned from the mistakes.
9:04
He sought out open source projects
9:06
that were in areas he wasn't familiar with. And
9:09
so what he did was he gave himself
9:11
a huge range of experience in
9:14
order to stretch himself and in order to give
9:17
him a body of material.
9:18
Now, not
9:19
just to engage with and
9:22
build his knowledge base, although that was certainly
9:24
part of it, but also to reflect on so
9:26
that he could look across the whole of that and
9:29
begin to understand what works, what doesn't
9:31
and why.
9:32
And I think
9:34
that's a big part of it, is that business
9:37
of looking for diverse experience,
9:40
reflecting on it, thinking hard
9:42
about what makes something better,
9:44
what makes one solution better than another solution
9:46
or what the trade-offs are. In
9:49
terms of you helping people, I've
9:52
always said that it's kind
9:54
of a meme that the education is in the dialogues.
9:58
There are points of engagement that are really,
10:00
really important, where somebody is
10:03
coming to terms with something and just needs to talk
10:05
about it to somebody else, needs to get it out
10:07
of their head, needs to compare
10:09
their experience to somebody else's experience. And
10:12
so creating an environment in which
10:14
it is
10:16
perfectly reasonable
10:16
to explore, in which it is
10:18
valued to learn and experiment
10:21
and make mistakes and
10:24
then figure out how to fix
10:26
them, and in which it's reasonable to have
10:28
conversations about that, is a
10:30
rich environment for developing
10:32
expertise. I want to go back
10:34
to what you were saying about this
10:37
particular person who explored
10:41
different ways of looking at things,
10:44
kind of walked their way through
10:47
not being closed into
10:50
one thing, kind of exploring different things. And
10:52
what you said about experts and
10:55
how you found that experts
10:58
tend to see bugs
11:01
or errors not as a problem but
11:03
as an opportunity. It's kind
11:05
of a paradox because that sounds like,
11:08
forgive me, in Zen Buddhism there's
11:10
a thing called beginner's mind. And it's a thing people
11:12
talk about. It sounds like maintaining beginner's
11:14
mind, which is somewhat paradoxical, you know,
11:17
if you say, oh, if you're a good expert, you
11:19
can get into the beginner's mindset. But
11:21
it sounds like that's sort of what you're talking about, being
11:23
able to approach things without
11:26
a lot of judgment to start with and see, okay,
11:29
where does this, where does this lead me?
11:31
What is this telling me that maybe my
11:34
years of experience are locking me into
11:36
a solution that maybe I'm missing
11:38
something?
11:40
That's a beautiful recap.
11:42
One of the things that's really interesting that experts do,
11:45
particularly when they're in a sticky problem
11:47
and they're
11:47
struggling to handle all the
11:50
different constraints, is
11:52
they relax constraints. Either they
11:54
simplify the problem by taking away some of
11:56
the design features they expect
11:58
to put in.
11:59
and eventually they focus on really
12:02
the heart of the problem. Or they just
12:04
say things. I literally have sat in a design session
12:07
where,
12:08
as part of the discussion,
12:10
the expert in the room said,
12:13
let's pretend there's no gravity. How does
12:15
that change
12:15
what we would do? And
12:18
it's clear that
12:19
we haven't figured out how to eliminate
12:20
gravity.
12:21
But by reducing that constraint, they
12:24
really broadened the potential solution
12:26
space and they got insight
12:28
into the thing that was hanging them up.
12:31
And so that whole
12:33
sense of the inquiring mind, that
12:36
whole business of continually re-exploring
12:39
the nature of the problem and the understanding
12:42
of what's needed is part
12:44
of that designer mindset that
12:47
distinguishes these high performers.
12:49
How do you decide who's
12:51
a high performer? Well,
12:56
the algorithm I
12:56
used for
12:59
finding experts when I went into organizations
13:01
to study them was I would ask
13:04
everybody who the expert was
13:07
and go to that person. That person
13:09
always pointed to someone else. None
13:11
of these people admits to being
13:13
an expert because they're all too aware
13:15
of their limitations.
13:19
But the reality is that there's very
13:21
often one
13:22
person or a couple of people
13:24
who are the people who sit quietly in the room and
13:27
then when they open their mouths to ask a question,
13:30
that question changes the discussion.
13:34
They're very often people with very
13:36
deep knowledge and knowledge that
13:38
is – I keep talking about
13:41
garbage can memories, experts with garbage can
13:43
memories that are indexed so that they
13:45
can go back in time and understand what
13:48
they did on previous similar projects,
13:50
what the constraints were on those projects, how they
13:52
made the decisions and then they can reapply them. But
13:55
they're also the people who
13:58
are able to see
13:59
opportunities, to
14:01
see past
14:03
methods, to see past
14:05
imposed constraints, to see past current
14:08
technological
14:08
obstacles
14:09
to find alternatives.
14:12
One of the things that I found fascinating
14:15
after graduating from college was
14:17
the emergence of
14:21
the design patterns.
14:23
I
14:25
guess
14:27
Gestalt? I don't know, whatever. Some word I
14:29
don't know.
14:29
I can't do much with German words. Yeah,
14:32
elevator. I don't know.
14:34
And it felt
14:37
like that made people more
14:40
expert because they got a wider
14:42
variety of problems and relatively
14:45
canned solutions and understanding of where
14:49
things fit. Do people
14:51
just need to read a couple books
14:53
to become an expert?
14:55
No, no, no, no. No,
14:58
no, no,
14:58
no. I mean, expertise is something that takes
15:00
time
15:00
to acquire. It takes time. It
15:02
takes experience. It takes reflection.
15:05
The point is that people can step onto the
15:07
path toward expertise
15:10
by adopting an appropriate mindset and
15:13
then over time build up that knowledge
15:15
base and build up that experience
15:17
base and build up that body of reflection.
15:20
So the nice thing about patterns, as you say, is that
15:22
it was encapsulating known
15:25
solutions to familiar problems in
15:27
ways that could be reapplied. And it abstracted
15:30
them. But ideally, if patterns
15:32
are well expressed, they also give examples
15:35
of how they work, where to look for how
15:37
they're applied. And that's really,
15:39
really powerful
15:42
as long as that doesn't become the end
15:44
instead of the means. So patterns
15:46
as a tool are incredibly
15:48
powerful. And they do
15:51
allow people to walk
15:53
through things they might not have thought about themselves
15:57
and to consider alternatives that they might not have
15:59
generated.
16:00
One of the things that I try
16:02
to convince people to do is to stop
16:04
coding and to start thinking.
16:07
I try to convince people to stop coding. Oh,
16:09
no. Yeah, but your version
16:10
is never code again. My version
16:13
is to think first. And
16:15
I go through times and I'm like, okay, write out pseudocode.
16:18
Okay, write out pictures. But it's all
16:20
really just don't type
16:22
until you've thought about it. And you brought it up with
16:25
your expert. How do you...
16:26
There's so much coding without
16:29
design that
16:30
happens, I think
16:32
is what you're saying, right? How do we convince
16:34
people to stop typing?
16:37
Well, part of it is cultural.
16:40
So there are huge differences
16:43
in software development cultures
16:45
that actually have a real impact on
16:47
how people behave.
16:49
So
16:50
in going to places like Strange Loop and Joy of
16:52
Coding, I met all these developers who are reflective
16:55
practitioners who are clearly out there trying
16:57
to learn things, trying to think about things who were
17:00
open to conversations. Okay, going
17:02
into companies isn't necessarily the same thing
17:04
because in a lot of companies, they
17:07
are driven by KPIs, key
17:10
performance indicators. They're driven by how
17:12
many pull
17:15
requests you submit. The metrics
17:18
drive things. That's
17:20
actually a culture that is
17:23
really problematic for developing
17:26
generalist expertise, for
17:28
developing problem
17:31
solving expertise and the design
17:35
mindset. Because the design mindset
17:38
isn't about
17:39
those quick fixes.
17:41
It's about understanding that investment
17:43
in the underlying issues
17:46
pays off in terms of speed
17:48
of delivery and quality
17:51
of delivery overall. And
17:54
so it may look like a much
17:56
bigger upfront investment, but
17:59
when I go to a company, I talk about high performing teams,
18:01
talking about teams that deliver basically
18:04
under budget,
18:07
whose software works first time without bad smells. And
18:10
they do that repeatedly. And part
18:12
of that has to do with understanding and
18:14
owning that development process
18:17
in a way that isn't driven by management
18:22
indicators but is actually driven by the
18:25
engineering needs. So
18:28
I'm very, very sympathetic to the
18:30
position you're in. I mean,
18:31
you saying think first. Yes, absolutely.
18:35
And it's very interesting to watch
18:40
what these high performers
18:41
do.
18:43
They certainly think first. They certainly
18:45
sketch things. They also sit
18:48
in a corner with a pad of paper
18:49
and their legs crossed and wave
18:50
a pen in the air without writing anything down a lot of
18:52
the time. But they
18:54
think hard before they start committing
18:56
things to code. That doesn't mean they don't ever
19:01
make annotations in code. But
19:05
what these people do is they design
19:09
solutions and then they begin
19:11
to implement those solutions in a pseudo
19:13
code that is an amalgam
19:16
of lots of different notations and lots of different
19:18
ways of presenting things. And
19:20
when they've got the
19:23
shape of things worked out, then
19:26
they move on to code because that part's easy.
19:28
That part's pretty straightforward. I
19:30
mean, sometimes they'll just type code because it's faster
19:33
because they know what they're doing.
19:34
I suppose we should be distinguishing between
19:37
normal
19:38
solutions and radical
19:40
solutions as the literature would have it. So there are
19:43
certain things that are just a very familiar problem. This
19:45
is another edition of what we already know how to do.
19:47
We will just do it. We will use
19:50
a good solution for a known problem. I'm not
19:52
going to make a diagram
19:53
for string copy. I know how to write
19:55
that code.
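[For illustration, a minimal sketch of the kind of known solution to a known problem being described here: a bounded string copy in C. The function name and the truncation behavior are illustrative assumptions, not from the episode.]

    #include <stddef.h>

    /* Copy src into dst, truncating to fit, always NUL-terminating.
       A familiar problem with a familiar shape: no diagram needed. */
    void string_copy(char *dst, const char *src, size_t dst_size) {
        if (dst == NULL || src == NULL || dst_size == 0)
            return;                      /* nothing sensible to do */
        size_t i = 0;
        while (i + 1 < dst_size && src[i] != '\0') {
            dst[i] = src[i];
            i++;
        }
        dst[i] = '\0';
    }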
19:56
That's right. And you just go to that. But
19:58
for things that are new... they
20:00
think first, as you say, and the
20:03
ways they articulate. I mean, I did
20:05
some studies about
20:08
representations for ideas capture,
20:11
and I did things like, I wandered around after people,
20:13
I pulled the envelopes out of their
20:15
bins that they'd been sketching things on, I took pictures
20:17
of their whiteboards, I kept track of the things that they
20:20
were writing and
20:22
drawing when they
20:24
were shaping the
20:26
solution in their dialogues.
20:29
And in their own minds.
20:31
And those were incredibly diverse representations.
20:34
They had lots of different things in them. There
20:36
were little bits of code, but there were also diagrams,
20:39
there were also bits of analysis,
20:41
there were also descriptions
20:44
of things. The code
20:46
that they wrote might have been in more
20:48
than
20:48
one language because they were borrowing
20:50
from known
20:52
elements here
20:52
or there. And
20:55
that's pretty typical.
20:56
Do you think this has changed over time? As
20:59
you described this, I'm thinking back to my early experiences
21:02
as a software developer in like the late
21:04
90s, early 2000s, where I was, I
21:06
think I was surrounded by people like this. This
21:09
is how we did things. And there
21:11
were discussions and we spent a lot of
21:13
time. I remember, when I got
21:15
assignments to do things, I would spend
21:17
a month writing
21:19
documents and stuff on how I was gonna approach
21:21
it before I wrote any code, and I was encouraged to.
21:25
I don't feel like that, at least in my recent
21:27
experiences, that that's
21:29
the way things
21:31
are working most of the time. You were
21:33
hired into a company and surrounded by fantastic
21:36
experts. I know, I know, I know. But I went to different companies after
21:38
that.
21:39
And it all went downhill, didn't it? No,
21:41
no. Well, yeah, I know. I
21:47
mean, over the time that I've been studying
21:49
developers,
21:51
the scale and nature of the
21:53
software they've been building has changed.
21:55
And so a lot of what's going on
21:57
is people are no longer just
22:00
building a piece of greenfield software,
22:02
they're building something into a product line or indeed
22:04
designing a product line where they're
22:06
borrowing from all sorts of
22:08
existing software and libraries, they're
22:10
compiling things, they're composing things. In
22:14
some ways, parts of what
22:16
people are doing has changed in
22:18
proportion if not in kind.
22:20
But in terms
22:22
of the problem solving,
22:25
there's a real need
22:27
to go back and think about the concepts,
22:30
to think about the nature of the design,
22:33
to focus on the essence
22:36
before getting bogged down in the detail.
22:40
And I think,
22:41
so one of the things I do think is that
22:44
in some places I've studied,
22:46
where they've shifted into
22:50
some variation of agile practices,
22:54
they don't always make the
22:56
design documentation, the functionality
22:59
documentation as visible
23:01
and as prominent as it would have been in traditional
23:04
teams.
23:04
So they're using a lot of the same methods because
23:07
I don't actually think
23:07
there's a disjunction between traditional
23:10
software development and agile development. I think agile
23:13
just
23:14
highlighted
23:16
certain effective practices
23:19
so that people could adopt
23:21
them in a coherent way. But
23:24
there are some interesting questions about where
23:28
some of those diagrams,
23:31
sketches, early notes go in
23:33
that process. And
23:36
they don't necessarily show up on
23:38
the wall or on the whiteboard.
23:41
And so it may be that all of this is still happening,
23:43
but it's not as publicly visible within
23:46
the team. It's
23:48
not visible to the whole team in the way that it might have
23:50
been.
23:51
I also think that the dispersal
23:53
of developers has had an impact.
23:59
Geographically? Yes,
24:02
physical dispersion. So one of the places I studied
24:05
for a long time.
24:06
They had,
24:07
each developer had a cubicle,
24:11
or rather, they each had an office. But
24:13
the top half of the walls
24:15
were glass, and they would use the glass
24:17
walls as whiteboards.
24:20
And so even though they were developing individually,
24:22
were working on their components
24:24
individually, when they sketched
24:26
something on the
24:27
whiteboard,
24:28
people could look across and see what they were doing.
24:31
And I saw a number of dialogues that happened simply
24:33
because somebody came running
24:34
down the corridor, knocked on the door, and said, I just saw this,
24:36
hang on a minute.
24:37
Or indeed just stood in his
24:39
or her own office and drew
24:41
an alternative.
24:43
And they had a kind
24:45
of dialogue through the windows. And
24:48
I think it's harder now, or it's less
24:51
spontaneous now, if people
24:53
are working in separate offices
24:56
and they haven't found an alternative, a
24:59
replacement for that kind of impromptu
25:03
interaction,
25:04
to do that kind of explicit
25:07
sharing. So
25:08
open offices versus cubicles
25:11
versus closed-door offices
25:15
Either way, is there research that says
25:18
one is better than the other? Um,
25:21
there's only one right answer here,
25:23
by the way. I don't
25:27
have an answer to that. I do
25:29
believe that that research exists. I don't
25:31
have it in front of me, but somebody
25:34
I know cited it religiously to
25:36
management every time they tried to institute open offices.
25:38
Then
25:39
I always wonder about the quality
25:41
of the studies and all of that How
25:43
do you make a good study that
25:46
looks at software development? Do you give fake
25:48
problems? Do you integrate into
25:51
a team for five years? How
25:53
do you study these things? Yeah?
25:55
Okay, there isn't a single "here's
25:57
how you do a good study" answer.
25:59
There are
26:01
lots of ways to do studies.
26:03
So a lot of the work that I've done, for example,
26:05
has been me
26:06
going to teams and sitting
26:08
with them and
26:09
watching what they do, or alternatively
26:11
interviewing developers, and
26:13
then they show me what they do, where they give
26:15
me, or
26:16
send me examples of
26:18
their materials. We also
26:20
do experiments where we have much more focused
26:22
questions, and we ask developers
26:24
to do
26:26
designated tasks and
26:28
then compare across
26:30
different places. The key to all
26:32
of it is that the
26:35
way that you design a study depends on the question that you
26:37
want to answer.
26:39
So
26:41
there isn't a single one. The
26:43
way that you marry research, so okay,
26:46
so I do a lot of work with PhD students
26:48
teaching them the craft skills of research. And
26:51
one of the fundamentals we have is about
26:54
research design; we call it the one,
26:56
two, three of research design. Number one, what
26:58
question:
27:00
what's your question, and
27:02
why does it matter, what does an answer look like?
27:05
That's number one. What's the question? Number two,
27:08
what evidence would satisfy you in answering
27:11
that question?
27:12
And then number three is what technique
27:15
would deliver that evidence?
27:16
So how to design a good study
27:19
is
27:20
figure out what you want to know and what knowing
27:22
would look like. So
27:25
if I want to understand
27:27
where innovation
27:28
comes from in
27:31
development teams, I will probably
27:33
start by looking at an innovative team and
27:36
watching for
27:38
any instances of innovation.
27:41
I'll probably then take
27:43
that information and
27:44
go talk to all of those developers and
27:47
ask them questions about how they perceive innovation
27:49
and
27:51
how they see it arising and whether the things I've
27:53
identified are the things that they would
27:54
identify.
27:57
Based on that, I might then go to a number
27:59
of other
28:00
companies and I'd be looking for things like
28:02
differently organized companies, differently organized
28:04
teams, different domains because
28:06
what I'm trying to get is a representative
28:09
sample but working
28:11
at that intensity limits
28:14
the number of places that you can go to study and
28:16
so I'm not going to
28:18
be talking about statistically significant results
28:21
if I observe five teams in
28:24
five different companies.
28:25
And so the real answer
28:28
is that we use a collection
28:30
of different methods over time that
28:33
look at the question
28:35
from different angles and using different
28:37
evidence and then we reflect
28:40
across all of that evidence to try to understand
28:42
the underlying phenomenon.
28:45
Once we've understood it well enough, we can
28:46
articulate what we think is going on and then we can
28:49
go test whether that's true,
28:51
whether we can find
28:52
anything that contradicts that
28:54
theory of how things work.
28:58
But it isn't simple.
29:00
So there's a lot of ethnographically informed
29:02
work where people are sitting in companies
29:04
watching things as they happen naturally.
29:08
There are
29:09
what we call field studies where there might
29:11
be some intervention
29:13
where, for example,
29:15
we might ask people to do a particular, well,
29:17
the example that I gave where we were looking at pairs
29:19
of developers solving a particular
29:21
design brief at the whiteboard.
29:24
I mean, we went and filmed them in their
29:26
companies but it was our design brief.
29:29
And so we could look
29:30
at
29:31
all those pairs, we could look across all the pairs
29:34
to see how their behavior was similar
29:36
and how it differed. But arguably,
29:39
I think we had about 10
29:43
videos at the end. That's a very small number
29:45
to
29:46
be representative of the whole of software development
29:49
or the range of design styles that
29:52
are out in industry. And so
29:55
it's not necessarily
29:57
simple. There are a lot
29:58
of people doing
29:59
survey work or doing
30:02
tasks on things like
30:04
Mechanical Turk. For that
30:06
you need a very, very specific question. You
30:09
need a pretty good idea of what it is that you're asking
30:11
about or you end up with a
30:14
lot of examples of very weak evidence
30:18
and so on and so on. So there are lots of different
30:20
ways to do it and it actually requires
30:22
thinking
30:24
over time but
30:27
it also depends what
30:28
kind of answer you need. Sometimes
30:30
you just want a
30:33
you know
30:33
finger-in-the-air kind of answer. Is there
30:37
any reason to think there's a difference between
30:39
these two
30:39
methods? Let's have a quick look at them. Or
30:42
does this work at all? A demonstration,
30:44
a really simple demonstration
30:46
of concept might be a very
30:48
very limited kind of study
30:49
so it comes down to that
30:51
match between what you want to know
30:54
and
30:54
then
30:55
how you choose to find it out.
30:57
These, they seem
30:59
like psychology design studies,
31:03
as opposed to computer science where you're looking
31:06
at data. Okay, sorry,
31:09
that came out really badly, didn't it? Where you're looking
31:11
at data, like that wasn't data. Where you're looking
31:13
at numeric data.
31:16
But the thing that you're asking about, if I'm
31:18
talking about the nature of expertise, I am talking
31:20
about human behavior. One
31:23
of the reasons that computing is such an interesting
31:25
domain in
31:26
which to do research is
31:28
because software is limited
31:31
basically by our imaginations.
31:33
Whatever we can imagine,
31:36
we can probably build over
31:38
time and
31:40
so
31:41
but the key is the
31:44
human imagination, the human ability
31:46
to
31:47
effect
31:50
the designs that are in their minds
31:53
and so for me anything that we
31:55
do
31:57
the software that we use is
32:00
going to be written by people, or
32:03
maybe
32:04
a collaboration between people and machines. Let's
32:06
just go with people.
32:07
It's going to
32:09
be read by people, and
32:11
importantly, then operated by machines. And
32:20
ultimately it's going to operate in a human
32:24
socio-technical
32:24
world. And
32:27
so there's an awful lot of, I mean there
32:29
are lots of systems that are
32:30
very much oriented to technology. But
32:33
even the ones that seem like
32:37
the human part of it is irrelevant,
32:39
it turns out that it isn't. So for example,
32:42
one of
32:44
my informants works in
32:46
embedded software in the automotive
32:48
industry. And there
32:50
are examples there where the
32:53
software worked absolutely
32:55
to spec,
32:57
but
32:58
what it didn't take into account,
33:00
it worked very well with the car. What
33:03
it forgot was that there was a human driver in the car. And
33:06
so for example, there were automation
33:10
supports of braking systems
33:12
that caused fatalities simply
33:14
because
33:15
once
33:17
that automation was
33:17
invoked, it surprised
33:21
the driver.
33:22
And the driver was unprepared for
33:24
what happened with the vehicle. The vehicle
33:26
is now behaving in a different way.
33:29
And so I don't think
33:32
that the separation between looking
33:35
at
33:35
how people think about
33:37
software
33:38
and how software operates on the
33:40
machine should be absolute. There's actually
33:43
a very, very important relationship between them. I
33:45
mean it may be that different people want to focus on different
33:47
parts of that arc, but
33:50
they're intimately related whether
33:53
we like it or not. You've
33:54
talked about experts and
33:56
about high performing teams. They
33:59
aren't the same.
34:04
How are they the same and how are they different?
34:07
So typically
34:09
high-performing teams have an
34:11
expert as part of them
34:13
or someone with expertise
34:18
but it doesn't mean that everybody on the team is an expert
34:22
and one of the things that is really interesting
34:24
about how
34:27
these teams operate is that they're embedding
34:31
first of all they're embedding this designerly
34:33
mindset in what they do and
34:36
they're reinforcing it with the
34:38
development culture that they have and the dialogues
34:41
that they have across the team and
34:45
so what they're doing all the time is helping
34:48
everybody to be the best they can be. So
34:51
for example
34:52
experts don't
34:53
make fewer errors than
34:55
non-experts. They make just
34:57
as many, if not more, but they have much
34:59
better safety nets for catching
35:01
errors and one of the things that
35:03
high-performing teams do is
35:07
include in their culture practices
35:10
that mean that there's a really good safety net for error
35:13
because
35:14
code isn't owned by one person code is
35:17
owned by the team and people look across and
35:19
help each other and do things together and so the
35:21
likelihood that they will find errors that arise
35:24
is very very high that they'll find them early is much
35:26
higher and therefore that they'll be able to address
35:28
things while it's
35:30
in production not
35:33
when it's out in the world
35:35
and so a lot of the
35:38
business about high-performing teams
35:40
is having enough
35:43
expertise and having that
35:45
really well-founded culture
35:48
that embodies this
35:51
designerly mindset that embodies
35:53
reflection that embodies a
35:55
learning culture that includes
35:57
the kinds of checks and balances and the kinds of practices
36:00
practices that help them catch
36:01
errors, but also
36:04
help them think beyond
36:06
individual strengths.
36:07
And so one of the things that characterizes
36:09
high performing teams is very often it
36:12
feels like it is greater than the sum
36:14
of its parts. Christopher recently
36:17
mentioned
36:19
something where you heard that
36:22
the goal or one of the primary
36:24
duties of a senior
36:26
software engineer should be to create
36:29
more senior software engineers.
36:31
Yeah,
36:32
I like that.
36:33
I don't know how many companies
36:35
I have been in where that's actually part
36:38
of the company culture. And maybe it's because
36:40
I've done a lot of startups.
36:42
Big companies have that culture
36:44
more than small startups. I
36:46
would say that's probably true.
36:48
I mean, startups, they have a job, they don't
36:50
have a lot of money, they've got to get from here to there.
36:53
But even big companies, I
36:56
don't hear that being a goal
36:58
anymore.
37:00
They do just as many layoffs as anyone else.
37:02
Well, I mean, yeah, I'm not sure
37:04
a layoff is related to that. That's
37:08
usually a higher corporate directive than
37:10
the development culture of your junior people.
37:12
It's a measure of loyalty.
37:14
Oh, sure.
37:15
Okay, question. I have an actual question.
37:17
I'm getting to. Is it my
37:20
responsibility to be
37:23
curious
37:24
and go to conferences and
37:26
read books and think about things? Or
37:29
is it my employer's responsibility
37:31
to give me the time and tools to do that? Again,
37:34
my answer would be yes.
37:37
Yeah, I mean, the
37:39
key is
37:40
individuals have to want to
37:42
improve, they have to want to embrace things.
37:45
But also companies need
37:47
to be
37:49
canny about what creates
37:52
an effective culture and about how
37:54
they invest in their people
37:56
so that their people can deliver the
37:58
best
37:58
products possible.
38:00
And it's been very interesting
38:03
working with different
38:05
people. For example, there's
38:08
one
38:09
developer I spoke to
38:11
who actually wanted to talk to me because he had been working
38:13
with a company for a long time that had a terrific
38:15
company culture, one where they were very
38:17
interested in everybody getting
38:20
better, everybody developing and learning and having
38:22
opportunities. And then something
38:24
changed in the company and it went stale.
38:27
Why? The vice president? They lost
38:29
that ethos. And so he
38:32
left because he wasn't happy
38:34
anymore and he went to a new company and his question
38:37
to me was, okay, I'm now in
38:39
charge of 20 developers, how do I
38:41
not make that mistake? And
38:43
that's a really interesting one. I mean,
38:46
there has to be
38:48
a commitment
38:50
all the way through to
38:53
understanding.
38:55
So
38:56
let me back off a little bit. One of
38:58
the things that I was
38:59
asked at Strange Loop 2022 by a number
39:01
of the people I talked to was
39:03
to investigate what we ended up referring to as
39:05
invisible work, the kinds of
39:07
tasks that developers do that
39:09
are really important and
39:12
that add value to the software but are not
39:14
recognized by management, by
39:16
promotion and so on.
39:21
And so I spent the last year
39:24
interviewing people and trying to characterize
39:26
invisible work and trying
39:29
to get evidence
39:31
about the financial
39:33
gains of
39:37
addressing invisible work, about making space
39:39
for things like refactoring code,
39:41
building tools, learning
39:44
new methods, having postmortems
39:47
on projects to reflect on what worked
39:49
and what didn't and so on.
39:53
And
39:54
I think that's part of it, is
39:56
understanding. One of the
39:59
things that I heard at Strange
40:01
Loop this year was people referring
40:04
to what are almost two different groups, the developers
40:07
and the MBAs.
40:09
And when there's a disjunction
40:11
between the engineering and the management,
40:15
then you get this drift
40:18
into error by proxy
40:21
or error by KPI and
40:24
all sorts of things that don't value
40:26
the investment activities that
40:29
pay off actually surprisingly
40:31
soon in terms of
40:32
product improvements and hence
40:36
potentially in terms of the bottom line for the company. So
40:43
there's a dialogue that has to go on. And
40:45
I think that every engineering
40:48
manager who can straddle,
40:51
who can speak,
40:52
who understands the engineering but who can speak
40:54
management speak, who
40:56
stands up for
40:58
invisible work, who presents
41:00
evidence to
41:02
management, who is less
41:04
well versed in the engineering, of what the benefits
41:07
are
41:07
of having a learning culture, of having
41:09
reflection, of playing
41:12
with alternatives and so
41:14
on. They need to do that because
41:16
that information
41:20
has to pass both ways so that everybody
41:22
understands where the value lies. And
41:26
that comes up over and over and over again. Very
41:29
often, so one of my
41:31
colleagues, Irum Rauf, did some work
41:33
with freelance developers to try to understand
41:36
their attitude to secure coding.
41:38
And the biggest determinant that
41:40
she found
41:42
was
41:43
whether anybody was willing to pay for them to
41:45
do secure coding.
41:48
So true. As a freelancer,
41:51
I can confirm your findings.
41:56
I start out with, it should be secure.
41:58
And they're like, we need it a little faster.
42:00
Then like, okay, it should still be secure,
42:03
but we need it to be cheaper. I'm like,
42:05
no, at this point, I can't. Right.
42:08
And so the whole point is, it behooves
42:10
us to try to articulate
42:11
both the cost of not
42:13
doing that and the benefits of doing that in
42:16
terms of the kinds of outcomes that
42:19
those clients can hear and understand.
42:22
Okay, I have a question that I've been sitting on for
42:24
the past 20 minutes. It's been developing
42:26
in my mind. You
42:28
aren't just studying this stuff for fun. Presumably,
42:32
I assume it is
42:34
fun, but you're not just doing it out of the goodness of your heart.
42:38
You develop findings about the
42:41
way people work and things
42:43
that work, things that don't, the
42:46
properties of well formed, high
42:48
performing teams and high performing people. What
42:51
I see in software development a lot, and
42:53
I've seen a lot of companies,
42:56
that's not necessarily a good thing for me, but
43:00
is that
43:02
some things trickle out
43:04
of
43:05
formal study of software development.
43:08
And they disseminate
43:10
through people over time,
43:13
and they become folklore. And so in many
43:15
companies, you end up with this culture of, well, this is
43:17
the way we do things. And it has
43:19
this piece from Agile and this piece from something else, and
43:22
this piece that I read in a book somewhere. And
43:24
it's this mishmash of folklore. And
43:26
they develop a culture out of that. And usually,
43:28
it kind of sort of works, but it is
43:30
not what you're talking about in terms of a well
43:33
formed, well considered
43:36
way of working. How do
43:38
you, not you personally, but how
43:40
does academia, how
43:43
do these studies bridge the gap
43:45
between, okay, we have
43:47
this information, how does that get? Actionable.
43:51
Not just actionable. How do you convince
43:54
people to take a look at this and
43:57
make changes? How
44:00
do you get into companies and say, hey, look
44:02
at this is what we found. This kind of office
44:05
works, this doesn't. This kind of design
44:07
works or design method works,
44:10
this doesn't. I
44:12
just don't see a lot of times that people
44:15
are paying attention to this stuff.
44:18
And it bothers me.
44:20
Well, that's an interesting one. And
44:23
it's an accurate
44:25
observation. So it's
44:28
an irony to me that it has taken me as
44:30
long as it has to get into
44:33
communities like Strange Loop
44:35
in
44:37
order to have that conversation. Because those are
44:39
the places where
44:41
there are
44:43
lots of people with their ears open.
44:46
And because they are, I'm
44:49
sure that there are numerous high
44:51
performers there. They're the ones who have the
44:53
position in their companies to
44:55
take the information back
45:00
and make change.
45:01
And it's interesting
45:04
because the reason
45:07
I got invited to Strangeloop was
45:09
because of the It Will Never Work in Theory live
45:13
sessions. And Greg Wilson,
45:16
for as long as I've known him, which is decades,
45:18
has been trying to get research
45:21
results communicated to industry in
45:23
ways that industry can hear.
45:27
And the It Will Never Work in Theory blog
45:29
was one of the efforts that he made
45:31
to do that. It was fantastic.
45:34
But then he kind of felt like people weren't reading the blog. So then he
45:36
went to this 10-minute
45:38
format,
45:41
give me something that's actionable that
45:43
you found in your research in 10 minutes.
45:46
And that means
45:48
make it clear, make it pithy, leave out most of
45:50
the evidence, don't do the academic thing.
45:54
And that, in terms of my research,
45:56
that's probably had more traction
45:58
than anything else
45:59
that's happened in my career, that 10
46:02
minute video. That's what got me
46:04
into Strange Loop. That's what got me into Joy
46:06
of Coding. That's what got me onto your
46:08
podcast. And I think
46:11
there is a real gap. It is a really hard thing
46:13
to do. And it isn't helped because,
46:16
so I want to make very clear, as
46:18
an academic,
46:19
I see myself as a mirror.
46:22
All I'm trying to do is to understand
46:25
and reflect
46:25
the nature of expertise
46:27
that already exists. I'm not here
46:29
to tell anybody how to behave
46:31
based on what I think. I
46:33
don't think I'm right. I think you guys are right.
46:36
And all I'm trying to do is to distill
46:40
the wisdom that has arisen out of
46:42
all of this, this observation
46:45
over time. However,
46:47
there are academics who think they know better. Yes,
46:51
I've met some. And it's
46:53
not helpful because what happens then is
46:56
there's an assumption. So there've
46:58
been a lot of initiatives that had a lot of good
47:00
in them, where the initiative failed
47:02
because it came with too much
47:06
evangelism or too much
47:09
policing.
47:11
So one of the things that I see
47:13
in high performing teams is they're always
47:15
paying attention to new developments,
47:18
things that are becoming
47:19
new ideas.
47:20
It is very rare that they
47:22
read something, they find something in some
47:25
prototype that an academic has developed and
47:27
then want to buy that
47:29
prototype and use it in their
47:32
company. What they'll do instead is
47:34
they'll say, Oh, that's a cool idea. Let's take that
47:36
idea and reapply that in our work.
47:39
And unfortunately, there are a lot
47:41
of academics who don't understand that that's a perfectly
47:44
legitimate
47:45
form of adoption.
47:48
So I think that part
47:49
of the problem is that the language of
47:52
academia is very different from the language of industry.
47:56
The talk that I would give about, you
47:58
know, how
47:59
experts respond to error in academia
48:02
would have to be very different
48:03
from the one I gave at Strange Loop,
48:05
because the one at Strange Loop focused on concepts
48:07
and examples, whereas
48:10
the one in academia would have
48:12
to concentrate on the evidence base
48:15
and on the relationship
48:15
to existing literature. And
48:18
so the forms differ. What
48:20
we really need is more people who can connect, who can
48:24
broker that dialogue,
48:26
who can make the translation step from
48:30
research to industry
48:32
or from industry to research.
48:35
The other part that's actually quite
48:37
difficult is that the time frames are very different.
48:40
Industry wants stuff now, very quickly,
48:43
and academia works at a relatively
48:45
measured pace.
48:47
On the flip side, industry
48:50
can be very set in its ways and
48:53
conservative and stubborn
48:55
about making changes, especially
48:58
when money is involved.
49:00
Because I think like taking the open office example
49:03
that Elecia loves to cite, I
49:05
think there's tons of research out there that says open
49:07
offices are terrible for everyone. Everybody
49:10
I've ever worked with in an open
49:12
office hates it and says how
49:14
they'd love to get rid of it. But it's
49:16
cheap. And so that's
49:19
the really tough thing is okay,
49:22
yes, here are the best practices, you'll do
49:24
better, you'll save money. But over
49:26
here is somebody with a real estate
49:29
balance sheet, and
49:31
they can't make that jump in
49:34
time frame, from,
49:37
you know, if I do this over five
49:39
years, I'll end up better off.
49:40
But you need somebody to explain that
49:43
you don't buy plastic screws because they don't
49:45
last long. You wouldn't even
49:47
consider it. Yeah, I know, don't
49:49
treat your developers like plastic, like
49:51
they're cogs. And then
49:54
they won't leave and you won't spend 30% of your time
49:57
interviewing new people wondering why you can't hire
50:00
anyone.
50:01
That's right. So if
50:02
you have, if you can find a relevant
50:04
piece of evidence where relevant
50:07
is determined in terms of their
50:10
value system, right?
50:12
That's how you make a change in
50:15
practice. So yeah, you
50:17
want the statistics that says our
50:19
turnover rate has increased by this much
50:22
since we went to open plan. And
50:24
look, we lost
50:25
three of the people who were core to
50:27
our business.
50:29
And if we can reframe
50:33
our observations about what works and what doesn't in terms
50:36
of the values of these
50:38
different stakeholders, that's what we have to do.
50:40
We need to be able to speak everybody's language.
50:43
So back to the experts talk, which
50:45
I do understand is only
50:46
part of your research and you have other
50:48
books that I probably should be mentioning and all of that.
50:51
But no, that's
50:51
the one.
50:53
There's the 10 minute version
50:56
on Never Work in Theory. There's
50:58
the 45 minute to an hour long
51:00
one that is from Strange
51:02
Loop. And I'll link both in the show notes. Thank
51:05
you.
51:06
One of the things from
51:08
the shorter version that really,
51:12
really
51:13
hit me as something actionable that
51:15
I could start doing is pair
51:18
programming. And the
51:20
reason I never, I mean, I've
51:22
done pair programming in the past. I've
51:24
done it with
51:25
people. Can I pause you? Yeah.
51:28
Do you mean pair programming or pair debugging? Right.
51:31
That was actually part of it. Yes. Pair
51:34
debugging is what you're recommending. And
51:36
I've done both. I've
51:39
done pair programming and
51:42
pair debugging with one person
51:44
who was remote. And
51:46
basically, my contemporary,
51:48
we had a lot of the same skills,
51:50
but not a lot of the same knowledge. And
51:53
we became really good friends and had a really
51:56
fun time doing things together.
52:00
but pair debugging, especially
52:02
when the skill sets are different.
52:05
So that the expert has to explain
52:07
what's happening and therefore has to
52:09
articulate it and therefore has to think about it. And
52:13
the less expert, the more
52:15
junior person is hearing this thought
52:18
pattern and looking at the code and
52:20
probably feeling like they're not contributing anything,
52:23
but gaining experience, both in design
52:26
and development, as well as implementation. Why
52:29
doesn't everybody do this? Why haven't I
52:31
been doing this?
52:33
I love pair debugging. It's fun.
52:35
I know. And yet even you and
52:37
I who are in the same building, often
52:40
working on the same project, don't
52:42
always manage to do it.
52:44
So there's, because of the agile
52:46
movement, there's a lot of research on pair programming,
52:49
particularly with student
52:51
programmers.
52:52
And there are real advantages with students
52:55
to pair programming
52:55
in terms of just the kinds of dialogues
52:58
that you've articulated. What
53:02
I see much more often in the teams that
53:04
I study is I see very little pair programming,
53:07
but I see routine pair debugging. And
53:11
I saw that even well before agile
53:13
was articulated. And
53:15
it's because of what they're doing. There
53:18
are key things that happen with pair
53:20
debugging. So you've already, you've
53:22
already explained
53:24
it, that
53:26
you get the dialogue between somebody who sees
53:29
further and somebody who's just
53:31
handling a bug at the moment,
53:35
but it's,
53:36
it's a really good way to make
53:39
sure that for
53:39
example, more members of
53:41
the team are familiar with the code base, to
53:44
get people to look across each other's shoulders,
53:46
to get
53:46
new perspectives on things, to
53:49
pick things up that might've been missed if
53:52
there's only one person going
53:54
over and over and over it.
53:57
To start dialogues about other things.
53:59
Yeah, and it's a very, very powerful
54:02
mechanism. And as I say, I see it spontaneously
54:05
in almost all of the high
54:07
performing teams. In fact, there's
54:09
one company that I studied where they were using
54:12
pair debugging as an
54:14
onboarding
54:15
process.
54:16
So what they did was they provided
54:19
selected pull requests to the new person.
54:23
The pull requests were distributed across the code
54:26
base. And it meant not
54:28
only did they trawl
54:30
through the different parts of the code base, but they
54:32
also then sat down with somebody
54:35
else on the team who was the expert in
54:37
that part of the code base, or the most knowledgeable
54:40
about that part of the code base. And so they met the
54:42
team as well as meeting the code.
54:45
And in the course of that, it built their confidence,
54:47
because they were doing useful work
54:50
while they were also becoming part
54:52
of this bigger picture. And
54:55
I thought that was a brilliant way of
54:58
strategic way of using pair debugging.
55:00
Some of the features of pair debugging come
55:03
up with rubber duck debugging. When
55:05
your, when your secondary
55:07
person, or maybe primary, is
55:10
a stuffed animal. But
55:13
you don't get that knowledge
55:15
transfer.
55:16
Yeah, so what rubber ducking gives you is it gives you
55:19
that externalization.
55:21
And very often just saying
55:24
something out loud that you're thinking about
55:28
basically causes people
55:29
to articulate assumptions,
55:32
to stumble over things that
55:34
their mind just skips across and so
55:36
on. So there's there's real value
55:39
in
55:40
rubber ducking. But as you say, what you don't get
55:42
is the exchange.
55:44
And you don't get the laughter necessarily. Yeah.
55:47
And I think laughing is actually, I mean,
55:49
I guess you asked,
55:52
as part of talking about doing
55:54
the show, what would
55:57
I want someone like you to research? And
55:59
I think the key is
56:02
how important are laughter, amusement,
56:04
jokes, and stories in the development
56:06
of code.
56:08
I mean, if I can make my
56:11
file
56:12
seem almost like a story, like,
56:15
okay, here's where it begins, and this
56:17
is what it does, and this is
56:20
what you need to know in my comments,
56:22
and I can make it feel like it's an actionable
56:26
story, I feel
56:29
like it's better code, in part because it's easier
56:31
to read. But
56:34
that's the study I want, is does laughter
56:37
make for better code? Okay,
56:39
can I make writing noises now? Sure, sure. Giggling
56:44
and programming,
56:45
together at last.
56:46
I mean, it's interesting to me, I always wanted
56:49
to bug the coffee machine at the places
56:52
I was studying, because it was interesting
56:53
how many
56:55
insights
56:57
happened at
56:59
the place where people got coffee together. They'd bump into
57:01
each other, they'd have a few worries, they'd exchange
57:03
a joke or something, and they could just ask
57:05
a question.
57:06
And that happened over and over, and some places have
57:09
embedded that.
57:10
So, for example, I was at
57:12
Mozilla in Toronto,
57:14
and
57:14
their kitchen is amazing.
57:18
There's so much work that happens as people pass
57:20
through the kitchen, and listen
57:22
to other people's conversations, chime in here,
57:24
chime in there, exchange information. It's
57:26
all very brisk,
57:29
but it's incredibly powerful. And
57:31
I think part
57:34
of that is, I
57:36
had a student named
57:39
Martha Hawes, whose
57:42
doctoral research was on student teams, but student
57:44
teams that were spread
57:46
across the world. And in those teams, again, I
57:49
don't want to overgeneralize for students
57:51
to professionals,
57:56
but
57:57
the teams that spent a higher proportion of their early
57:59
time,
57:59
particularly socializing
58:02
and cracking jokes, actually were
58:05
also the teams that had better outcomes at the end because
58:07
they learned in that time they
58:09
built trust, they had awareness of each other,
58:11
and
58:12
they found it easier to ask questions.
58:15
That's the part I'm absolutely terrible
58:18
about. I
58:20
remember it was my second or third job.
58:23
I think it was, oh, HP Labs. So like
58:26
my second,
58:27
third job.
58:28
And I didn't know enough to be useful,
58:31
and I spent my day at my desk trying
58:33
to understand what was going on. And
58:36
I skipped lunch because I was trying
58:37
so hard to understand.
58:39
And
58:41
my boss came by and said, you're doing
58:44
it wrong. You
58:46
need to go to lunch. You need to understand
58:48
these people more than the technology.
58:51
And I was so shocked because it went against
58:53
everything I believed in. I
58:55
just had to learn all of the information. And
58:58
there was John saying, no, no,
59:01
no, no, stop reading the manual
59:04
and go have lunch with these people. I like
59:06
John. It was weird. And
59:09
it's still not something I'm good at.
59:12
But it's part of it. It's part of what you're trying
59:14
to understand as you're trying to figure out how the
59:16
software will serve them.
59:18
Wow. They served
59:20
the wasps and the tuna sandwich when
59:22
I finally did show up. You
59:27
have several books, and we
59:29
are almost out of time. So can
59:32
you give me a speed rundown of your
59:35
books?
59:36
The only one I think I'd like to give
59:38
the rundown of is Software Design Decoded,
59:41
which is basically 30 years
59:44
of
59:45
empirical research distilled into 66 paragraphs
59:48
with illustrations.
59:51
It's a collection of
59:53
insights.
59:54
Everything in the book is grounded in empirical studies. There's
59:56
a website associated with the book that has a lot
59:58
of the bibliography
1:00:00
for the evidence
1:00:02
that we built on.
1:00:04
It's a bit
1:00:06
of a Marmite book. The people who understand
1:00:08
what it is love it, and the people who actually
1:00:10
want a how-to book hate it.
1:00:13
But
1:00:15
it does capture a lot of the elements
1:00:18
of this design mindset,
1:00:20
this
1:00:21
collection
1:00:22
of knowledge,
1:00:24
reflection,
1:00:26
and culture
1:00:28
that builds
1:00:30
into expertise.
1:00:34
It was
1:00:35
actually, it's an interesting one because
1:00:37
it really did take 30
1:00:38
years to get to the point where I could
1:00:41
co-author a book like that.
1:00:43
I liked the part about sketching best,
1:00:46
you know, about externalizing thoughts and sketching
1:00:48
the problems and the solutions. We've talked some about
1:00:50
that. But you also had one about how experts
1:00:53
design elegant abstractions.
1:00:56
And the paragraph starts, while all developers
1:00:59
create abstractions, experts design
1:01:01
them. A good abstraction makes
1:01:03
evident what is important, what it does,
1:01:05
and how it does it. How
1:01:09
do you design a study for that?
1:01:12
Okay.
1:01:14
Well, I certainly haven't designed
1:01:15
a study to watch somebody create
1:01:18
abstractions. That's
1:01:20
the kind of emergent observation that happens
1:01:23
over time over lots of studies, where
1:01:25
you collect the examples as they arrive, and
1:01:28
then over time make sense of them. That
1:01:32
insight
1:01:33
aligns
1:01:34
with another insight, which is about focusing on the
1:01:36
essence. And I know my colleague
1:01:39
Andre van der Hoek, when he teaches his class on software
1:01:41
design, one of the insights from
1:01:43
the book that he really stresses with the students is
1:01:45
to focus on the essence because it's really easy
1:01:48
for people to sit down and immediately code
1:01:50
the bits that they know how to code,
1:01:51
which is almost never
1:01:53
the part that's the hard part,
1:01:55
or that's the crux of the problem, or is
1:01:57
the defining
1:01:57
element
1:01:58
of the solution. And
1:02:01
so all of this business
1:02:03
about abstractions starts
1:02:05
a little
1:02:05
earlier. It's about learning to ask the right
1:02:07
questions. It's about directing
1:02:10
attention to the heart
1:02:12
of the problem. What's the essence?
1:02:14
What's the core challenge or issue? Are there analogies
1:02:17
in other domains? What can we learn from them? What
1:02:19
are the dissonances among the
1:02:22
alternatives I can think of to address
1:02:24
this thing? And
1:02:28
I have to credit Mary Schalder who often
1:02:30
talks about insight through attending
1:02:33
to dissonance and
1:02:36
its relationship to innovation. So
1:02:38
if we have
1:02:40
lots of examples
1:02:42
or use cases, what do they have
1:02:44
in common or how do they differ? What's
1:02:46
most important about them? What's the
1:02:48
thing we can't do without? And very
1:02:50
often in the course of stripping back
1:02:53
to that essence,
1:02:54
experts
1:02:58
are identifying the thing that they have to
1:03:00
express. And that's
1:03:01
the first step. Expressing
1:03:04
that as a good abstraction is something that they
1:03:06
learn over time. I don't know that
1:03:08
I would know how to teach somebody to make
1:03:11
good abstractions. But the whole
1:03:13
notion of trying to strip
1:03:15
away detail, trying
1:03:18
to find the essential bits, trying
1:03:20
to ask the right questions is,
1:03:23
I think, a mindset
1:03:26
and a set of practices that
1:03:28
people can learn.
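[One hedged illustration, not from the book, of what such an abstraction might look like in code: a small debouncer for a noisy digital input. The class name signals what is important, the docstring says what it does, and the short body shows how. All names here are invented for the sketch.]

import time

class DebouncedInput:
    """Report a stable value for a noisy digital input.

    Essence: a raw reading only becomes the reported state after it
    has held steady for hold_s seconds.
    """

    def __init__(self, read_raw, hold_s=0.05):
        self._read_raw = read_raw          # callable returning the raw bool
        self._hold_s = hold_s
        self._candidate = read_raw()
        self._since = time.monotonic()
        self.state = self._candidate

    def poll(self):
        """Call periodically; returns the debounced state."""
        raw = self._read_raw()
        now = time.monotonic()
        if raw != self._candidate:
            self._candidate = raw          # input changed: restart the clock
            self._since = now
        elif now - self._since >= self._hold_s:
            self.state = self._candidate   # stable long enough: commit it
        return self.state

[Used as button = DebouncedInput(read_gpio), with button.poll() called from the main loop, where read_gpio is some hypothetical raw-read function; the caller never sees the bounce logic.]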
1:03:30
It does seem a little hard to learn from this
1:03:32
book. I mean, your expert
1:03:35
sounds like a wizard or a perfect
1:03:37
person. And some of these things, I
1:03:40
think I do them. Maybe I'm an expert.
1:03:43
Maybe I just
1:03:44
think I'm, what is it, Cronin
1:03:47
Duggar?
1:03:47
Dunning Kruger. Thanks.
1:03:51
So the response we've had from the people who use
1:03:53
the book is that, if they use
1:03:56
it,
1:03:59
first of all, they read it
1:03:59
and they recognize some of it. And
1:04:02
sometimes they recognize things and say, Oh, I used
1:04:04
to do that, and I forgot about that. Yeah, I'll just start doing
1:04:06
that again. And so they use it as a kind
1:04:09
of a way of refreshing themselves. Now,
1:04:11
when they come across things they don't recognize, they say, what's
1:04:13
that about? I need to understand that. So
1:04:16
well,
1:04:19
in terms of
1:04:21
starting from scratch, I
1:04:23
think almost everybody that we know
1:04:25
who uses the book will focus on one thing at
1:04:27
a time. So for example,
1:04:30
there's one group that, while
1:04:33
they were doing their stand-up meetings, would pick out
1:04:35
one insight per stand-up meeting, and
1:04:38
talk about it a little bit and kind of hold it
1:04:40
in mind.
1:04:44
Because it was just a way to kind of do a refresh
1:04:47
on
1:04:48
ways of thinking: things that are useful,
1:04:50
practices
1:04:51
we might have forgotten, insights
1:04:53
we might have forgotten. In terms
1:04:55
of
1:04:57
people starting
1:04:59
out and trying to build expertise: again,
1:05:01
the
1:05:03
book is a means for reflection.
1:05:05
You don't have to try to embed 66
1:05:08
things.
1:05:09
I mean, someone spoke to me about the book being
1:05:11
like 66 mantras. And it was like, Oh, that's too
1:05:13
many mantras.
1:05:14
That's just too many. And the point is,
1:05:16
you don't have to do everything at once. You do one
1:05:18
thing, and when that makes
1:05:21
sense to
1:05:21
you and becomes
1:05:23
much more part of your practice, you can move
1:05:25
to something else.
1:05:28
Andre and I are currently trying to
1:05:32
do the exposition of the mindset as
1:05:34
a kind of path to
1:05:36
learning it, learning
1:05:37
to acquire that mindset. So we're
1:05:39
trying to do the how-to book that's the
1:05:41
longer version of this, but we've been at it
1:05:43
for 10 years; it's going to take a while.
1:05:48
But the sketching step is interesting
1:05:50
to me because I started in
1:05:52
this realm,
1:05:54
because my driving interest
1:05:56
was the relationship between language and thought. Computing
1:06:01
gave me an incredible arena in which
1:06:03
to investigate that relationship,
1:06:06
partly because there were artificial languages that
1:06:09
we designed ourselves. But
1:06:11
I've ended up over the years not just looking
1:06:14
at programming
1:06:14
languages and pseudocode,
1:06:16
but also at what people sketch.
1:06:19
There's a lot of research about
1:06:21
the importance of sketching and design, not just in software
1:06:24
design, but across design domains. And
1:06:27
there's a lot of research about the value of multiple
1:06:29
modalities in creativity and
1:06:32
swapping between the visual and the textual
1:06:34
and so on. And it's
1:06:36
just very interesting to me.
1:06:39
Sketching is one of the
1:06:41
insights, one of the ways that we have into the kinds
1:06:43
of mental imagery that people are using, the
1:06:45
kinds of ways that people are thinking internally
1:06:48
about
1:06:48
problems. And there
1:06:51
isn't a one-to-one correspondence by any means,
1:06:53
but we're
1:06:55
always trying to... I've spent a very long
1:06:57
time, I
1:06:58
haven't finished, trying to draw out
1:07:00
what it is that these
1:07:02
innovators, these designers, are doing in their minds
1:07:05
that allows them to encompass
1:07:08
incredibly complex problems in
1:07:11
many cases and to keep
1:07:13
all the balls in the air and
1:07:16
to find these
1:07:19
lean, elegant
1:07:21
routes to a solution.
1:07:24
One of the things that's very interesting, I
1:07:26
did a study at one point where I asked people
1:07:28
to work on a
1:07:29
problem they currently had and
1:07:32
I just sat and watched them. And as they, in most
1:07:36
cases, literally sat on a chair
1:07:38
with an empty pad of paper and a pen
1:07:40
or pencil in their hand and waved the pencil around
1:07:42
in the air and never wrote anything on the piece of paper,
1:07:45
I would intervene with questions. I would interrupt
1:07:47
them to ask them, I don't know, if they smelled
1:07:49
something or what colour it was, that
1:07:53
kind of thing. It was very interesting that
1:07:55
very
1:07:56
often, as I was watching
1:07:58
people, I could see them thinking...
1:07:59
I could see them sketching in the air, and
1:08:02
very often they would drop the pad, say, excuse
1:08:04
me a minute, run down the corridor to
1:08:06
talk to a couple
1:08:08
of colleagues, and then they'd all stand around a whiteboard,
1:08:11
and that designer would
1:08:13
start drawing something. And very often,
1:08:16
it was a conceptual
1:08:18
sketch that captured the essence
1:08:20
of a solution, that captured the solution
1:08:23
to one of the big obstacles
1:08:26
in that design space. And
1:08:29
then very often, that sketch
1:08:32
became an icon
1:08:35
in their design, and they would return to the sketch,
1:08:37
and they would interrogate the sketch, and they would
1:08:39
redraw the sketch, and they would challenge the
1:08:41
sketch on a regular basis. This
1:08:44
whole business about externalizing thought, we
1:08:47
talked about the rubber ducking, that is
1:08:50
about externalizing thought, verbally,
1:08:55
but
1:08:55
sketching externalizes thought, usually
1:08:59
visually, or in mixed media. And again,
1:09:02
it's a way to make
1:09:04
things explicit, and to allow
1:09:06
other members of the team to interrogate
1:09:09
the ideas, and
1:09:12
to have really helpful
1:09:14
critical dialogues about what's going on.
1:09:17
There's a lot of sketching.
1:09:21
Man, it's been really good to chat with you. Do
1:09:23
you have any thoughts you'd like to leave us with?
1:09:26
Probably the main
1:09:28
one is that if anyone has
1:09:30
a topic they want researched,
1:09:33
I would invite them to get in touch
1:09:35
with me.
1:09:38
And
1:09:39
if
1:09:40
there's a response to any of this, I'd love to hear
1:09:42
it. That's kind of funny,
1:09:44
because there's a whole section in
1:09:46
the outline that I did not get to, with
1:09:49
our Patreon listeners asking
1:09:51
questions about, do coding
1:09:54
standards actually increase readability, and
1:09:56
which trends are headed towards unsustainable
1:09:59
futures, and... Do requirements
1:10:01
and specifications really make projects
1:10:04
more likely to ship on time?
1:10:05
So you've already got a bunch
1:10:07
of those, but I'm sure there'll be more.
1:10:09
Well, I'd be very interested in a conversation
1:10:12
about that stuff, because one of
1:10:15
the patterns that happens a lot in terms
1:10:17
of tools (and I'm using the word
1:10:19
tool very, very broadly in terms of notations,
1:10:22
in terms of modeling, in terms of development
1:10:25
tools) is
1:10:27
that most tools are built
1:10:29
by someone to solve a problem that
1:10:31
person has, right? And some of the best
1:10:34
tools in software development evolved
1:10:37
that way. But
1:10:39
what happens that
1:10:42
ossifies things, that makes them stale
1:10:44
or less effective than they could be,
1:10:47
is this notion that
1:10:50
the tool has to be used in a
1:10:52
very particular way. And
1:10:54
that's
1:10:57
because one of the things that's
1:11:00
a barrier to adoption is that
1:11:02
if your tool has great ideas in it,
1:11:05
but it works in a way that doesn't fit well
1:11:07
into the
1:11:10
development culture that exists in my team,
1:11:13
why would I want to change what's working
1:11:16
in my team to adopt the tool? So instead
1:11:18
I'll just adopt the idea of the tool. And
1:11:22
the failure to recognize that that's a
1:11:24
really valid form of adoption is problematic
1:11:27
to me.
1:11:29
And there are lots of examples
1:11:31
in terms of things like modeling languages,
1:11:35
things like
1:11:38
specification routes and
1:11:40
so on, where
1:11:42
what ultimately happens with a lot of the tools
1:11:46
that
1:11:46
people create is that once the big adoption
1:11:49
pump
1:11:50
goes,
1:11:52
people will select the parts of
1:11:54
that tool that work for them and continue to
1:11:56
use those and throw the rest away.
1:13:59
ways experts think. Thanks,
1:14:02
Marianne. This was a really interesting conversation.
1:14:05
Well, thanks, both of you. It's been fun.
1:14:08
Thank you to Christopher for producing and co-hosting.
1:14:11
Thank you to our Patreon listener, Slack Group,
1:14:13
for their many suggestions on research
1:14:15
that Marianne should do. And
1:14:17
thank you for listening. You can always
1:14:19
contact us at show@embedded.fm
1:14:22
or at the contact link on embedded.fm. We
1:14:24
will forward things to Marianne, of course. And
1:14:27
now a quote to leave you with from Software Design
1:14:29
Decoded: 66 Ways Experts
1:14:31
Think. Experts
1:14:33
solve simpler problems first. Experts
1:14:36
do not try to think about everything at once. When
1:14:39
faced with a complex problem, experts
1:14:41
often solve a simpler problem first, one
1:14:43
that addresses the same core issues in
1:14:45
a more straightforward manner. In
1:14:47
doing so, they can generate
1:14:50
candidate solutions that are incomplete,
1:14:51
but provide insight
1:14:53
for solving the more complex problems that they
1:14:55
actually have.
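[A hedged illustration of that closing insight, not an example from the book: asked for cheapest routes in a weighted graph, one might first solve the unweighted problem, fewest hops via breadth-first search. The simpler solution is incomplete, but it settles the core machinery (frontier, visited set, stopping rule) before edge weights complicate it.]

from collections import deque

def fewest_hops(adj, start, goal):
    """Simpler problem first: fewest edges between two nodes.

    adj maps a node to an iterable of neighbours. This ignores edge
    weights entirely; it is an incomplete answer to 'cheapest route',
    but it pins down the structure a weighted version would reuse.
    """
    if start == goal:
        return 0
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        for nxt in adj.get(node, ()):
            if nxt == goal:
                return hops + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return None  # goal unreachable from start

# e.g. fewest_hops({"a": ["b"], "b": ["c"]}, "a", "c") -> 2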