Episode Transcript
0:00
Worried about letting someone else pick out
0:02
the perfect avocado for your perfect impress
0:04
them on the third date guacamole? Well,
0:07
good thing Instacart shoppers are as Instacart
0:09
choose your own adventure and skip the
0:11
shopping side quests. Where available you can
0:13
get ice cream delivered to your hotel,
0:16
sunscreen to the pool, or cold brew
0:18
to your bed. Well, door in as
0:20
fast as 30 minutes. Wherever you find
0:23
yourself this summer, you can get the
0:25
goods. Download Instacart for free delivery on
0:27
your first three orders. Offer
0:29
valid for a limited time. Minimum
0:32
$10 per order. Excludes restaurants. Additional
0:34
terms and fees apply. Welcome
0:37
to IAI's debate on loving
0:39
oneself and loving others. What
0:41
is the trouble with altruism?
0:44
IAI Live is partnering for this event
0:46
with Closer to Truth, which is on
0:49
Cosmos, Consciousness, Meaning. See closertotruth.com.
0:51
Closer to truth, not the
0:53
truth. closertotruth.com, also the Closer to
0:56
Truth YouTube channel. From
0:58
charity givers to those who
1:00
sacrifice themselves in war for
1:02
others, we see altruism and
1:04
selflessness as virtues to be
1:06
applauded. Those who take
1:09
no heed of their own interests
1:11
are highly praised in Western culture,
1:13
but many point to a danger. Some
1:16
studies show that altruism gone
1:18
awry leads to tolerating abusive
1:20
partners, eating disorders, and depression.
1:24
And critics argue that some
1:26
of history's most horrific episodes
1:28
rose from appeals to altruistic
1:30
tendencies. Forced sterilizations in the
1:32
West were justified as better
1:34
for all the world. Should
1:37
we see unhampered altruism not
1:39
only as futile, but as
1:41
actively dangerous? Are these
1:43
virtues merely a device to
1:46
defend outcomes we think beneficial
1:48
for ourselves and to
1:50
exert power over others? Or
1:52
is selflessness in fact vital,
1:54
and would relegating it to
1:56
secondary status only see more
1:58
corruption and self-obsession? And
14:00
we're not interdependent, that our lives are not
14:02
connected. And the gendering is
14:04
this: the emphasis on altruism in moral theory
14:07
is there because egoism is assumed. And
14:10
it's assumed because it's largely been developed
14:12
by men who assume self-interest. And
14:15
so, you know, the kind of
14:17
default place for women is
14:19
that we were supposed to be the
14:21
altruists who took care of others while
14:23
men took care of themselves, basically. Being
14:27
an altruist: is altruism ever defined by the consequences
14:29
to the receiver as well as the motivation
14:31
of the giver? Let me give you an
14:33
example. Let's say I give a charity $10,000
14:36
only because I want my name listed
14:39
and to be praised. And the charity
14:41
uses the money wisely to help the
14:43
homeless. Or, the opposite, I
14:45
give the $10,000 anonymously, but
14:47
the charity uses the money
14:49
to buy drugs to keep
14:52
the homeless sedated. So
14:54
do consequences ever determine,
14:59
as well as the motivation of
15:01
the giver, the consequences to the receiver?
15:03
Short answer, Kasia. Well,
15:06
it's very simple for me as I'm a
15:08
consequentialist. Yes. So I would
15:10
take the first option gladly if
15:13
the consequences are fine. The other thing is,
15:16
however, that maybe you would
15:18
not be the most virtuous
15:20
person, right? So
15:23
morality can talk about actions and
15:25
can talk about character. But as
15:27
for actions and for altruism, I
15:30
will go with the first option.
15:32
Okay. Carol. Yes,
15:34
it can have unintended consequences for the
15:36
relationship. Let's take my example.
15:39
I'm a woman who's pregnant and I decide
15:41
to have an abortion because you, the
15:44
father of this child, wants me to have
15:46
an abortion. So I have the abortion for
15:49
you because that's what you want. And
15:51
then I blame you. I
15:55
hold you responsible. So in fact,
15:57
it undermines the very relationship it was
15:59
intended to sustain. Richard?
16:03
I think altruism involves not only benefiting
16:06
another individual, but also having a cost
16:08
to ego, to the actor.
16:12
And in the cases that you
16:14
said where you give $10,000, but
16:17
you get an immediate benefit, then
16:19
I call that mutualism. OK,
16:23
let's go on to our first theme now. Where
16:25
do moral obligations, altruism being our
16:28
example today, where do they come
16:30
from, are they absolute universals or
16:33
relative? Now, I'm going to begin by forcing
16:35
you to say one word, absolute or relative,
16:37
then we're going to go into some details.
16:40
So everyone, one word, morality, absolute or relative.
16:42
I think I know the answers for each
16:44
one, but let me hear it. Richard? OK,
16:47
yeah, and I don't think it follows the rules of
16:49
physics. It's relative. Carol?
16:55
I was going to say neither. What's
16:58
your alternative? Oh,
17:02
what's my alternative? When I say neither, it
17:04
depends on how you understand morality. I would
17:06
say relative if you mean a set of
17:08
precepts. If you want to talk about
17:10
empathy, then that's part of who we are. Kasia?
17:15
I would say it's absolute. It's
17:17
universal and absolute. OK,
17:20
so let's start. Richard, moral obligations
17:22
like altruism are rooted
17:25
in evolution. What can
17:27
you infer from the behavior
17:29
of non-human primates that enlightens
17:33
this discussion? I
17:36
think we have to think about what morality is. And
17:39
so I'm going to define it as behavior that
17:42
is guided by a sense of right
17:44
and wrong. So
17:49
this is behavior that follows moral
17:51
rules. When
17:53
I then compare that between
17:56
humans and non-human primates I
18:01
think that only humans
18:03
have morality in this
18:05
sense. So you
18:07
can certainly point to humans, to
18:09
animals having empathy. I
18:12
think it's a sort
18:15
of morality out of sympathy, rather than
18:18
a morality out of following rules. So
18:22
for me, you have a very
18:25
different system in humans for
18:27
guiding behavior than
18:30
in other animals. I don't see
18:32
any evidence in other animals that they
18:35
have anything like a conscience. And
18:39
a conscience serves to
18:42
modulate the relationship between
18:44
egoistic tendencies and altruistic
18:47
tendencies. So
18:49
animals, I think, do not have
18:51
these altruistic tendencies that we're interested
18:54
in today. And
18:56
for me... Does
18:58
your execution hypothesis
19:00
have meaning here?
19:03
Absolutely, yes. So I was going to
19:05
say that the
19:08
question of why individuals should
19:10
ever behave altruistically,
19:13
in the sense that we've been talking about, has
19:16
to be answered by a mechanism
19:18
that explains punishment. It's
19:22
only if there is a capacity for
19:24
punishment that one can envisage how
19:27
a non-moral animal evolves to
19:30
become moral. And
19:32
for me, the punishment is best
19:34
explained by something that we see
19:36
even nowadays sometimes operating in society.
19:40
And that is
19:42
a really extreme
19:45
kind of punishment, namely execution
19:48
of individuals who do not
19:50
follow moral rules. And
19:52
humans are the only species that
19:55
we know of that can
19:57
execute other members of their
20:00
own species. Well,
20:02
Kasia, how do you reconcile
20:05
your objectivist ethics with the
20:07
evolutionary critiques that Richard is
20:09
just elucidating?
20:13
So it's really very interesting because
20:16
if I believe, as I do, as
20:19
a moral realist, that moral
20:22
obligations are universal
20:25
and they come from reason, yes,
20:27
that's what I would argue, and
20:30
how do I put it together
20:32
with all that we
20:34
know about biology and evolution? And
20:38
I try it this way. I don't
20:40
deny that certain things come
20:43
or start with our
20:46
biology and evolutionary basis. And
20:48
of course altruism is a
20:50
great example. Altruism
20:54
together with empathy allows
20:58
us to prolong
21:01
our genes and then take
21:03
care of the whole
21:06
species and so on. But
21:09
it doesn't stop here. And
21:11
I think it's this brilliant
21:13
moment in which we realize
21:15
that what is in our
21:18
biology is not everything that
21:20
we believe is good or
21:22
bad. And
21:25
so here, Peter Singer comes in. One of his books
21:27
is called The Expanding Circle,
21:30
where
21:32
he says, well, yes, we
21:35
start with our children and those
21:37
in our village, but
21:40
we go on to
21:42
some other people and
21:44
we understand that if I
21:47
care for my children, I should also
21:49
care whether there is enough food for
21:52
children in Africa or somewhere in Asia.
21:55
But what's even more, I understand,
21:57
I begin to understand that there are
22:00
other beings who
22:02
suffer and feel pleasure.
22:05
I'm not only a utilitarian, but I'm
22:07
also a hedonistic utilitarian; come on, I'm
22:10
a hedonist. So I do believe that
22:12
we have obligations towards those that
22:16
feel pleasure and pain. And
22:19
I think it's hard to
22:22
deny that we have obligations
22:24
towards other animals, so towards
22:26
other species. And
22:28
for me to say that, I
22:31
have to rely on reason rather than
22:35
on data about the
22:37
evolutionary
22:40
basis of altruism.
22:45
So Kasia, if reason discerns
22:47
objective moral truths, like
22:52
how we should approach animals, and
22:54
if reason developed through evolution,
22:56
then it certainly sounds like
22:58
the moral truths obtained by
23:00
reason are
23:02
contingent on evolution, correct? Well,
23:05
they start with, with
23:09
our ability to do maths,
23:11
for example, right? So we
23:14
now can do things
23:16
in maths that do
23:18
not help us in
23:22
any evolutionary sense whatsoever,
23:25
but we can because our reason has
23:27
developed so much. And I
23:29
believe it's the same with morality. So
23:32
we are developing our
23:36
thinking about what
23:38
we should do in morality,
23:41
in moral cases, and
23:43
that goes beyond anything that is
23:45
understandable or
23:49
can be explained in terms of
23:51
evolution. Richard, do you agree with that?
23:55
I'm finding it difficult to... to
24:00
engage mentally with Kasia
24:02
because it seems
24:04
to me that you're talking about what we should be doing.
24:07
And I'm thinking about where we
24:09
come from. I'm not making any
24:12
recommendations. No, I mean, I'm
24:14
so aware of the way that moral rules can
24:16
change that I don't think there's any kind of
24:19
moral absolutism. I
24:22
kind of agree with you that I think it's wonderful
24:24
if people are astonishingly
24:26
altruistic. It's just
24:28
a different kind of approach to
24:30
the question from
24:33
understanding why we are
24:35
moral in the first place. So,
24:38
you know, I embrace
24:40
your proposal that
24:44
we should be altruistic in these ways,
24:48
but it still leaves me questioning why
24:50
it is that it
24:52
has happened, that we
24:55
have a tendency to do this. And for me,
24:57
you know, you are virtue signaling in the best
25:00
possible way. You know, you're saying, look
25:02
how wonderful life would be if we're all as
25:04
virtuous as I am. And
25:07
I agree with you. We may be. I'm not saying
25:09
that. Well,
25:12
it's okay. We want to be as virtuous as you are.
25:15
That's part of my goal in life. Carol,
25:17
does a cultural perspective affect
25:20
this debate we're talking about
25:22
over moral origins, principles,
25:26
and what would be some examples? Well,
25:31
I mean, the thing that's so loud to me
25:33
in this whole discussion is the gender aspect
25:36
of it, which is very cultural. But
25:38
I also, I mean, I find myself in
25:40
agreement, I think, I think with Richard in
25:42
the sense that I
25:45
believe that first of all,
25:47
and there has been a huge
25:49
change within the human sciences in
25:51
terms of the understanding of who we are as human beings.
25:54
And we are fundamentally relational
25:57
responsive creatures. I
25:59
mean, we are both born with a voice,
26:01
with the capacity to communicate our experience and
26:03
with the desire to live in
26:06
relationship, not alone, but in relationship with
26:08
other people. Now that is
26:11
the foundation then for, you know,
26:13
how do you stay in relationship? How
26:15
do I stay in relationship with myself
26:17
and with other people? And
26:20
the moral philosopher Sandra Laugier at the
26:22
Sorbonne talks about a paradigm of
26:24
attention, that morality has
26:26
less to do with rules and absolutes
26:28
and so forth, than the need to pay
26:31
attention or as I would say, to
26:33
be careful rather than careless, both
26:36
with myself and with other people and
26:38
the consequences of carelessness in a
26:41
world which is interdependent, where
26:43
it's not me versus you in a zero
26:46
sum game, but a question of being careful
26:48
and paying attention. And
26:50
then if you talk about who in
26:52
fact does the work of caretaking of
26:55
children, because you brought up children, it's
26:57
primarily women and often women of color.
27:00
And you know, so I think this
27:03
whole discussion from my point of view
27:05
is and culturally so deeply
27:07
gendered, where we kind
27:09
of idealize caring and also kind of
27:11
devalue it. I mean, for
27:13
example, in the United States, the
27:16
Build Back Better bill that Biden, you
27:18
know, and so forth, got passed, only
27:20
passed because they eliminated childcare from
27:23
the bill. So
27:25
there is this, you know, to talk about
27:28
altruism or morality, it's
27:30
been a lot of it has been at the expense
27:33
of those people who do the caretaking work.
27:36
And I mean, I think that's really
27:39
been a problem. And it explains why
27:41
right now in the world we live
27:43
in, the costs of carelessness and indifference
27:46
are just huge. And evolutionary
27:48
anthropologists, I'm thinking specifically of the
27:50
work of Sarah Blaffer Hrdy,
27:53
who says that evolution selected
27:56
for empathy, mind
27:58
reading, the ability to intuit
28:00
what other people are thinking, and
28:03
cooperation, which were key to our
28:05
survival as a species. So
28:07
it's not altruism versus egoism. It's
28:10
how do we, in a
28:12
sense, value and in a
28:14
sense educate and develop these
28:16
basic human capacities at
28:19
a time when our surgeon general says
28:21
one of the biggest public health problems
28:23
is loneliness and the absence of relationships.
28:25
So to me, the foundation for this
28:27
whole discussion has to do with relationships
28:30
more than with rules. Is
28:32
there a clear cutoff, a step
28:34
function break, between discussion
28:37
of the evolution of
28:40
what we call morality and
28:43
the application of morality under
28:48
current applications, what you're talking about,
28:50
Carol? So is
28:53
there any relevance to evolution?
28:56
You use one example. Is
28:59
that a self-selection of one
29:01
particular example that aligns
29:03
with your own particular
29:06
theory? Carol? I'm
29:09
going to come at what you said, I think from a slight
29:11
angle, but I hope you can see why I'm coming at it
29:13
this way. To me, the focus
29:16
on altruism is from a group of
29:18
people who take egoism for granted. And
29:21
a big move is to think of someone
29:23
other than themselves. In all of
29:25
my work with women, the big move was
29:27
to include the self. OK,
29:31
but is
29:33
that a cultural development
29:36
that has occurred in humans? Is there
29:39
any biological basis to
29:41
that? Richard? Well,
29:43
for me, the biology is in the evolution of
29:45
a thing called norm psychology,
29:48
people have called it, which is an innate
29:52
tendency to
29:56
absorb conventions, rules,
29:59
moral precepts in the society
30:03
and follow them and to
30:05
punish those who don't follow
30:08
them. There's obviously a tremendous variation
30:10
in this and the variation is
30:12
doubtless both genetic and environmental. We
30:15
see the variation in psychopaths
30:18
at one end. Psychopaths
30:20
are inert to
30:24
moral precepts to a very large extent and generally
30:27
what you find is that psychopaths
30:29
understand moral rules, they just choose not to
30:31
follow them. On
30:34
the other end, you've got scrupulosity, which
30:38
is the term given
30:40
for people who are
30:43
excessively altruistic. It's a
30:45
biological phenomenon that may
30:47
be partly genetic and
30:50
partly environmental. So
30:52
the range of approaches
30:56
to following
30:58
moral rules is
31:00
enormous, but on the whole, humans
31:03
are very different from every
31:05
other species and our
31:07
biology consists of
31:10
the norm psychology, which is essentially equivalent
31:12
to having a conscience and acting on it.
31:15
It's not in the absorption
31:17
of particular rules. There's no
31:19
indication that the biology actually
31:21
has led to the internalization
31:26
of specific moral rules. What
31:29
it's led to is the capacity to
31:31
internalize moral rules that are given to
31:34
them by society. I'd
31:36
like to clarify that what we're
31:39
not talking about is the formal
31:41
reproductive altruism that's used in evolutionary
31:43
theory, which is the so-called Hamilton's rule:
31:47
altruism is favored when the benefit, which is
31:49
the number of offspring equivalents
31:51
gained by the recipient of
31:53
the altruism, weighted by the
31:55
genetic relatedness of the donor
31:57
to the beneficiary,
32:00
is greater than the cost, again
32:02
the number of offspring equivalents paid by
32:04
the donor to do the altruistic
32:07
behavior. Now, to make that really
32:09
simple, supposedly the
32:11
biologist JBS Haldane declared,
32:14
I would lay down my
32:16
life for two siblings or
32:18
eight cousins, because siblings share
32:20
50% of our genes and cousins 12.5%
32:24
of our genes. Haldane actually
32:26
said two brothers, but
32:28
I took the liberty to update them.
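In symbols, the rule as just paraphrased is standardly written as

$$ rB > C, $$

where $r$ is the genetic relatedness of donor to recipient, $B$ is the benefit to the recipient in offspring equivalents, and $C$ is the cost to the donor in the same units (this is the textbook form of Hamilton's rule, not the speakers' own notation). On that arithmetic, Haldane's quip checks out: two siblings at $r = 1/2$ give $2 \times \tfrac{1}{2} = 1$ offspring equivalent, and eight cousins at $r = 1/8$ give $8 \times \tfrac{1}{8} = 1$, each just balancing the one life laid down.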
32:31
Richard, has evolutionary altruism, as I just
32:33
defined it, influenced at all the
32:36
way we approach optimum practical
32:38
altruism? Well, the definition of
32:40
altruism that you just gave is one that I
32:42
think is useful in terms of thinking about evolutionary
32:45
biology, but I chose not
32:47
to use it for today because I
32:49
wanted to get away
32:51
from the investment in kin,
32:54
which we're calling nepotism today,
32:57
and restrict the use
32:59
of altruism to this much more puzzling
33:03
case where individuals are
33:06
benefiting, at their own expense,
33:09
those who are not related to them. So
33:13
the Hamiltonian equation need not concern
33:15
us today. Carol, it seemed like
33:17
you would disagree with that, because
33:19
at least a little bit of
33:21
understanding of that Hamilton rule gives
33:24
reason to
33:27
include yourself
33:29
or your kin in a
33:32
broader definition of
33:35
altruism than just purely
33:37
selflessness. It's a bigger
33:39
inclusion. Is that
33:42
fair? I think my
33:44
whole approach, as I said, I write about a
33:46
different voice. I write about care ethics as a
33:48
different moral voice where the
33:51
starting point and the focus is on
33:53
relationships, not on rules. So
33:55
I'm in a certain sense at odds with this, and what
33:57
I'm thinking about as I listen to this conversation
34:01
is that I've been very interested
34:03
in people, individuals who under
34:05
extreme circumstances do what
34:07
we would call extraordinary heroic
34:10
acts. I mean, people who,
34:14
for example, you know, under
34:16
the Holocaust rescued Jews at their
34:18
own expense, the what is it,
34:20
the zookeeper in Warsaw
34:22
who hid them. All
34:25
of these people when asked, they said,
34:27
what I did was nothing heroic. What
34:29
I did was simply what any human
34:31
being would have done. And
34:33
the idea here is it's really a reversal,
34:36
which says that as humans, because
34:38
we're born with a capacity, I mean, we
34:40
have mirror neurons, we have
34:42
a capacity for empathy. All
34:44
of the current baby research shows
34:47
that at one year of age, we're
34:49
exquisitely sensitive to who's in connection with
34:51
us and who is not. And
34:54
the breaking of connection
34:56
is really distressing. And
34:59
so basically the question is not how
35:01
do I gain the capacity to be
35:03
altruistic and overcome self interest. The
35:06
question is how do I lose
35:08
this fundamentally human capacity to live
35:10
in relationship with myself and with
35:12
other people? And that's the foundation
35:14
of what we call morality. So
35:16
the whole egoism altruism debate to
35:19
me is based on a false
35:21
premise, which is, as I said,
35:23
it's based on that either I
35:25
act for myself or for you.
35:28
And that the big thing I have found in years
35:30
of listening to particularly to
35:32
women, but not just to women, but
35:35
for women in Western culture
35:37
at least. And I think it's not
35:39
just in patriarchal cultures. The
35:41
question of can I include myself
35:43
in those people who I care
35:45
about and get away
35:48
from this word selfish, which
35:51
makes it so
35:54
that's why I think your examples, Robert,
35:56
at the beginning of anorexia, of depression,
35:58
of domestic abuse. was
36:01
showing the problems with selflessness
36:03
or with altruism for women,
36:05
where the big move is, I
36:11
mean, this is the famous Virginia Woolf
36:14
essay about killing the angel in the
36:16
house, the utterly unselfish woman who
36:18
has no voice, seemingly has no voice,
36:20
because that's actually not true. Everybody
36:23
has a voice. Okay, so we're
36:25
not gonna resolve whether morality and
36:27
altruism is absolute or relative today.
36:30
I'd say absolutists have the heavier
36:32
burden of proof. Let's go on
36:34
to theme two. Let's move to
36:36
doing altruism, the practical side of
36:38
the debate. How
36:40
can we be confident that our efforts
36:42
to enhance altruism really improve the wellbeing
36:45
of others? Could altruism
36:47
ever be futile or even dangerous?
36:51
Kasia, if we grant
36:53
that altruism is an appropriate moral
36:55
goal, how practical is it? How
36:58
can we optimize our altruistic efforts
37:00
and on a large
37:03
scale? So
37:09
of course, as with any human
37:11
action, we cannot be confident that
37:14
what we do is the
37:17
right thing or even good for others.
37:20
But with altruism, there
37:23
is this interesting thing about
37:25
checking how my action influences
37:28
the other. And
37:30
much depends on how
37:32
we define wellbeing, because
37:34
if altruism is
37:36
set towards the wellbeing of others, then
37:39
we need to learn what the wellbeing
37:41
is. And of course,
37:43
it's easier when we think about
37:46
survival or food or basic needs,
37:48
but when we do
37:51
it philosophically, we usually group
37:53
this wellbeing into three main
37:55
categories. We talk
37:58
about fulfillment of desires. feeling
42:00
for other people. I mean, the
42:02
wish that other people, you know,
42:04
would, would, that their lives would
42:07
go well. And I think
42:09
of the simple things, I mean, you
42:11
can think of it as John Donne,
42:13
no man is an island, you can
42:16
think of Martin Luther King, we are
42:18
born into a network of mutuality tied
42:20
in a single garment of destiny. What
42:22
affects one directly affects all indirectly. We're
42:24
living in a time of climate change
42:27
and global warming, this notion of separateness
42:29
that it's either me versus you. I
42:32
mean, it's really very, very dangerous, I
42:34
think, at this point in history, it's
42:36
just simply not true. And
42:39
the more we understand, and I come, I
42:42
come not from where Richard comes from, but
42:44
also, I come from research
42:46
and evidence about, you
42:49
know, about evolution,
42:52
anthropology, developmental psychology, who we
42:54
are as humans. And the
42:56
question has been reversed, which
42:58
is not how do
43:00
we gain the capacity to overcome self
43:03
interest, but how do we lose the
43:05
capacity to pick up and
43:07
respond to what's going on in the
43:10
environment around us, including other people. And
43:13
there's a huge cost to not
43:15
responding, including to ourselves. I
43:17
think that's naive to think that if
43:20
I just have no feeling for you, I
43:23
mean, you can call me an egoist, but
43:25
it's going to affect me. And that's
43:27
partly what we see from the surgeon
43:29
general. There's a major health crisis of
43:32
lacking relationships, and loneliness,
43:34
not having relationships, affects
43:37
you physically; it means it's
43:39
worse for you than smoking, you
43:41
know. So I think the
43:43
question is, how do we, how do we
43:45
lose our connections to ourselves and other people.
43:47
And in this egoism-altruism split,
43:50
the egoist has no connection to others
43:52
and the altruist has no connection to
43:55
themselves. It's just, to
43:57
me it is the culture.
44:00
respond to that. Thank
44:02
you. I would say
44:04
a few words about this love and feelings.
44:06
So I think that what
44:09
you actually say goes very well
44:12
with my
44:16
theory of reasoning because it's
44:18
clear that I cannot feel
44:21
so much for, or even
44:23
know, those in my country, let alone
44:27
people that
44:29
suffer in Africa or
44:31
Asia or anywhere else.
44:33
But I know that they do and
44:36
my reason tells me what
44:38
is my obligation or what I should
44:40
do towards them, how to help them
44:43
and I can reason that. I cannot
44:45
feel it. My favorite
44:47
English philosopher, Henry Sidgwick, said
44:49
once that I cannot love
44:51
the whole world. I know
44:54
that it's a lovely Buddhist
44:56
idea but I'm
44:58
incapable of doing that but I
45:00
can reason and I know that
45:02
I should help those
45:04
who suffer although I don't know
45:07
them, I can't see them and
45:09
I don't know anything about them. Okay,
45:13
it seems like there's
45:15
no optimum generalities about
45:17
doing altruism but I
45:20
hear utilitarianism and the personal
45:22
care ethics each
45:24
can provide an enriching, not
45:26
necessarily contradictory perspective, and
45:29
we're obviously all aware of the dangers. So
45:32
our theme three, should we
45:34
abandon this ideal of
45:36
altruism and if so,
45:39
what kind of principle should
45:41
guide our ethics going forward?
45:43
So Carol, continuing on what
45:45
you were saying before, how
45:49
can altruism be reframed in
45:52
terms of the care ethics
45:54
that you've developed? It's
45:57
reframed in terms of the importance of
45:59
living in relationship with oneself and with
46:02
others, and that the problem is what
46:04
stands in the way of
46:06
our living in connection with ourselves and with
46:08
other people, and in response to
46:10
what Kasia was just saying, because I think
46:12
there are a lot of overlaps when I listen
46:14
to you. I mean, I also
46:17
think we have to think, I mean, in
46:19
the US, in the
46:21
time of the war in Vietnam,
46:24
one of the most powerful things
46:26
was that photograph of that child
46:28
on fire with napalm. And
46:31
more than reason or any
46:33
philosophy, it was that photograph that
46:35
broke through that sense that
46:37
this child in Vietnam, none of
46:39
us knew her name, none of
46:41
us had any personal connection. But
46:43
instantly, there was a sense that,
46:45
if I allow that, that photograph
46:47
brought her into connection with us,
46:49
and suddenly we had a relationship
46:51
with her. And that changed the
46:53
way we felt and the way we acted about
46:56
napalming the population there. So I
46:59
think the question is, what
47:02
facilitates our ability to recognize our
47:04
interdependence in our connection, and this
47:06
would go with other forms of
47:08
life too. And what
47:10
promotes us in thinking that we
47:13
are apart and can ignore and
47:15
not pay attention. And for
47:18
me, as to ourselves as well
47:20
as to others. And I remember at the
47:22
time I was doing my interviewing right after
47:24
Roe v. Wade, and it's relevant now because
47:26
of the reversal of that by the US
47:28
Supreme Court. And I
47:30
remember asking women, if it's
47:32
good to be responsible to people, and I think Kasia
47:34
would agree with me here. And you
47:37
know, empathic with their needs and
47:39
concerns, you're a person, why is it
47:41
selfish to respond to yourself? And woman
47:43
after woman at that time said to
47:46
me, good question. So I think,
47:48
to me, the
47:51
urgency now for humans as a
47:53
species and for survival really is
47:55
to ask what
47:57
stands in the way of our living in relationship.
50:00
that even with some enormous
50:02
catastrophes, both natural
50:05
and made by humans, it
50:08
is rather possible that our
50:10
species will survive, even
50:14
in small numbers. Anyway, if
50:16
that goes on for a really
50:19
long time, that means billions,
50:22
trillions, I don't know how many, other
50:25
people living then. I
50:27
have no answer to
50:29
that. If you ask
50:31
me personally, I'm not
50:33
a long-termist. I
50:36
believe that we should take
50:38
care of things that are
50:41
closest to us, and
50:43
climate change, AI, and wars
50:45
in the world are
50:48
the closest examples. So
50:50
we have a lot to do now
50:53
to ensure that
50:55
our children and grandchildren
50:59
will have a good life. And
51:02
what will come in future, no one
51:04
knows. It's not that we shouldn't
51:06
think about it, we should, but
51:08
it's of course the matter of how
51:10
much money and how much energy we
51:13
should put into those things that are really
51:15
far ahead. Richard, can
51:18
this discussion between interpersonal
51:20
relations and utilitarianism in any
51:22
way be informed by
51:24
our evolutionary foundations? Well,
51:28
I'm sure it can. And it seems to me it's
51:36
very important to have a strong
51:38
theory of altruism. It's
51:41
important because altruism itself is
51:43
important and we want to promote it. It's
51:46
a good thing for everybody. But
51:50
it is vulnerable to the
51:52
people who will
51:54
take advantage of other altruists.
51:56
And we should be aware that
51:58
there are all sorts of their
56:00
situation, it means you have to know their culture and
56:02
so forth, that that is
56:04
a basic human capacity that
56:06
has to be fostered and educated and
56:08
developed. It's not myself versus others. It's
56:11
that we're all in this together. Kasia,
56:13
one question, a final one. We're almost
56:15
out of time. But we
56:17
haven't given you a chance to articulate
56:19
the importance of other sentient creatures, you've
56:21
mentioned it, but haven't defended it in
56:24
terms of our global
56:27
utilitarian approach to
56:29
altruism. What do
56:31
you mean by all sentient creatures? And
56:33
doesn't that present a lot of conflict
56:35
of interests? Of
56:38
course it does. And by
56:41
sentient beings, I mean those
56:43
that feel pain and pleasure.
56:45
And we are talking
56:48
about facts here. We still
56:50
learn, we still study, which
56:53
animals do and which don't,
56:55
which creatures do or don't feel pleasure and
56:57
pain. But surely those
57:00
that do, that
57:02
do, we should take care
57:04
of them. And the interest,
57:07
as you said, and the conflict
57:09
of interest is obvious. And that's
57:12
why discussion of self consciousness and
57:14
consciousness will help as well. Because
57:17
it depends on the answer to
57:19
why a life matters. Yes, what
57:22
is wrong with killing? Why
57:25
would it be wrong to
57:27
kill one creature and not the other?
57:30
And these are all moral questions,
57:33
really important. And again,
57:35
I hope we are, we
57:37
are making some progress with that. But
57:40
we should be careful. And
57:42
I think that we should be much more careful
57:44
than we are. In a way, well,
57:47
we should all live happily
57:49
being vegan, or most of us
57:51
could give up
57:55
on meat and animal products
57:57
without big
57:59
sacrifices to ourselves. So
58:02
some ways are easier than they seem,
58:05
I think. So
58:07
the ideal of altruism is certainly
58:09
a worthy goal and there are
58:11
multiple paths to reach it, but
58:13
let not the pursuit of perfection
58:15
inhibit us from the doing of
58:18
good. So thank you everyone. Thank
58:20
you Carol, Kasia, Richard. Thanks everyone
58:22
for watching. Thanks
58:30
for listening to this week's episode of
58:32
Philosophy for Our Times. If
58:34
you enjoyed today's episode, don't forget
58:36
to leave a like on your
58:39
platform of choice and visit iai.tv
58:41
for hundreds more podcasts, videos and
58:43
articles from the world's leading thinkers.