Neuroscience and Moral Reliability
In the first part of his engaging paper, Levy (2011) proposes for neuroethics the role of revising the
categories that Western applied ethicists make use of. On this view, neuroethics is not only a
branch of applied ethics but a broader intellectual endeavor.
I agree with Levy's overall point that neuroscience has the potential to modify, to some extent,
the Western understanding of metaethics and normative ethics, but I argue that scientific advances
can significantly impact moral thought only if a majority position on certain philosophical
quandaries is reached.
In fact, there is broad disagreement both about the moral relevance of neuroscience
(Berker 2009) and about which particular metaethical or normative views it might undermine. The key point
is the following: In the philosophical community there is no majority view, let alone an agreed-upon
view, about the ways to assess the reliability of a moral intuition or a moral principle. This is due to
the fact that there is no shared interpretation of the notions of “correctness” or “truth” in moral
philosophy. According to realists or objectivists, moral propositions must track some moral truth,
the access to which can be provided by either reasoning or intuition. According to anti-realists,
moral propositions either express a conative state or are an evolutionary tool that has increased the
reproductive fitness of a highly social species such as H. sapiens; on this view, such propositions do
not track any kind of truth. In his paper Levy distinguishes reliable and unreliable moral
intuitions, but gives the readers no explicit criterion to discriminate between the two. Since the
unreliable items are often correlated by Levy with “irrationality”, the general impression the readers
get is that Levy assumes this criterion: Moral intuitions that stem from reason are reliable, moral
intuitions that stem from emotion are unreliable.
But this criterion, as Levy himself noted in his 2007 book (Levy 2007, 295-296), has been
debunked by the studies of Antonio Damasio on lesions to the ventromedial prefrontal cortex (see
Damasio 1994) and by behavioral research on psychopaths, who seem to possess full rationality but
to lack some important emotive components of the healthy human mind. According to these studies,
emotion and emotion-related brain regions are essential for a non-pathological moral behavior.
Even in the case of disgust, where the emotional factor might have seemed clearly morally
irrelevant, a prominent applied ethicist has argued for the moral importance of
such an emotion (Kass 1997). There is therefore no consensus that emotions are an unsound
basis for moral judgments.
Such a position should have been argued for, but Levy unfortunately leaves the readers empty-handed. All that he offers us is an appeal to Greene's papers. As Levy himself correctly notes, much
can be said about these widely discussed experiments (mainly Greene et al. 2001, 2004). They
examine the BOLD patterns of experimental subjects while the latter confront moral dilemmas in
which one life can be sacrificed to save five. Greene and colleagues distinguish dilemmas that
usually elicit a utility-maximizing response from others that normally produce
non-intervention (so that the five people at risk in the dilemma die). The
difference lies in the use of direct, physical violence against a fellow human being in the latter
group, whereas in the former the life of the one is sacrificed through mechanical, distal
means. Greene then notes that the interventionist response is more easily justified by
consequentialist ethical theories, whereas omission of intervention is more easily justified by
deontology. The results of Greene's experiments show that areas of the brain connected with
emotions are more active in the dilemmas that normally give rise to a “deontological response”,
whereas areas connected with cognitive control and reasoning are more active in the dilemmas that
regularly produce “consequentialist responses”. From these data Greene derives a dual-process view
according to which moral judgments can be generated either by emotional moral intuitions or by
more cognitive processes. Only the latter moral capacity is reliable and it corresponds to
consequentialism, whose validity is hence corroborated.
But why should emotions be unreliable?
Greene (2008) answers that the kind of emotions elicited by personal violence are a legacy of our
evolutionary past, an adaptation that limited intra-specific violence in an epoch in which it was
technically impossible to perform violence in an impersonal way. Greene argues that deontology
both relies on intuitions stemming from those emotions and requires moral realism, as it considers
some moral judgments to be objectively true. But these intuitions are caused by evolution, which
does not track any possible moral truth. Therefore deontology relies on intuitions that contrast with
its own standards and is debunked.
There is unfortunately insufficient space to discuss all the relevant issues here, so I will mention
only the points I deem most significant.
First, ethical theories cover a much broader range of possible actions (such as theft, lies, sexual
behavior, blood and organ donation, etc.) than that taken into account by Greene and coworkers. As
Dean (2010) correctly suggests, it seems far-fetched to reach conclusions about whole ethical
theories starting from such a tiny and unrepresentative sample of their domain.
Secondly, subjects were not asked why they decided as they did, so that we lack information about
the justification of their choices. Greene’s correlation of subjects’ judgments with particular ethical
theories seems to be largely hypothetical, even more so if one considers that not all deontological
theories need include an absolute prohibition against killing. In fact, there are some dilemmas taken
into account by Greene, such as Crying Baby, in which the person to be sacrificed dies whatever the
agent chooses to do. Under such conditions it is unlikely that deontologists would claim that killing
the one is morally forbidden. Hence Greene lacks evidence for his association of emotive responses
with deontology and of cognitive control with consequentialism.
Thirdly, if “deontological intuitions” are debunked by an evolutionary genealogy, the situation turns
grim for most ethical principles and intuitions. Tenets like “better to save more lives” or “pain is
bad” are for instance easily amenable to evolutionary explanations. Human ancestors that shared
these intuitions were likely to avoid pain and to rescue each other more than the members of human
groups that did not believe so. If propositions like those are debunked by an evolutionary
genealogy, consequentialism is in serious trouble, because it cannot do without them. For example,
“pain is bad” is fundamental to consequentialism, as this theory would be void without an account
of well-being that discriminates good consequences and bad consequences. Moreover, human
reasoning arose because of natural selection just as much as emotional mechanisms did. Hence, if
one concedes that evolutionary explanations debunk moral intuitions and principles, one also admits
a thorough uprooting of our current moral views, a deep change which strikes down both
consequentialism and deontology. If evolutionary debunking arguments are valid, all ethical
theories endorsing moral realism are in jeopardy (Kahane 2010). Since both deontology and
consequentialism usually include realist claims, this does not help either of the two contenders in
Greene's dual-process view. Consequentialism is as flawed as deontology if evolutionary debunking
arguments are correct.
This being said, I conclude that the normative claims Greene draws from his experiments are
unwarranted. Assuming arguendo that the correspondence between brain regions and emotion /
cognition is bijective, Greene's experiments only show that some moral dilemmas elicit a
heightened emotional response relative to others. Given that moral dilemmas involving direct
violence are stressful stimuli that bring about conflict between possible courses of action, this is
hardly surprising. Conclusions from this to the validity of whole normative moral theories seem, at
the moment, to be far-fetched.
Therefore neither Levy nor Greene provides a criterion that adequately distinguishes between
reliable and unreliable moral intuitions. This is quite a foregone conclusion, as there is currently no
majority position in the moral philosophy community about the empirical hallmarks of a particular
ethical theory, the possibility of testing both metaethical and normative theories, and fundamental
issues of metaethics, like the contrast between realism and anti-realism. In particular we lack a
position about these issues that is considered to be justified by most moral philosophers, according
to a common standard of justification. In its absence, there will be no agreement among
philosophers about the consequences of the forthcoming advances in cognitive and affective
neuroscience, and empirical science will be prevented from effectively contributing to the
development of moral thought in the way that Levy envisions.
References
Berker, S. 2009. The Normative Insignificance of Neuroscience. Philosophy and Public Affairs
37(4): 293-329.
Damasio, A. 1994. Descartes’ Error: Emotion, Reason and the Human Brain. London: Picador.
Dean, R. 2010. Does Neuroscience Undermine Deontological Theory? Neuroethics 3: 43-60.
Greene, J. D. 2008. The secret joke of Kant’s soul. In Moral psychology. Vol. 3, ed. W. Sinnott-Armstrong, 35-79. Cambridge, MA: MIT Press.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. 2001. An fMRI
Investigation of Emotional Engagement in Moral Judgment. Science 293: 2105-2108.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., and Cohen, J. D. 2004. The Neural Bases of
Cognitive Conflict and Control in Moral Judgment. Neuron 44: 389-400.
Kahane, G. 2010. Evolutionary Debunking Arguments. Noûs, e-published ahead of print, September
24, 2010.
Kass, L. R. 1997. The Wisdom of Repugnance. The New Republic, June 2, 1997.
Levy, N. 2007. Neuroethics. Challenges for the 21st Century. New York: Cambridge University
Press.
Levy, N. 2011. Neuroethics: A New Way of Doing Ethics. AJOB Neuroscience 2(X):YY-ZZ.