List of Abstracts (ordered alphabetically by first author)
Expectations and Reflection Explain the Knobe Effect
Mark Alfano, Paul Stey & Brian Robinson
Building on the work of Alfano, Beebe, & Robinson, we argue that the key to
understanding the Knobe effect is not the moral or prudential valence of the
consequences of the protagonist’s behavior but expectation (in)congruence. Our
thesis breaks into three parts: a description of the psychological conditions
under which the effect can be observed, a link between these conditions and
norm (in)congruence, and a rational reconstruction of the conditions and the
link.
At the most superficial level of analysis, we claim that people are more
inclined to attribute a wide variety of mental attitudes to an actor who produces
an effect contrary to expectations. This claim is supported by a new experimental
finding that expectations mediate mental state attributions in Knobe effect
vignettes. In all Knobe effect studies to date, participants answer questions only
after they find out what the protagonist does. This makes it impossible to probe
what they expect him to do before he acts. Unlike any previous study, our
experiment asked participants to state what they expected the protagonist to do
before they learned what he decided. Statistical analysis reveals that these
expectations explain most of the variance in subsequent attributions of mental
attitudes. When the protagonist acts contrary to expectations, participants say he
does so intentionally, but not otherwise.
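As a minimal sketch of what such a mediation pattern looks like statistically, consider the following simulated illustration (the variable names, effect sizes, and simple regression approach are illustrative assumptions, not the authors' actual analysis):

    # Simulated illustration of a mediation pattern: norm violation raises
    # expectation incongruence, which in turn drives intentionality ratings.
    # All data and coefficients here are invented for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    violation = rng.integers(0, 2, n).astype(float)         # 0 = conforms, 1 = violates
    incongruence = 1.5 * violation + rng.normal(0, 1, n)    # expectation (in)congruence
    attribution = 2.0 * incongruence + rng.normal(0, 1, n)  # intentionality rating

    def ols(y, *xs):
        X = sm.add_constant(np.column_stack(xs))
        return sm.OLS(y, X).fit()

    total = ols(attribution, violation)                 # total effect of violation
    direct = ols(attribution, violation, incongruence)  # effect controlling for mediator

    print("total effect of violation:   %.2f" % total.params[1])
    print("direct effect (w/ mediator): %.2f" % direct.params[1])
    print("R^2 with mediator included:  %.2f" % direct.rsquared)
    # If expectations mediate, the direct effect shrinks toward zero and the
    # mediator accounts for most of the explained variance, mirroring the
    # pattern reported above.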
A deeper level of analysis reveals that expectations are influenced by
salient norms, so when someone violates a salient norm (be it moral, prudential,
legal, aesthetic, conventional, or even merely descriptive), he typically acts
contrary to expectations and thus induces higher levels of mental state
attribution. This influence of norms on expectations explains why many
interpreters of the Knobe effect have been tempted to link it to morality.
Violating a moral norm is one way of going against expectations, but there are
many others, some of which involve adhering to a moral norm while violating
some competing norm. Moreover, the link is crucially between expectations and
salient norms, not just expectations and norms, period. We illustrate this point by
varying our vignettes. In all conditions, the protagonist is choosing whether to
invest his inheritance in a retirement savings account or donate it to a worthy
charity. Investing conforms to a prudential norm while violating a moral
injunction to help the needy; donating conforms to the moral norm while
violating a principle of prudence. Thus, in all of our conditions, two norms are
relevant, and only one can be satisfied. However, they are not always both
salient. In different conditions, an interlocutor raises neither, one, or both of
these norms to salience, which in turn influences both expectations and mental
state attributions. Participants are more inclined to say that the protagonist
intentionally does not help when the helping norm is salient; they are also more
inclined to say that the protagonist intentionally does not prepare for retirement
when the prudential norm is salient.
The ultimate level of analysis appeals to the rationality of both forming
and attributing mental states in the way just described. It makes sense to pause
and deliberate when the action you’re about to take would violate a salient norm.
Deliberation in turn leads to the formation of various mental attitudes, such as
beliefs, desires, and intentions. Since we have limited cognitive powers, it makes
sense to restrict our deliberative engagement to those cases where ignorance
would be most deleterious. Such cases typically involve the violation of norms, so
it would be reasonable to deliberate more about potential norm-violation than
about potential norm-conformity. Knobe effect studies in the literature typically
ask participants to attribute mental states (intention, belief, desire, etc.). Our
experiment branches out to the attribution of mental processing. We asked
participants not only whether they thought the protagonist intentionally brought
about certain effects but also to what extent he considered the potential
consequences of his behavior. Our findings in this regard corroborate our
interpretation: participants attribute high levels of deliberation if and only if the
agent violates a salient norm.
Probabilistic Inference in the Trolley Problem and its Variants
Chris Brand & Mike Oaksford
One of the most important developments within the psychology of reasoning and
decision making over the past two decades has been the finding that people do
not reason as if the premises of an argument were certain. Instead, it has been
demonstrated that people reason in a probabilistic fashion. Curiously, this has
received little attention within moral psychological research, despite it having
profound implications for the field.
As an example, within standard interpretations the trolley problem
generally elicits a utilitarian choice and the footbridge problem elicits a
deontological one; within a probabilistic framework, both problems might be
producing a utilitarian response. In the case of the trolley problem, diverting the
train to a different track would render the death of the five on the original track
very unlikely; furthermore, the death of the man on the second track is by no
means certain, as Foot herself noted in the original proposal of the dilemma.
Pulling the lever is therefore likely to lead to a very good outcome, in which at
least five of the men do not die.
In the footbridge problem, however, pushing someone in front of a train is
very likely to lead to their death, but given what most people know about trains
may also be seen as very unlikely to prevent the death of the five. Pushing
someone in front of the train would therefore be expected to lead to a worse
possible outcome than doing nothing, and should not be condoned by a good
utilitarian. That probabilistic utilitarian reasoning can lead to the standard
responses observed in experiments in ethics is easily shown; what has not yet
been demonstrated is that people do make such probabilistic inferences when
evaluating moral scenarios.
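How probabilistic utilitarian reasoning can produce the standard pattern is easy to see with a back-of-the-envelope expected-harm calculation; the probabilities below are purely illustrative assumptions, not values from the study:

    # Expected fatalities under hypothetical outcome probabilities. With
    # these (invented) numbers, minimizing expected deaths recommends
    # acting in the trolley case but refraining in the footbridge case.

    def expected_deaths(p_five_die, p_one_dies):
        return 5 * p_five_die + 1 * p_one_dies

    # Trolley: diverting makes the five's deaths very unlikely; the one
    # man's death is likely but not certain (as Foot noted).
    do_nothing = expected_deaths(0.95, 0.00)   # 4.75 expected deaths
    divert     = expected_deaths(0.05, 0.80)   # 1.05 expected deaths

    # Footbridge: pushing almost certainly kills the man but, given what
    # most people know about trains, is unlikely to stop one.
    keep_still = expected_deaths(0.95, 0.00)   # 4.75 expected deaths
    push       = expected_deaths(0.85, 0.95)   # 5.20 expected deaths

    assert divert < do_nothing   # utilitarian verdict: pull the lever
    assert push > keep_still     # utilitarian verdict: do not push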
Although some researchers have previously discussed the potential
relationship of probabilistic judgements to moral reasoning, they have generally
either dismissed such effects as covariates (such as Greene et al., 2009) or failed
to experimentally test the possibility (Sloman et al., 2009). This seems like a
curious omission. As such, we will report on the preliminary findings of an
ongoing study investigating whether there is a significant relationship
between the judged permissibility of a number of trolley problems and the
estimated likelihood of various possible outcomes within them.
A possible solution to the trolley problem: justice towards groups and the
timing of the attribution of individual positions
Florian Cova & David Spector
We propose a novel explanation of the well-known trolley problem. The existing
research has shown that, when faced with a hypothetical dilemma involving the
sacrifice of one life in order to save five, respondents tend to provide very
different solutions depending on the framing of the scenario (Hauser et al.,
2007).
We tested two variants of both the standard scenarios and variations upon
these scenarios (using both original and already existing cases) on over 600
participants. In the first variant, all individuals originally belonged to the same
group and their final positions were the result of late decisions. In the other
variant, the potentially sacrificed person and the potentially saved five belong to
different groups from the outset. We find that (i) respondents were more willing
to sacrifice one in order to save five if all individuals originally belonged to the
same group, and (ii) when not told anything about group membership,
respondents made implicit assumptions, which differed across scenarios. Also,
(iii) our results allowed us to rule out the traditional account in terms of the
doctrine of double effect (e.g., Mikhail, 2007).
On the basis of these results, we argue that the apparent incoherence of
answers to the dilemma in different but equivalent scenarios results from the
combination of diverging implicit assumptions about group membership and an
aversion to inter-group redistribution. We conjecture that this aversion results
from the combination of a mild omission bias, Rawlsian preferences, and
reasoning as if decisions were made behind a veil of ignorance that is pierced
after group membership is determined but before within-group positions are.
Additionally, we argue that the same factors can account for problems of
replication in the trolley literature, in particular about the famous “man-on-the-loop”
case (compare for example Hauser et al., 2007 with Greene et al., 2009).
The Philosopher in the Theater
Fiery Cushman
Where do moral principles come from? Some moral philosophers derive
principles by reflecting on their own intuitions. We propose that ordinary people
do much the same thing. Based on several case studies, we suggest that
people generalize explicit moral principles from automatic moral intuitions,
codifying reliable consistencies in their 'gut feelings'. Explicit moral principles
therefore reflect the basic structure of the cognitive systems that generate our
intuitive moral judgments. And, because intuitive moral judgments depend on
an assessment of causal responsibility and mental culpability, those same causal
and mental state analyses figure prominently in explicit moral
theories. Interestingly, certain psychological 'quirks' of reasoning about
causation and mental states that show up in our moral judgments therefore also
show up in our moral principles. In this sense, our moral principles reflect not
just facts about the world, but also peculiar structures of our minds. While the
"Cartesian theater" has sometimes been mocked as a psychological model, we
propose it as a useful analogy. There is a philosopher in the theater, trying to
make principled sense of her own moral intuitions -- among other psychological
systems -- putting on the show.
On the attribution of externalities
Urs Fischbacher
Do people blame or praise others for producing negative or positive
externalities? The experimental philosopher Knobe conducted a questionnaire
study that revealed that people blame others for foreseen negative externalities
but do not praise them for foreseen positive ones. We find that the major
determinant of the Knobe effect is the relative distribution of economic power
among the agents. We confirm the Knobe effect only in situations where the
producer of the externality holds the higher economic status and the positive
externalities are small. Switching economic power makes the Knobe effect
vanish. The Knobe effect is even reversed in settings with large positive
externalities. Our results are in line with theoretical predictions by Levine.
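The reference is presumably to Levine's (1998) model of altruism and spite, in which player i evaluates material payoffs u by the adjusted utility v_i = u_i + \sum_{j \neq i} ((\alpha_i + \lambda \alpha_j)/(1 + \lambda)) u_j. A hypothetical sketch follows; both the attribution to this specific model and all parameter values are assumptions made for illustration:

    # Sketch of a Levine-style reciprocal-altruism utility. alpha_i in
    # (-1, 1) is player i's altruism coefficient (negative = spite);
    # lam in [0, 1] weights reciprocity, so payoffs accruing to
    # altruistic others count for more. All numbers are invented.

    def adjusted_utility(payoffs, alphas, i, lam=0.45):
        v = payoffs[i]
        for j, (u_j, a_j) in enumerate(zip(payoffs, alphas)):
            if j != i:
                v += (alphas[i] + lam * a_j) / (1 + lam) * u_j
        return v

    # A recipient weighs the producer's payoff by the producer's perceived
    # altruism, so identical externalities can be evaluated differently
    # depending on who holds economic power and how altruistic they seem.
    print(adjusted_utility(payoffs=[10.0, -4.0], alphas=[-0.3, 0.2], i=1))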
Philosophical Dilemmas, Philosophical Personality, and Philosophy in Action
Adam Feltz
Perhaps personality traits substantially influence one's philosophically relevant
intuitions. This suggestion is not only possible; it is consistent with a growing
body of empirical research: Personality traits have been shown to be
systematically related to diverse intuitions concerning some fundamental
philosophical debates. This fact, in conjunction with the plausible principle that
almost all adequate philosophical views should take into account all available and
relevant evidence, calls into question some prominent approaches to traditional
philosophical projects. I explain how the growing body of evidence challenges
some of the uses of intuitions in philosophy, and I defend this challenge against some
criticisms of empirically based worries about intuitions in philosophy. I conclude
by suggesting two possibly profound implications. First, some dominant
traditional philosophical projects must become substantially more empirically
oriented. Second, much of philosophy ought to become substantially more applied.
Your Money or Your Life: Varying the outcomes in trolley problems
Natalie Gold, Briony Pulford & Andrew Colman
Trolley problems are used in order to probe intuitions about the morality of
harming one person in order to prevent a greater harm to others. Philosophers
use the moral intuitions derived from them to achieve a reflective equilibrium
between intuitions and philosophical principles, psychologists to investigate
processes of moral judgment and moral decision-making. Trolley problems
involve trade-offs between life and death, are unrealistic, and are hypothetical.
Do the results from trolley problems generalize to
harms other than death, to more familiar scenarios, and to judgments about real
events? We present three experiments designed to examine these questions. In
the first experiment, we present evidence that the difference in moral intuitions
between bystander and footbridge scenarios is replicated across different
domains and levels of physical and non-physical harm. In a second experiment,
we transplant the trolley problem to a more familiar scenario (also involving
financial harms), and discover that judgments are different depending on
whether the agent is a “bystander” (an onlooker) or a “passenger” (someone already
involved in the scenario), but that the change of context reverses the direction of
the effect. In the third experiment, participants make judgments about the
morality of trade-offs between small financial harms that are actually happening
in the lab as they make their judgments, enabling us to investigate
bystander-footbridge, actor-observer, and order effects in an
incentive-compatible scenario.
Emotional Arousal and Moral Judgment
Zachary Horne, Derek Powell
Moral psychological research has revealed that both reasoning and emotion play
a role in moral decision-making. A debate persists, however, about which of
these two processes is primary, and in what contexts. Strictly speaking, evidence
that emotion plays a role in moral decision-making is not evidence against the
role of reasoning, and vice versa. This study seeks to adjudicate the debate by
examining the causal relationship between people’s emotional states and their
moral judgments. Using self-report data from a commonly used emotion
measure (PANAS-X), we measured participants’ emotional responses to several
moral dilemmas and examined whether or not their emotional states could
predict their moral judgments. Our findings indicate that, compared to non-moral
control scenarios, moral dilemmas reliably cue emotions. Despite
the intense emotions participants experienced, their emotional reactions were
not predictive of their moral judgments. Our findings, in conjunction with related
findings, call into question models of moral cognition that seek to explain
people's behavior in moral dilemmas entirely in terms of emotional arousal.
Who Makes A Tough Call?: An Emotion Regulation Approach To Moral
Decision-Making
Joo-A Julia Lee
Moral dilemmas often invoke strong emotional reactions. McClure et al. (2007)
argued that detecting and regulating competition between cognitive and
emotional processes is associated with activity in a dorsal region of the anterior
cingulate cortex (ACC). Their model suggests that our brain may be involved in
constantly monitoring the conflicts between the two processes, and potentially
overriding our emotional reactions by regulating and controlling them. Drawing
on this conflict-monitoring hypothesis, we contribute to the theory by showing
that emotion regulation plays a key role in
making utilitarian decisions by inhibiting the influence of negative emotions that
are associated with the prospect of harm.
We first predicted that these emotional reactions affect our decisions in a
critical way by interfering with the deliberate, utilitarian decision-making process.
More specifically, we hypothesized that individuals who are told to regulate their
emotions would be more likely to make utilitarian decisions, compared to those
who are not asked to suppress their emotions. We also expected that the
relationship between one’s emotion regulation strategy and moral decisions
would be qualified by the extent of physiological arousal when considering the
moral dilemma such that the relationship between emotion regulation and
utilitarian decisions would be stronger among those who felt strong emotions.
Lastly, we hypothesized that people who regulate their emotions would
have heightened moral clarity perceptions in unrelated moral dilemma
situations, compared to those who do not regulate their emotions. We also
predicted that this relationship between emotion regulation and moral clarity
would be mediated by one’s revealed preference for utilitarian choice, thus
changing one’s moral judgment as well.
We tested our main hypotheses in two studies. In Study 1, we use a
correlational design and examine whether individual differences in emotion
regulation are correlated with one’s decisions in solving moral dilemmas. In
Study 2, we directly test the causal relationship between emotion regulation
strategies (in particular, suppression) and one’s preference for utilitarian
decisions in moral dilemmas, as well as one’s moral clarity judgment.
The two studies provided a critical link between regulating emotions and
moral judgment and decision-making in the dilemma situation. Not only did
participants make more utilitarian decisions when they were told to suppress
the strong emotional reactions they experienced, but their utilitarian preference
also carried over to increase their perception of moral clarity. Conversely, this
result indicates that strong emotional arousal during the video made participants
more averse to the utilitarian option, and that this aversion also reduced moral
clarity, leading participants to perceive the ethical dilemmas as more ambiguous.
Rational learners and non-utilitarian rules
Shaun Nichols
Hundreds of studies on moral dilemmas show that people’s judgments do not
conform to utilitarian principles. However, the exact nature of this
nonconformity remains unclear. Some maintain that people rely on deontological
“side constraints” that are insensitive to cost-benefit analysis. However, the
scenarios that are used to support this intuition, e.g., the magistrate and the mob,
contain an important confound. In these cases, we consider whether it is
appropriate for one person to violate a moral rule in order to prevent others
from committing similar violations. In that case, people tend to say that it would
be wrong to violate the rule. In a series of experiments, we showed that people
give very different responses when the question is whether an agent should
violate a moral rule so that she herself doesn’t have to commit more such
violations in the future. This suggests that a critical feature of our moral rules is
that they function in an intra-agent, rather than inter-agent manner. But this
raises a further question – why do our rules have this non-utilitarian character?
One prominent view (e.g. Mikhail 2007) holds that the structure of moral rules
plausibly depends on an innate moral grammar. We propose instead that given
the evidence that the young child has, a rational Bayesian learner would in fact
arrive at non-utilitarian rules.
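A toy "size principle" computation illustrates how such a learner could come to favour narrow, intra-agent rules: under strong sampling, a hypothesis assigns each consistent observation probability inversely proportional to the hypothesis's size, so narrower rules gain likelihood with every example. The hypothesis space and data below are illustrative assumptions, not the authors' model:

    # Toy Bayesian comparison of a narrow intra-agent rule ("do not commit
    # violations yourself") against a broad utilitarian rule ("minimize
    # violations by anyone"). Hypotheses are modeled as sets of prohibited
    # acts; the evidence is what a child plausibly observes: corrections
    # for her own violations only. All contents here are invented.
    narrow = {"you_lie", "you_hit", "you_steal"}
    broad = narrow | {"let_other_lie", "let_other_hit", "let_other_steal"}

    observed = ["you_hit", "you_lie", "you_steal", "you_hit"]

    def posteriors(hypotheses, priors, data):
        # Size principle: each consistent datum has likelihood 1/|H|.
        likes = [(1.0 / len(h)) ** len(data) if all(x in h for x in data) else 0.0
                 for h in hypotheses]
        joint = [p * l for p, l in zip(priors, likes)]
        z = sum(joint)
        return [j / z for j in joint]

    p_narrow, p_broad = posteriors([narrow, broad], [0.5, 0.5], observed)
    print(f"P(narrow rule | data) = {p_narrow:.3f}")   # ~0.94
    print(f"P(broad rule | data)  = {p_broad:.3f}")    # ~0.06
    # The narrow, non-utilitarian rule dominates: (1/3)^4 >> (1/6)^4.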
Investigating the Effects of Framing in Trolley Problems
Briony Pulford, Andrew Colman, & Natalie Gold
In an examination of judgments in trolley problems we studied 15 scenarios with
slightly different framings to disentangle different effects that may be
occurring. Via an on-line survey, 1,853 participants read one of the scenarios,
chose either Yes or No (indicating whether killing the one to save the five was
morally permissible), and rated attributes of the person who took the action, such as
whether they caused the one man to die or were to blame. Whether turning the
train is morally acceptable was also rated. We replicated the findings of Hauser,
Cushman, Young, Jin & Mikhail (2007) using the Ned, Oscar and Denise scenarios,
and found almost identical results for the percentage of people who agreed that
it was morally permissible to kill one person to save five. Possible confounds
with the scenarios, such as whether the train driver fainted and the gender of the
person in the scenario, were found to be non-significant and were excluded as
causes. More importantly, referring to the man as a ‘heavy object’ or not did not
significantly influence people’s judgments. Bystander-passenger differences
were clear in the side-track scenarios but non-significant in the loop-track
scenarios.
The self/other incongruity in moral psychology: data and implications
Regina Rini
Recent work in experimental philosophy (e.g. Nadelhoffer and Feltz 2008)
suggests that moral judgments display a self/other incongruity (which is
sometimes also called the actor/observer bias). That is, moral judgments of
particular actions seem to depend in part upon whether the subject imagines
herself, or someone else, in the position of actor. This result does not seem to
have received the attention it merits. In this paper I argue that these findings
present a potentially unique challenge to the reliability of moral judgment. In my
view, the self/other incongruity is not simply another cognitive bias. Rather, it
challenges a fundamental presupposition of contemporary western moral
philosophy: that moral judgments display universalizability. If our moral
judgments routinely do not assess all similarly-situated agents equally, then by
failing the universalizability requirement they simply do not qualify as moral
judgments at all! I discuss several plausible responses to this worrying thesis,
but suggest that resolution must await further empirical details regarding the
cognitive processes underlying the incongruity. I also present findings from my
own experimental research, attempting to replicate and generalize the results
from Nadelhoffer and Feltz, and revealing a previously unreported interaction
between the self/other incongruity and gender.
Deadly Risk: Probability and Harm in Moral Judgment
Tor Tarantola
Recent research in moral psychology has focused on the processes of causal
reasoning, theory of mind, and their integration in the formation of moral
judgments. However, relevant causal inputs have largely been investigated in
binary terms: either a consequence occurs or it does not, is caused by an action
or is not. The present research examines the role of harm probability
independent of theory of mind and consequence. How does an actor’s
probability of harming his potential victim, independent of his knowledge and
the ultimate consequence, affect a third party’s moral evaluation of his behavior?
Study 1 reports the results of an experimental survey (n=837) that shows that
punishment, but not wrongness, is attenuated by a reduction in this type of harm
probability. Study 2 reports the thematic analysis of four focus group interviews
that probed the extent to which this phenomenon was consciously
acknowledged. It also examines the process by which theory-of-mind and causal
considerations are integrated in deliberative moral reasoning. Implications for
theories of moral cognition are discussed, and a tentative model is proposed.
Psychopathic traits affect moral choice but not moral judgement:
experimental results on dilemma resolution
Sébastien Tassy, Christine Deruelle, Julien Mancini, Samuel Leistedt, Bruno Wicker
Psychopathy is a personality disorder frequently associated with antisocial
behaviours. Although psychopathy increases the probability of immoral
behaviour, studies on the influence of psychopathy on decision making during
moral dilemma evaluation have yielded contradictory results: psychopathy either
increased the probability of utilitarian responses or did not. Here, we propose
that judgement (abstract evaluation) and choice of behaviour may be underpinned
by two distinct cognitive processes, and that psychopathy could specifically alter
the moral choice of behaviour, leading to intact moral judgement but immoral
behaviour. To explore this hypothesis, we tested the effect of the level of
psychopathic traits in a student sample on both judgment (“Is it acceptable to ....
in order to....”) and choice of behaviour (“Would you .... in order to....?”)
responses to moral dilemmas. Results show that a high level of psychopathic
traits increases the probability of utilitarian responses for moral choice of
behaviour but not for moral judgement. A possible reason is that psychopathy,
which seems to alter the network involving the amygdala and the ventromedial
prefrontal cortex, tends to reduce empathy for the victims and would thus
increase the acceptability of making someone suffer to enhance aggregate
welfare. These results favour a dissociation of the cognitive processes that
govern choice of behaviour and judgement. They also explain the discrepant
results of previous studies on the effect of psychopathy on moral dilemma
resolution, which tested either moral judgement or moral choice of behaviour
indiscriminately.
Mortality Salience and Morality: Thinking About Death Makes People Less
Utilitarian
Bastien Trémolière & Jean-François Bonnefon
The dual-process theory of moral judgment postulates that utilitarian responses
to moral dilemmas require the mobilization of controlled processes that draw on limited
cognitive resources. In parallel, Terror Management Theory postulates that these
same resources are mobilized when one is reminded of one's own future
physical death (in order to suppress these thoughts out of focal attention).
Cross-pollinating these two perspectives, we predicted that people under mortality
salience (MS) would be less likely to give utilitarian responses to moral
dilemmas. Experiment 1 showed that the frequency of utilitarian responses was
lower when participants were presented with dilemmas involving non-lethal
harm than when they were presented with more traditional life-and-death
dilemmas. Experiment 2 introduced an exogenous manipulation of MS (asking
participants to jot down some brief reactions to the idea of their future physical
death) and used non-lethal harm dilemmas only. Participants under MS were less
likely to give utilitarian responses, compared to participants in a control group
who had to think about physical pain. Experiment 3 cross-manipulated MS and
three degrees of cognitive load. Results replicated the effect of MS and showed
that only extra-high load had a comparable effect on responses. The combination
of MS and extra-high load decreased the frequency of utilitarian responses by 40
percentage points. In addition to providing novel support to a dual-process
approach to moral judgment, these findings raise the worrying question of
whether mortality salience effects might shape private judgment and public
debates, by preventing full reflective attention to the available arguments, since
these issues often involve matters of life and death.
When Can('t) We Trust Our Moral Intuitions in Distributive Cases?
Alex Voorhoeve, Ken Binmore and Brian Wallace
I examine the reliability of our moral intuitions in cases in which we must choose
to save either (1) a smaller number of people from greater harm, or, instead, (2)
a larger number of people from lesser harm. Such choices involve tradeoffs
between two dimensions (the number of people saved and the severity of harm
from which they are saved) for which it is difficult to specify the right "rate of
exchange."
There is evidence that when faced with non-moral choices that involve this kind
of difficult trade-off, a significant proportion of people will use a heuristic known
as "similarity-based decision-making." Roughly, this involves neglecting
dimensions along which alternatives are similar, and basing one's decisions on
only the dimension(s) along which they are dissimilar (Rubinstein 1988). The
use of this heuristic can lead to an under-weighting of dimensions along which
alternatives are similar. It may also lead to violations of principles of rational
choice. This means that, insofar as they use this heuristic in intuitive moral
choices, people's intuitions are suspect.
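To fix ideas, the following is a toy version of such a similarity heuristic for two-dimensional options of the form (number of people saved, severity of harm averted); the similarity threshold and the fallback rule are illustrative assumptions rather than Rubinstein's exact formulation:

    # Similarity-based choice: neglect a dimension on which the options
    # are similar and decide on the dissimilar one. Threshold is invented.

    def similar(a, b, ratio=1.2):
        lo, hi = min(a, b), max(a, b)
        return hi <= ratio * lo          # within 20% counts as "similar"

    def choose(opt1, opt2):
        (n1, s1), (n2, s2) = opt1, opt2  # (people saved, severity averted)
        if similar(s1, s2) and not similar(n1, n2):
            return opt1 if n1 > n2 else opt2   # severity dimension neglected
        if similar(n1, n2) and not similar(s1, s2):
            return opt1 if s1 > s2 else opt2   # numbers dimension neglected
        return opt1 if n1 * s1 >= n2 * s2 else opt2  # fallback: aggregate benefit

    # 10 vs 11 people saved counts as "similar", so the choice is driven
    # entirely by severity and the extra person saved is ignored.
    print(choose((10, 50), (11, 30)))    # -> (10, 50)
    # Severities 50 vs 45 count as "similar", so that difference is ignored
    # entirely and the choice is driven by numbers alone.
    print(choose((10, 50), (30, 45)))    # -> (30, 45)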
I report results of what is, to my knowledge, the first experiment on the use of
this heuristic in moral decisions. Eighty-two subjects were asked to make a
series of health care allocation choices of the kind outlined. The data reveals
strong evidence for the use of a similarity heuristic in a significant proportion
(around 30%) of subjects. It also reveals that these subjects generally
underweight similar dimensions, leading to violations of principles of rational
choice and to morally problematic choices.
I conclude that we should not trust our moral intuitions in cases which involve
difficult tradeoffs and in which alternatives are similar along one dimension.
Motivated Moral Perceivers
Jen Cole Wright, Michelle DiBartolo, Annie Galizio, & Evan Reinhold
People often make snap (“intuitive”) moral judgments, later highlighting
information that confirms their judgment (aka “motivated moral reasoning”).
Our studies examined whether people are also “motivated moral perceivers”,
disproportionately attending to visual information consistent with their
pre-existing moral judgments when evaluating morally challenging situations.
Participants were presented with dilemmas in which a person/people must die
for another person/people to live and then with visual images of the people
involved in the dilemmas. We predicted participants would show a visual
preference for the person/people they had decided to save and/or avoidance of
the person/people they decided to sacrifice.
In study 1, we used the ERICA eye-tracking system to track 266
participants’ eye movements as they read the classic Trolley or Footbridge case,
followed by images of the individual man and the five workers (counterbalanced
on screen side) in the case. After viewing the images, participants were asked
how willing (on a 7-point Likert scale) they were to pull the switch/push the
man off the bridge to save the five workers. We found that participants who
glanced first at the workers reported being more willing to pull the switch/push
the man off the bridge than those who glanced first at the man. Thus, our results
confirmed that participants’ visual attention patterns were predictive of which
person/people they had decided to kill/let die.
A limitation of this first study is that, although we assumed participants
formed their judgment immediately upon reading the case (before seeing the
images), participants did not report their judgments until afterwards, so
there is no way to be sure. In study 2, therefore, we presented 115
participants with the Trolley case and had them answer the question before the
images. Once again, we found that participants whose first glance was at the
workers were more willing to pull the switch than those whose first glance was
at the individual.
These studies provide preliminary support for the hypothesis that people
display preference for visual information that is consistent with their moral
judgments. When confronted with a moral choice – namely, to kill an
individual in order to save a group – participants’ intuitive moral judgments (to
save or kill the individual) predicted where they first looked, both in the
Trolley/Footbridge cases and in those Baby/Villager cases that were preceded
by the Trolley case. In those cases, participants looked first to the individual(s)
they intended to save/not kill.