A misperception may simply be a misperception
Jonathan W. Page
Psychology, Dickinson College, Carlisle, Pennsylvania
Email: [email protected]
Office: 717 245 1974
Abstract
When a police officer shoots an unarmed suspect, the intentions of the officer often come
under scrutiny. The purpose of this paper is to suggest the possibility that such an
incident may be the result of discrete neural processing in subcortical and early visual
stages and not necessarily the result of a larger conspiracy on the part of the officer. A
brief description of how the human visual system works and the interlocking nature of
vision and action is presented. Based on current knowledge, it is suggested that in
situations of danger or threat the nervous system may produce rudimentary, variable
signals on which subsequent processing and action are based. Ultimately, this can lead to
misperceptions that guide motor actions. Based on the information presented, a scientific
hypothesis of how a police officer can mistake a non-weapon for a weapon is offered.
Potential ways to avoid misperceptions through research and training are explored and
discussed. This paper outlines a new way to look at mistake-of-fact shootings and should
be of interest to police investigators, officers, and administrators, and to researchers and
professionals in academia.
Keywords: Perception, misperception, visual error, lethal force, mistake-of-fact shooting
A misperception may simply be a misperception
“A misperception is a misperception” is a tautologous statement: it is true by
virtue of the form in which it is stated. In propositional logic this statement is true under
any assignment of truth values (a tree is a tree, a duck is a duck, love is love); in rhetoric it is
functionally unnecessary since it is repetitive and awkward, but it is nonetheless true. The
question raised here is: Does a logical, albeit rhetorical, statement such as this have
currency in the realm of policing? And if so, under what circumstances? In this
conceptual paper, a scientific hypothesis that allows such a statement to be seriously
considered in policing is presented. It is further argued that there are in fact situations in
which this hypothesis should be used as a first alternative to understand and explain
certain behaviors, like mistake-of-fact shootings.
1. THE PROBLEM
Most if not all of us have experienced a visual misperception in some form or
another. Usually the outcome is benign—we think we see a friend entering a store, but
when we approach we realize we were mistaken. We erred visually, no harm done.
However, when an on-duty police officer makes an obvious visual error in the course of
duty, the outcome is rarely benign. An example of a high-stakes misperception is when
an officer mistakenly shoots an unarmed suspect. In this example, the police officer
erroneously perceived that the suspect had a weapon and was therefore an immediate
threat to the officer’s safety. Oftentimes a tragedy such as this results from a non-weapon
(such as a cell phone) being perceived by the officer as a weapon (a gun or a detonator)
(see Sharps & Hess, 2008 for real examples). How can this happen, especially since the
stakes are so high? A cell phone is obviously a cell phone (another tautologous
statement); how can an officer mistakenly think it is a weapon? Many reasons have been
outlined (some will be mentioned below), but the possibility of a misperception being
simply that—a misperception—is seldom considered. It may seem too simple an
explanation to offer in defense of such an unfortunate incident, so other, more consequential
reasons are sought.
When a perceptual error like mistaking a cell phone for a gun occurs, the officer is
left to justify his or her actions. Having to provide reasons for such a tragic mistake can
be difficult; the justification bar is necessarily high. Ultimately, the officer may be
required to provide reasons in criminal court, or at the least during civil proceedings. In
either case the intentions of the officer will be explored in depth, with the overarching
theme probably being that the officer could have decided on some level not to shoot the
unarmed suspect. The officer will have to defend his or her intentions. In our benign
example of a visual misperception this would seem silly: Did you really intend to mistake
the stranger for a friend? Could you have decided not to make this visual error? The real
problem occurs when one must take decisive action based on a perception. For a police
officer, that action is sometimes lethal force, and that lethal force may sometimes,
unfortunately, be based on a misperception.
One caveat to this paper is that the word “error” is used rather liberally throughout
to refer to mistakes in judgment. As used here, error refers to a mistake in judgment that
can be evaluated after the fact. If a police officer says a suspect pulled a gun from his or
her waistband, the accuracy of the statement can (usually) be factually checked at a later
time. If indeed the suspect had a gun, then the officer’s initial perception can be judged as
correct; on the other hand, if it is later determined that the suspect actually pulled a cell
phone from his or her pocket, then the initial perception of a gun can be judged as
erroneous.
Defining the term “error” as used in this paper is important for several reasons,
one being that currently there is not a single agreed upon definition. In the field of
experimental psychology for example, the concept of “human error” is debated in large
part because of the elusiveness of an acceptable definition. One way to define error,
especially errors in neural processing, is as naturally occurring variability. Proponents of
this view argue that this definition is ecologically sound because, for one thing, it allows
for learning—something we as humans are very adept at. In fact, several topics in the
field of psychology are based on trial-and-error learning. Another way to define error is
according to engineering and information processing terms where an error means there
has been a breakdown in the system. Blame for the error is then assigned to the point
where the breakdown occurred. In the fields of human factors and industrial safety, the
view of “errors” aligns almost unanimously with this latter definition. This view also seems to
dominate the legal and judicial systems. It is probably for this reason that decision-making is so scrutinized in lethal force encounters.
One of the problems with defining an error as a breakdown in the system is that it
implies that the problem can be fixed and thus avoided in the future. For police officers
involved in mistake-of-fact shootings, this means that they are by definition at fault for
their “error,” and therefore could have taken steps to avoid such a mistake. The main
theme of this paper is that there may be other causes of errors that occur well before the
decision-making stage of cognitive processing; namely, endogenous variability in
perceptual processing. If this is true, then the argument naturally turns to a debate of
semantics (i.e., is it truly an “error” if the officer actually perceived a weapon, even
though it was later determined there was no weapon?). Hopefully a more thorough debate
on this topic as it relates to law enforcement will be undertaken in the near future.
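To make the idea of error as endogenous variability concrete, consider the following illustrative sketch (written in Python; it is not part of the original argument, and every numeric value in it is an arbitrary assumption). A percept is modeled as a true stimulus value plus normally distributed neural noise, compared against a fixed decision threshold. Even though every component behaves exactly as designed, some proportion of trials still ends in a misclassification; on this view the "error" is simply variability, not a breakdown.

```python
import random

def noisy_percept(true_signal, noise_sd=0.3):
    """Internal signal: the true stimulus value plus endogenous
    neural variability, modeled here as Gaussian noise."""
    return true_signal + random.gauss(0.0, noise_sd)

def classify(percept, threshold=0.5):
    """Binary judgment: report 'weapon' if the signal exceeds threshold."""
    return percept > threshold

# A harmless object yields a weak "weapon-like" signal (0.3, arbitrary).
# With no broken component anywhere, noise alone pushes the percept
# over threshold on a minority of trials.
trials = 100_000
false_alarms = sum(classify(noisy_percept(0.3)) for _ in range(trials))
print(f"Misperception rate: {false_alarms / trials:.1%}")
```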
2. OUR PERCEPTUAL SYSTEM
In order to consider the idea that a misperception is simply a product of the
nervous system, it is important to have at least a cursory understanding of the human
perceptual system. A detailed description of human perception is not possible here
because of space limitations (volumes have been written describing the intricacies of the
different aspects of our perceptual systems). Instead, vision will be presented and
discussed as a representative perceptual system and a brief overview of the visual system
and relevant visual processes will be described. References for further readings in this
area will be given for those who are interested in a deeper-level of understanding.
The path that visual information takes, starting from when light first strikes the
eye, leads from the eyes to subcortical structures, with the bulk of the information
(roughly 90 percent) eventually being sent to a small cortical area at the very back of the
head known as the visual cortex. The visual cortex is also referred to as area V1 because
it is the first cortical area to receive input from the eye. From V1 the information flows in
two general directions: (1) the dorsal pathway, projecting towards the top of the head,
processes information relating to the “where” and “how” of our visual world—where
objects are located in space and how we can interact with those objects; (2) the ventral
pathway, projecting towards the ears, helps us discriminate objects regardless of their
location in space and is known as the “what” pathway (for in-depth reviews of these
processes, see Farah, 2000; Nolte, 2002; or Rolls & Deco, 2002).
It is tempting to view this progression of visual information down the various
pathways as just that—a progression of information in discrete and sequential stages,
each new stage building upon or adding to information from the previous stage.
However, this is not how our system works. In general, our system processes in parallel
rather than serially and has an astounding reciprocal feedback system built in. For
example, the lateral geniculate nucleus (LGN), located in the thalamus, is a
midway point between the eye and V1. Visual information is carried along the optic
nerves from the retina of the eye to the LGN, where it is processed and
relayed via the optic radiations traveling from the LGN to V1. Originally it was thought
that this area was more or less a relay station simply passing signals on from the eye to
V1. Researchers soon learned that it is not that simple. The LGN actually receives more
feedback projections from V1 than it sends (Rockland, 2002). And this seems to be the
rule rather than the exception.
What is the purpose of this massive feedback system? The answer actually
depends on where you are looking. For the early stages of visual processing, like the
V1/LGN feedback system just mentioned, the feedback connections increase the gain of
cell responses to light, helping the visual system lock-on to specific patterns of light for
more efficient processing (Sillito et al., 1994). At this stage, feedback simply enhances or
modulates normal visual processing. However, the role of feedback seems to be much
different at later stages. Research has shown that areas immediately surrounding V1 (like
area V4), as well as areas distal to V1 (like the anterior inferotemporal (AIT) area) are
influenced by cognitive processes such as attention, memory, and context (Moran and
Desimone, 1985; Miller et al., 1993). Cognitive influences at early visual stages can only
be induced via a strong feedback system. In other words, if visual processing were strictly
sequential, cognitive processes would occur much farther down the line and would
therefore not be able to influence early computational stages at the point of initial
processing (see Desimone and Duncan, 1995, and Bruce et al., 1996 for reviews).
It is now obvious to vision researchers that both bottom-up and top-down
processes determine our perceptions. Bottom-up refers to the physical dimensions of our
sensory world that shape what we perceive; our perceptions are driven by physical
stimuli (light) from the environment. Our knowledge, goals, memory, and the context of
our perceptions shape what we perceive in a top-down manner. Some perceptual theories
focus more on the top-down aspect and are known as “conceptually driven” models (see
Bruce et al., 1996 for a review); others, like Marr (1976, 1982), have focused more on
bottom-up or “data driven” processes. But both sides agree that bottom-up and top-down
processes work together in a dynamic way to help us identify objects, resolve ambiguity,
and move and act in our world.
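One standard way to formalize how top-down and bottom-up signals combine is Bayes' rule: context supplies a prior, the stimulus supplies a likelihood, and the percept reflects the posterior. The sketch below is a minimal illustration of this combination under invented numbers; it is offered as an aid to intuition, not as the mechanism proposed by the models cited above.

```python
def posterior_weapon(prior_weapon, p_evidence_given_weapon, p_evidence_given_phone):
    """Combine a top-down prior (context, expectation) with bottom-up
    likelihoods (ambiguous sensory evidence) via Bayes' rule."""
    p_w = prior_weapon * p_evidence_given_weapon
    p_p = (1 - prior_weapon) * p_evidence_given_phone
    return p_w / (p_w + p_p)

# The same ambiguous evidence (likelihoods 0.6 vs 0.4) read under two
# different contexts. All numbers are arbitrary assumptions.
print(posterior_weapon(0.05, 0.6, 0.4))  # calm context: ~0.07
print(posterior_weapon(0.50, 0.6, 0.4))  # threatening context: 0.60
```

Identical bottom-up input thus yields very different conclusions once a threatening context raises the prior.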
Also important to understanding how the human visual system works is
knowledge of the information used by this system. Our visual system builds internal
representations of our environment using the physical dimensions of light and reflected
light. Five types of spatial contrast have been identified that help segregate information
from light into understandable and recognizable images. They are: luminance, color,
texture, motion, and binocular disparity. These stimulus properties define the type of
information carried throughout the visual pathways. Discrete functional units, and in
some cases structural units, have been identified in various areas of the visual system
(starting with the retina itself) that process one or more of these dimensions. Successful
identification of these spatial contrasts is necessary for perception; ambiguous
information in these areas can lead to misperceptions. Regan’s (2000) book gives an
excellent description of each of these processes.
As the parameters of light energy change, perception changes as well. When using
vision in daylight or well-lighted indoor conditions, only the cone photoreceptors in the
retina of the eye respond to light. This type of vision is referred to as photopic vision.
Scotopic vision refers to vision under very low light levels, where only the rod
photoreceptors in the retina respond to light. Rod-dominated vision is much different from
cone-dominated vision because of the functional differences between these two types of
photoreceptors. For example, cones have high visual acuity and carry color information;
rods are much more sensitive to light and thus signal changes in luminance (brightness
levels), but carry no information relating to color (see Bruce et al., 1996 for an in-depth
review). A third type of vision in the midrange region that overlaps photopic and scotopic
vision is mesopic vision. Mesopic vision is much more complicated to measure and
define. In this luminance range, rod and cone photoreceptors operate in tandem and
perception is determined via a combination of their inputs (Baylor, 1987). But their
interaction is more complicated than a simple additive function of photoreceptor inputs.
For example, there are large temporal differences in how the two photoreceptors carry
information (Stockman & Sharpe, 2006) causing the visual system to behave differently
than would be expected from simply adding inputs together. Additionally, there are at
least two post-receptoral rod pathways that process luminance differently (e.g., Conner,
1982; Sharpe & Stockman, 1999; Stockman et al., 1991, 1995). At low light levels, a
sensitive but more sluggish rod pathway carries light information; a second less sensitive
but faster rod pathway is used in higher mesopic light levels (for a review of low-light
vision, see Hess et al., 1990).
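The two-rod-pathway finding can be caricatured as a trade-off between sensitivity and speed. The toy selector below encodes only the qualitative pattern just described (a sluggish but sensitive pathway at the lowest light levels, a faster but less sensitive pathway at higher mesopic levels); the threshold and latency values are invented placeholders, not measured quantities.

```python
def dominant_rod_pathway(luminance_cd_m2):
    """Toy selector for which rod pathway dominates at a given light level.
    The cutoff (0.01 cd/m^2) and latencies are illustrative placeholders."""
    if luminance_cd_m2 < 0.01:
        return {"pathway": "slow, sensitive", "relative_latency_ms": 200}
    return {"pathway": "fast, less sensitive", "relative_latency_ms": 100}

print(dominant_rod_pathway(0.001))  # near-starlight: sluggish but sensitive
print(dominant_rod_pathway(0.1))    # high mesopic: faster pathway
```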
To summarize this section: (1) Visual information is processed in parallel rather
than serially with a tremendous amount of reciprocal feedback between processing
stages. (2) Reciprocal feedback modulates perception at all levels of vision. The result is
a system highly influenced by cognitive, top-down processes even at early computational
stages. (3) Normal vision functions differently in low-light conditions.
3. PERCEPTION AND ACTION
Arguably, the main purpose of the visual system is to guide goal-directed motor
movements. From an evolutionary standpoint, this seems to be the only purpose. Without
the ability to move through the environment, what would be the advantage of perception?
Supporting this position is the fact that visual and motor cortices in the brain develop in
concert in the early years of life. Some researchers (e.g., Regan, 1989) have suggested
that it may be necessary to consider action when studying perception, since some
perceptual properties are better understood in light of motor actions and the
organization of the motor cortices.
To help understand the dynamic relationship of perception and goal-directed
motor action, theorists have developed schemata that describe this process. One of the
original models was developed to describe differences between conscious action and
reflexive reaction (MacKay, 1965) where both feedback and feedforward mechanisms
were proposed. Later research (e.g., Miall & Wolpert, 1996; Wolpert et al., 1998)
confirmed the general idea of this model by showing that a brain structure called the
cerebellum actually contains internal models of the motor system that, among other
things, use information of the current state of the motor system (bottom-up feedback) to
predict the consequences of potential actions (top-down expectations). This information
is then used in a feedforward manner to react quickly to incoming stimuli. Having such a
system allows the brain to overcome time delays inherent in a strictly feedback system.
The unimaginable catches by close-to-the-wicket fielders in cricket and the extremely
quick reactions by goaltenders in ice hockey are easier to understand when described by
such a feedforward system (Regan, 2000). Therefore, it seems that at least part of the
advantage of an interlocking perception/action system is the capacity to anticipate and
react, which shortens reaction time. It is also an excellent example of how bottom-up
and top-down processing work together to influence our perceptions and actions.
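The feedforward idea can be sketched in a few lines: an internal model predicts the sensory consequence of an outgoing motor command immediately, so the controller need not wait for delayed feedback from the periphery. The following toy uses assumed linear dynamics and arbitrary units; it is a schematic of the concept, not the cerebellar models of Miall and Wolpert.

```python
def forward_model(position, velocity, motor_command, dt=0.05):
    """Internal prediction of the next state given the current state
    and an outgoing motor command (toy linear dynamics)."""
    new_velocity = velocity + motor_command * dt
    new_position = position + new_velocity * dt
    return new_position, new_velocity

# Sensory feedback may arrive ~100 ms late; the forward model supplies a
# usable state estimate immediately, so a fast reaction (a catch, a save)
# can be planned before the delayed signal ever arrives.
pos, vel = forward_model(position=0.0, velocity=1.0, motor_command=2.0)
print(f"Predicted state: position={pos:.3f}, velocity={vel:.3f}")
```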
According to Milner and Goodale (2006), two of the foremost leaders in this field,
perception can modulate action both through conscious awareness and through
unconscious “online” vision. The two systems operate somewhat independently of each
other and have separate neural systems guiding their functioning. The theory suggests
that perception guides our actions by way of conscious top-down processing that takes
into account our prior knowledge and experiences as well as our current mood, emotions,
and preferences; and perception directs our reactions to events at a level below conscious
processing that is more bottom-up, or data driven. That perception can guide action in a
top-down manner has been studied in some detail (for example, see Findlay & Gilchrist,
2003; or van der Heijden, 2004). How the unconscious online system influences
perception has not received as much research attention, but this may be changing. There
is recent evidence of a subsystem that modulates perception and action at a level below
our conscious awareness. A “fear module” has been proposed that is centered in the
amygdala and operates automatically below conscious awareness to respond to threats to
our survival (see Öhman & Wiens, 2004 for a review). This idea is supported by the fact
that about 10 percent of the optic projections leaving the back of the eye travel to this
area of the brain; the amygdala (and other limbic structures) thus receives direct visual
input. Moreover, a system that influences perception and action at a level below conscious
awareness helps explain breakdowns in cognitive processes—like errors in memory,
judgment, and attention, for example—that occur during high-anxiety, stressful events
(Bradley et al., 2000; Fecica & Stolz, 2008; Morgan et al., 2004; Neuberg & Cottrell,
2006).
To summarize this section: (1) The brain can anticipate events using internal
models of the motor system to predict the consequences of motor actions, producing very
quick reactions. (2) Action can be guided by top-down, cognitive influences on
perception. (3) Action can be guided by unconscious online vision (better described as
“reactions”). (4) A fear module modulates perception and action at a level below
conscious awareness and responds to threats to our survival.
4. SO HOW CAN AN OFFICER MISTAKE A CELL PHONE FOR A GUN?
Prior research has outlined a number of causes for mistake-of-fact shootings.
Most of these are thought to influence the decision process of whether, or when, to act on
a perceived threat. They can be grouped into three broad categories: officer variables,
suspect variables, and environmental factors. Officer variables include such things as age,
gender, ethnicity, the context in which the event is occurring, how the officer thinks
about and understands the situation, and the level of stress the officer is experiencing.
Suspect variables include background characteristics of the suspect like age, gender, and
ethnicity, but additionally include how the officer views the suspect taking into account
such things as bias, culture, and stereotypes. Environmental factors encompass such
things as lighting, temperature, and the influence of other people in the area (for a review
of these processes, see Vrij & Barton, 2004).
The hypothesis of this paper, however, is that misperceptions leading to mistake-of-fact shootings can result from neural signals within the perceptual system. If early
visual processing can produce signals containing error, all subsequent processing will be
based, at least initially (remember the massive reciprocal feedback system that may
eventually correct errors), on these faulty signals. While officer variables, suspect
variables, and environmental factors may influence perception leading to mistake-of-fact
shootings, the emphasis in this paper is put on how these factors contribute to producing
faulty signals in the early processing stages of perception. In other words, it is suggested
here that officer and suspect variables and environmental factors may influence
perception and action through their early input into the perceptual/motor systems (via
bottom-up and top-down processes) rather than by influencing decision-making at a later
stage on a higher cognitive level.
It has been shown unequivocally that top-down processes work to influence our
perceptions. As mentioned above, this has even been shown at the early visual processing
area V4 (Moran and Desimone, 1985). This means that conceptual information—like
knowledge gained through experience, or sets of beliefs and expectations based on the
context of the situation—can modulate the computational bottom-up sensory signal at
early processing stages. It has even been shown that imagination modulates early visual
processing in a manner similar to perception (Page et al., submitted), suggesting that
what one “thinks” can influence what one “sees.” For the police officer responding to a
threat, this means that information received prior to arrival at the scene (from the
dispatcher or from fellow officers), the officer’s knowledge of the situation gained
through training and experience on the job, the context and setting of the event (area of
the city, time of day, temperature, etc.), the mood and emotional state of the officer, and
even the officer’s imagination can all potentially influence what the
officer perceives, thus having some bearing on what the officer does.
In addition to conscious perception, unconscious online vision can also influence
an officer’s actions. The fear module of the amygdala responds automatically (without
conscious input) to danger and threat. The purpose of this mechanism is to detect and
respond to danger in a manner that produces the best chance of survival for the agent.
Looking at it from this point of view, it makes more sense for survival to mistake a cell
phone for a gun than to mistake a gun for a cell phone. And the online visual system does
make visual errors based on its poor visual acuity. The amygdala is notoriously bad at
discriminating objects (for an interesting description of this in action, see Johnson, 2004).
This means that for ambiguous stimuli in a threatening context, the amygdala must
always assume the worst because doing so actually benefits survival. For example, when
walking through your yard you might jump at the glimpse of a garden hose in the grass
thinking initially that it is a snake. Your online visual system (via the amygdala) “sees” a
long, skinny, winding object and alerts your fight-or-flight system that danger lurks in the
green grass. Your interpretation of this initial danger signal is that the long, winding
object is a snake. Making this assumption at the outset aids in your survival (i.e., it would
be better to misidentify a garden hose as a snake than to misidentify a snake as a garden
hose). However, this is not the end of the story. Subsequent feedback from conscious
perception identifies the object as a garden hose, over-ruling the initial misidentification
of the online visual system. You can quickly correct your initial misperception through
conscious visual discrimination. This interplay between conscious perception and the
online visual system for threatening stimuli has been demonstrated experimentally
numerous times (e.g., Killgore & Yurgelun-Todd, 2004; Ohrmann et al., 2007; Phillips et
al., 2004; Williams et al., 2005).
It may seem that the online visual system would be more beneficial if it had better
acuity; that way, misidentifying a garden hose as a snake would not happen in the first
place. However, if this system had better visual acuity, the additional neural processing
needed to sharpen the visual world would take much more processing time than it
currently does, giving you less time to respond to the snake in the grass. So less time
spent on perceptual processing leads to more time available for motor action. The low
visual acuity of the system also allows you to generalize across categories so that
anything “snake-like” raises the alarm. An old radiator hose or a bent branch that
has fallen from a tree can also lead to the initial perception of a snake. Again, the online
visual system—guided in large part by input from the fear module in the amygdala—
always seems to assume danger. There would be no survival benefit, in other words, in
distinguishing a garden hose from a radiator hose.
Physical variables can influence perception as well. For example, changes in the
amount of available light in the environment can affect vision in a number of ways.
Switching from photopic to scotopic vision leads to changes in how the visual system
processes light stimuli. In the laboratory, the effects of varying illumination can be
isolated and studied much more easily than in the field. Researchers generally accomplish this
using static images, like having participants look at visual patterns under bright and dim
lighting. In the real world, however, visual processing is rarely performed in completely
static conditions. Motion of some type is almost always present. Two main sources of
motion in the real world that are relevant to this paper are radial flow (the flow of
information resulting from the person moving through the environment) and biological
motion (patterns of movement associated with other living beings). Both are hindered in
low light conditions. Researchers studying motion under low light conditions have found
that object perception based on motion and biological motion are both impaired at low
light levels (Grossman & Blake, 1999; Takeuchi et al., 2004), partly because the visual
system becomes more sluggish in low light conditions (Dawson & Di Lollo, 1990;
Takeuchi & De Valois, 1997).
Studies have also shown that speed discrimination and perceived speed are
deficient in dim lighting (Hammett et al., 2007; Takeuchi & De Valois, 2000). In fact, it
is estimated that under mesopic conditions, perceived velocity is underestimated by as
much as 20 percent (Gegenfurtner et al., 1999, 2000). This means that biological
motion—like a suspect raising a weapon—unfolds much faster in dim lighting than you
actually perceive it to. Studies have further shown that an accurate estimation
of biological motion is best given under photopic conditions, followed by scotopic
conditions, and then mesopic conditions (e.g., Billino et al., 2008). This is interesting
because the ordering of accuracy is not intuitive; it seems that performance should
decrease incrementally as lighting decreases with the best performance being in the
brightest conditions and the worst in the darkest conditions. This is clearly not the case:
Billino and colleagues (2008) found that performance is best in fully lighted conditions,
next best in very dark conditions, and worst in semi-dark conditions. In other words, you
can better discriminate biological motion under very low light conditions (like starlight)
than you can under relatively lighter conditions (like bright moonlight or dim street
lights).
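A back-of-the-envelope calculation makes the operational stakes of a roughly 20 percent underestimation clearer. The movement duration below is an assumed figure for illustration only; the arithmetic simply scales duration by the reported underestimation factor.

```python
def perceived_duration(actual_duration_s, underestimation=0.20):
    """If perceived speed is ~20% lower than actual speed, the same
    movement is experienced as unfolding proportionally more slowly."""
    return actual_duration_s / (1.0 - underestimation)

# An arm movement that actually takes 0.5 s is experienced as if it took
# ~0.625 s: the event outruns the percept by roughly 125 ms.
actual = 0.5
print(f"Actual: {actual:.3f} s, perceived pace: {perceived_duration(actual):.3f} s")
```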
Errors in perception can also be exacerbated by such things as stress and time
constraints. Time constraints cut short feedback from conscious perception, eliminating any
potential input that might help correct misperceptions. Stress robs the cognitive system of
neural resources so that more processing can be devoted to mounting a physiological
response to the stressor. The result can be a narrowing of attention (Page, submitted;
Strayer et al., 2003) or memory deficits (Morgan et al., 2004), for example, or a decline
in performance such as decrements in speed, accuracy, or other evaluative measures
(Salas et al., 1996). In fact, research has shown that different types of stress interfere with
a variety of cognitive processes (Garavan, 1998; Garcia-Larrea et al., 2001; Lansdown et
al., 2004; McElree, 2001; Sliwinski et al., 2006; Verhaeghen & Basak, 2005). In
dangerous or threatening situations, like what many police officers face in the course of
duty, high levels of stress and anxiety can cripple the rational conscious perceptual
system leaving the unconscious online visual system to direct motor actions.
5. A HYPOTHETICAL SCENARIO
Taken as a whole, the discussion to this point has established the possibility that
visual processing leading to action can be based on misperceptions. As a hypothetical
example of how this may work, assume that an officer is responding to a call at a fast
food restaurant where a male suspect is said to be threatening the employees and patrons.
The officer is warned that the suspect may be armed and is dangerous. The responding
officer enters this potentially volatile situation with a plethora of information and
knowledge that act to determine the context of the event. This knowledge is processed in
a top-down manner and influences what is perceived. The importance of the context of the
situation cannot be overstated.1 There are many factors that help determine context: the neighborhood in
which the fast food restaurant is located adds contextual information as does the physical
appearance of the suspect; pre-event information from the dispatcher and the officer’s
experience level add to this as well, giving the officer a “gut feeling” of how the scenario
will play out (more specific contextual information could be added to this example; the
possibilities are vast). Adding environmental factors—such as low lighting conditions,
hot (or cold) air temperatures (which impact an officer’s physiological system), or the
number of employees and patrons whose safety the officer must consider—may increase
the chances of a misperception occurring. Furthermore, there is little doubt the police
officer will be experiencing relatively high levels of stress at this point, which will divert
higher-order cognitive resources toward responding to the threat. If stress levels are
high enough, the officer will experience attentional tunneling to some degree. The result
is a perceptual/motor system relying more on the online reactive system than the
conscious perceptual system.
1 How many police officers mistakenly shoot someone pulling a cell phone out of their pocket
during a casual lunch in a nice restaurant where no signs of danger or threat exist? Misperceptions
occur in the midst of high-tension, adrenaline-dominated situations for many reasons, the main
reason being context.
Now consider bottom-up input as the officer confronts the suspect. Online vision
may detect movement of the suspect’s hands around his waistline or may notice that his
hand is gripping an object. Remember that acuity for the fear module subsystem is low
and a dangerous interpretation of an event increases the chance of survival. Any action
taken by the officer at this point in time will probably be initiated by input from the
online visual system. Action initiated by this system is reflexive and, given the nature of
the cerebellum that helps us predict and anticipate events, is potentially quick in its
response. The conscious perceptual system probably has very little influence over action
at this point. The massive visual feedback system that generally overrides misperceptions
(like correcting the misidentification of the garden hose as a snake) may not be able to
override the fear module and influence motor action in this case because of time
limitations and limited neural resources. In this hypothetical scenario, the suspect, who
happens to be holding a cell phone, raises his hands above his head to comply with the
officer’s orders and is shot by the officer who perceives this movement (raising a
“weapon”) as threatening. The officer simply reacted to ensure survival. An unarmed
suspect was shot because the officer mistook a cell phone for a weapon.
6. IS THERE A WAY TO REDUCE PERCEPTUAL ERRORS?
A misperception can occur at many stages in the above scenario. The most
obvious starting point is the online visual system. This system may initially signal
“weapon” in response to the cell phone based on neural processing from the fear module.
Contextual information from pre-event briefings and suspect characteristics can also
increase the likelihood of this error in a top-down manner. Subsequent processing is then
based on this misperception. Given time, the massive reciprocal feedback system may
correct this misidentification; however, time constraints are always, or nearly always, a
major factor in a critical incident. An officer simply does not have time to wait and see if
the “weapon” is actually a weapon; he or she may have already taken action by the time
the feedback system can correct the visual mistake. But even if time is available in which
the feedback system can operate, it does not guarantee that the conscious perceptual
system will override the online system. In situations of danger or threat, cognitive
resources are diminished leaving the conscious perceptual system disadvantaged. Tunnel
vision, deficits in decision-making, and impaired memory for the event generally result.
A good question to ask at this point is: Is there a way to reduce perceptual errors
during critical incidents involving police officers? The answer is yes, but it will take a
commitment to research. Research provides a solid basis for discovering the principles
that underlie perception and action during lethal force encounters. Understanding the
interlocking nature of perception and action, and how the two operate in dangerous or
threatening situations, points toward best practices for policy and
training. Training then leads to the development of skills necessary to deal with such
circumstances. In fact, training is the link between research and the practical application
of research findings. For example, training can help officers understand how visual
misperceptions come about; understanding how the cognitive, perceptual, and motor
systems work under stress gives officers a realistic view of what they can expect of themselves
and others in such situations. Training can also teach methods for managing stress,
which may increase the input of the conscious perceptual system by reducing
physiological effects such as tunnel vision. Giving more weight to the conscious
perceptual system may potentially allow the visual system’s massive feedback network to
correct initial misperceptions. Further research is needed to test this possibility. Training
may also provide other cognitive tools to help officers avoid perceptual errors in the first
place. But overall, research can provide a broad understanding of the issues discussed in
this paper to direct training curricula, guide initial inquiries into mistake-of-fact
shootings, and establish a strong basis for a legal defense when necessary.
7. CONCLUSION
Presented in this paper is a scientific hypothesis based on a tautologous statement
that a misperception is simply a misperception. It is argued that this hypothesis should
initially be accepted—in theory and in practice—when considering mistake-of-fact
shootings by police officers. Since it is only a hypothesis at this point, one might question
the accuracy and logic of the arguments presented. As for accuracy, the arguments
presented here are based on the numerous research studies referenced throughout this
paper that describe the current understanding of how our perceptive and motor systems
work. The evidence is compelling. Thus, the logic connecting these studies does not need
to be far-reaching. But the good news is: The hypothesis offered does not need to be
accepted by way of strong appeals to reason and logic nor does it need to be accepted
simply on good faith, it is fully testable. Hopefully, other researchers will be intrigued by
the conceptualization put forward in this paper and begin vigorous testing of the
hypothesis presented.
Based on the evidence presented in this paper, it is recommended that
investigators looking into police use of lethal force involving mistake-of-fact shootings
start from the premise that a misperception is simply a misperception, and not seek out
more complicated explanations unless they have solid reasons for doing so. The reason
for suggesting that this premise should always be the starting point is based on the law of
parsimony. The law of parsimony is a scientific edict for understanding our world. It
states that the simplest, most straightforward explanation should always be accepted over
more complicated ones. In this case, the most straightforward explanation for mistake
shootings is misperceptions based on automatic processing at subcortical and early visual
stages. It is much more complicated to describe such an incident based on higher
processes of cognitive decision-making (and, arguably, not as accurate). According to the
law of parsimony, if two descriptions of an event are equally convincing, the simplest one
should be accepted. The contention of this paper is that this rule should also be followed
during formal critical incident investigations where a misperception is claimed. If,
during the course of an investigation it becomes evident that a mistake-of-fact shooting
was actually not a mistake, then by all means other explanations that better fit the facts
should be sought. But for an initial starting point, plenty of scientific evidence exists to
assume that a mistake-of-fact shooting may in fact be based on a simple misperception.
References
Baylor, D.A. (1987), “Photoreceptor signals and vision. Proctor Lecture”, Investigative
Ophthalmology & Visual Science, 28, 34-49.
Billino, J., Bremmer, F., & Gegenfurtner, K.R. (2008), “Motion processing at low light
levels: Differential effects on the perception of specific motion types”, Journal of
Vision, 8(3), 1-10.
Bradley, B.P., Mogg, K., & Millar, N.H. (2000), “Covert and overt orienting of attention
to emotional faces in anxiety”, Cognition & Emotion, 14(6), 789-808.
Bruce, V., Green, P.R., & Georgeson, M.A. (1996), Visual perception: Physiology,
psychology, and ecology, (3rd Ed.), Psychology Press, East Sussex, UK.
Conner, J.D. (1982), “The temporal properties of rod vision”, Journal of Physiology, 332,
139-155.
Dawson, M., & Di Lollo, V. (1990), “Effects of adapting luminance and stimulus contrast
on the temporal and spatial limits of short-range motion”, Vision Research, 30,
415-429.
Desimone, R., & Duncan, J. (1995), “Neural mechanisms of selective visual attention”,
Annual Review of Neuroscience, 18, 193-222.
Farah, M.J. (2000), The cognitive neuroscience of vision, Blackwell, Malden, MA.
Fecica, A.M., & Stolz, J.A. (2008), “Facial affect and temporal order judgment”,
Experimental Psychology, 55(1), 3-8.
Findlay, J.M., & Gilchrist, I.D. (2003), Active vision: The psychology of looking and
seeing, Oxford University Press, New York, NY.
Garavan, H. (1998), “Serial attention within working memory”, Memory & Cognition,
26(2), 263-276.
Garcia-Larrea, L., Perchet, C., Perrin, F., & Amenedo, E. (2001), “Interference of cellular
phone conversations with visuomotor tasks: An ERP study”, Journal of
Psychophysiology, 15, 14-21.
Gegenfurtner, K.R., Mayser, H., & Sharpe, L.T. (1999), “Seeing movement in the dark”,
Nature, 398, 475-476.
Gegenfurtner, K.R., Mayser, H., & Sharpe, L.T. (2000), “Motion perception at scotopic
light levels”, Journal of the Optical Society of America A, Optics, Image Science,
and Vision, 17, 1505-1515.
Grossman, E.D., & Blake, R. (1999), “Perception of coherent motion, biological motion
and form-from-motion under dim-light conditions”, Vision Research, 39, 3721-3727.
Hammett, S.T., Champion, R.A., Thompson, P.G., & Morland, A.B. (2007), “Perceptual
distortions of speed at low luminance: Evidence inconsistent with a Bayesian
account of speed encoding”, Vision Research, 47, 564-568.
Hess, R.F., Sharpe, L.T., & Nordby, K. (1990), Night vision: Basic, clinical, and applied
aspects, Cambridge University Press, Cambridge, UK.
Johnson, S. (2004), Mind wide open: Your brain and the neuroscience of everyday life,
Scribner, New York, NY.
Killgore, W.D., & Yurgelun-Todd, D.A. (2004), “Activation of the amygdala and
anterior cingulate during nonconscious processing of sad versus happy faces”,
NeuroImage, 21(4), 1215-1223.
Lansdown, T.C., Brook-Carter, N., & Kersloot, T. (2004), “Distraction from multiple in-vehicle secondary tasks: Vehicle performance and mental workload implications”,
Ergonomics, 27(1), 91-104.
MacKay, D.M. (1965), “Cerebral organization and the conscious control of action”, in
Semaine d’étude sur cerveau et expérience consciente, Pontificiae Academiae
Scientiarum Scripta Varia, Vol. 30, Rome, 1-29.
Marr, D. (1976), “Early processing of visual information”, Philosophical Transactions of
the Royal Society of London, B, 275, 483-524.
Marr, D. (1982), Vision: A computational investigation into the human representation
and processing of visual information, Freeman, San Francisco, CA.
McElree, B. (2001), “Working memory and focal attention”, Journal of Experimental
Psychology Learning, Memory, and Cognition, 27(3), 817-835.
Miall, R.C., & Wolpert, D.M. (1996), “Forward models for physiological motor
control”, Neural Networks, 9(8), 1265-1279.
Miller, E.K., Li, L., & Desimone, R. (1993), “Activity of neurons in anterior
inferotemporal cortex during a short-term memory task”, Journal of
Neuroscience, 13, 1460-1478.
Milner, A.D., & Goodale, M.A. (2006), The visual brain in action, (2nd Ed.), Oxford
University Press, New York, NY.
Moran, J., & Desimone, R. (1985), “Selective attention gates visual processing in the
extrastriate cortex”, Science, 229, 782-784.
Morgan, C.A., III, Hazlett, G., Doran, A., Garrett, S., Hoyt, G., Thomas, P., et al. (2004),
“Accuracy of eyewitness memory for persons encountered during exposure to
highly intense stress”, International Journal of Law and Psychiatry, 27, 265-279.
Neuberg, S.L., & Cottrell, C.A. (2006), “Evolutionary bases of prejudices” in M.
Schaller, J.A. Simpson, & D.T. Kenrick (Eds.), Evolution and Social Psychology,
Psychosocial Press, Madison, CT, pp. 163-187.
Nolte, J. (2002), The human brain: An introduction to its functional anatomy, (5th Ed.),
Mosby, St. Louis, MO.
Öhman, A., & Wiens, S. (2004), “The concept of an evolved fear module and cognitive
theories of anxiety”, in A.S.R. Manstead, N. Frijda, & A. Fischer (Eds.), Feelings
and Emotions: The Amsterdam Symposium. Studies in Emotion and Social
Interaction, Cambridge University Press, New York, NY, pp. 58-80.
Ohrmann, P., Rauch, A.V., Bauer, J., Kungel, H., Arolt, V., et al. (2007), “Threat
sensitivity as assessed by automatic amygdala response to fearful faces predicts
speed of visual search for facial expression”, Experimental Brain Research,
183(1), 51-59.
Page, J.W., Duhamel, P., & Crognale, M.A. (Submitted), “Evidence for visualization at
early visual cortex”.
Page, J.W. (Submitted), “Evidence of tunnel vision in police pursuit drivers”.
Phillips, M.L., Williams, L.M., Heining, M., Herba, C.M., Russell, T., et al. (2004),
“Differential neural responses to overt and covert presentations of facial
expressions of fear and disgust”, NeuroImage, 21(4), 1484-1496.
Regan, D. (1989), Human brain electrophysiology, Elsevier, New York, NY.
Regan, D. (2000), Human perception of objects: Early visual processing of spatial form
defined by luminance, color, texture, motion, and binocular disparity, Sinauer
Associates, Sunderland, MA.
Rockland, K.S. (2002), “Visual cortical organization at the single axon level: A
beginning”, Neuroscience Research, 42, 155-166.
Rolls, E.T., & Deco, G. (2002), Computational neuroscience of vision, Oxford
University Press, New York, NY.
Salas, E., Driskell, J., & Hughes (1996), “Introduction: The study of stress and human
performance”, in J. Driskell & E. Salas (Eds.), Stress and Human Performance,
Lawrence Erlbaum, New Jersey, pp. 1-46.
Sharpe, L.T., & Stockman, A. (1999), “Two rod pathways: the importance of seeing
nothing”, Trends in Neuroscience, 22, 497-504.
Sharps, M.J., & Hess, A.B. (2008), “To shoot or not to shoot: Response and interpretation
of response to armed assailant”, The Forensic Examiner.
Sillito, A.M., Jones, H.E., Gerstein, G.L., & West, D.C. (1994), “Feature-linked
synchronization of thalamic relay cell firing induced by feedback from the visual
cortex”, Nature, 369, 479-482.
Sliwinski, M.J., Smyth, J.M., Hofer, S.M., & Stawski, R.S. (2006), “Intraindividual
coupling of daily stress and cognition”, Psychology and Aging, 21(3), 545-557.
Stockman, A., & Sharpe, L.T. (2006), “Into the twilight zone: the complexities of
mesopic vision and luminance efficiency”, Ophthalmic & Physiological Optics,
26(3), 225-239.
Stockman, A., Sharpe, L.T., Zrenner, E., & Nordby, K. (1991), “Slow and fast pathways
in the human rod visual system: ERG and psychophysics”, Journal of the Optical
Society of America A, Optics, Image Science, and Vision, 8, 1657-1665.
Stockman, A., Sharpe, L.T., Rüther, K., & Nordby, K. (1995), “Two signals in the human
rod visual system: a model based on electrophysiological data”, Visual
Neuroscience, 12, 951-970.
Strayer, D.L., Drews, F.A., & Johnston, W.A. (2003), “Cell phone induced failures of
visual attention during simulated driving”, Journal of Experimental Psychology: Applied,
9(1), 23-32.
Takeuchi, T., & De Valois, K.K. (1997), “Motion-reversal reveals two motion
mechanisms functioning in scotopic vision”, Vision Research, 37, 745-755.
Takeuchi, T., & De Valois, K.K. (2000), “Velocity discrimination in scotopic vision”,
Vision Research, 40, 2011-2024.
Takeuchi, T., Yokosawa, K., & De Valois, K.K. (2004), “Texture segregation by motion
under low luminance levels”, Vision Research, 44, 157-166.
van der Heijden, A.H.C. (2004), Attention in vision: Perception, communication, and
action, Psychology Press, New York, NY.
Verhaeghen, P., & Basak, C. (2005), “Ageing and switching of the focus of attention in
working memory: results from a modified N-back task”, Quarterly Journal of
Experimental Psychology A Human Experimental Psychology, 58(1), 134-154.
Vrij, A., & Barton, J. (2004), “Violent police-suspect encounters: The impact of
environmental stressors on the use of lethal force”, in A. Needs, & G. Towl
(Eds.), Applying Psychology to Forensic Practice, Blackwell, Malden, MA, pp.
117-128.
Williams, M.A., McGlone, F., Abbott, D.F., & Mattingley, J.B. (2005), “Differential
amygdala responses to happy and fearful facial expressions depend on selective
attention”, NeuroImage, 24(2), 417-425.
Wolpert, D.M., Miall, R.C., & Kawato, M. (1998), “Internal models in the cerebellum”,
Trends in Cognitive Sciences, 2(9), 338-347.