Cognitive Biases Make Judges & Juries Believe Weird Things

By Lance D. Reich

Lance D. Reich is a patent attorney and partner at the Seattle office of Lee & Hayes, PLLC.
“The natural course of the human mind is certainly from credulity to skepticism.”
—Thomas Jefferson
People often read stories of trials that make apparently fantastic factual determinations. For example, an award of billions in damages for a company’s production of a chemical whose link to causing harm is very tenuous. Or an engineer is held liable for a structure that failed, even when the structure was built to specifications believed at the time to be safe. In the worst instances, people have been convicted of crimes based on “repressed memories” that an expert pulls from the victim’s memory through hypnosis or some other form of psychological pseudoscience. Seeing such findings of fact by judges and juries, one wonders how a person could be so convinced of a spurious fact as to assess a legal penalty. Unfortunately, the answer is quite simple: we humans, being irrational, sometimes make irrational decisions.
One defect in our thought process
is that the logical framework through
which we make our decisions is biased.
More than 250 cognitive biases corrupt
our decision making. A cognitive bias is a
consistent deviation in a person’s thought
from a logically correct judgment. These
biases lead to perceptual distortion of
facts, illogical interpretation of evidence,
and faulty predictions or conclusions
based on the evidence presented.
Cognitive biases are mostly consistent across people, irrespective of race,
economic status, or nationality. Consequently, those who seek to manipulate
us—be they lawyers, experts, politicians,
salesmen, or whomever—use these biases
to force us to an incorrect decision given
the facts presented. Thus, fact finders
can be manipulated into deviating from
the scientific method and into believing
unscientific facts.
What follows is a summary of several
of the more common cognitive biases that
help manipulate people. Knowing the cognitive bias may not prevent a person from
making a biased decision, but it provides a
sanity check against what otherwise might
become an incorrect decision. Because
almost everyone has been subject to at
least one of these “tricks” before, they may
appear familiar.
Framing
“Think about how much money
we saved by buying this . . .”
—Anon.
Framing occurs when a factual situation or issue is described too narrowly.
For example, people react differently to a
particular choice depending on whether
it is presented as a positive or a negative.
A person can therefore present facts that
lead the decision maker to a conclusion
based upon the light in which the facts
are placed.
For example, assume that a company markets a medicine that has both a significant cure rate (80 percent) and significant adverse side effects (5 percent). The person defending the company will argue that the medicine has a cure rate of 80 percent, making eight out of 10 people who were otherwise sick now healthy. The person attacking will argue that this drug maims or kills five out of every 100 people who receive it. Just imagine the value of the opening statement when the first words a juror hears at the trial are one of these two sentences.
Base Rate Fallacy
“The definition of insanity is doing
the same thing over and over and
expecting a different result.”
—Benjamin Franklin
The base rate fallacy occurs when the conditional probability of a conclusion, in view of new evidence, is assessed without taking into account the prior probability—the “base rate”—of that conclusion. In other words, one needs to account for the underlying likelihood that something has occurred before one can properly assess how strongly the new evidence points to it as the cause of the occurrence.
The base rate fallacy pops up when
new facts are evaluated to determine a
new probability for something that has
occurred. A classic example, from Tversky and Kahneman, is a determination
of the likelihood that a taxicab was
involved in a hit-and-run accident at
night. Two cab companies, the Green
and the Blue, operate in the city. You
receive the following data: (1) 85 percent
of the cabs in the city are Green and 15
percent are Blue; and (2) a witness identified the cab as Blue. The court tested
the reliability of the witness under the
same circumstances that existed on the
night of the accident and concluded that
the witness correctly identified each one
of the two colors 80 percent of the time
and failed 20 percent of the time. What
is the probability that the cab involved
in the accident was Blue rather than
Green?
Many people will give the knee-jerk answer “80 percent.” In fact, the likelihood is only about 41 percent. A good way to look at it and avoid the bias is to combine the base rate with the witness’s reliability. The base rate is that 85 out of 100 taxis at random are Green and 15 are Blue, so at random the odds are only 15 to 85 that the cab was Blue. The witness correctly identifies the color of 80 out of 100 taxis and is wrong on 20 of them, so the odds are 8 to 2 that the witness is correct. The combined odds that the cab was Blue are therefore (15/85) × (8/2) = 120/170, or 12 to 17. Because the probability of being Blue plus the probability of being Green must equal 1, the probability that the cab was Blue is 12/(12 + 17) = 0.41. Thus, in spite of the witness’s testimony, the hit-and-run cab is more likely to be Green than Blue. Even if the Bayesian math is difficult to follow, the intuition is simple: the witness misidentifies the color of 20 out of every 100 cabs, which is more than the base rate of Blue cabs at 15 out of every 100.
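For readers who want to check the arithmetic, the same result can be reproduced in a few lines of Python. This is only a sketch of the Bayes’ theorem calculation using the numbers from the example; the variable names are illustrative.

```python
# Sketch of the taxicab calculation from the example above, using Bayes' theorem.
p_blue = 0.15      # base rate: 15 percent of cabs are Blue
p_green = 0.85     # base rate: 85 percent of cabs are Green
p_correct = 0.80   # witness identifies the true color 80 percent of the time
p_wrong = 0.20     # witness misidentifies the color 20 percent of the time

# Probability the witness says "Blue": either the cab is Blue and the witness
# is right, or the cab is Green and the witness is wrong.
p_says_blue = p_blue * p_correct + p_green * p_wrong

# Bayes' theorem: probability the cab is Blue given the witness says "Blue."
p_blue_given_report = (p_blue * p_correct) / p_says_blue

print(round(p_blue_given_report, 2))  # 0.41
```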
Hindsight Bias
“No matter what you do, someone
always knew you would.”
—Ami McKay
Hindsight bias is colloquially referred to
as the “I-knew-it-all-along effect.” It’s the
inclination to see events that have already
occurred as more predictable than they
were before they took place. Hindsight
bias will often distort, based upon what
has actually occurred, the recollection
and reconstruction of events. Hindsight
bias commonly shows up in a courtroom
where it’s necessary to assess blame for
a bad outcome, such as an accident or
disaster.
For an example, assume an engineer
is on trial for negligently constructing
a levee that was supposed to withstand
a Category (Cat.) 3 hurricane and the
levee was actually breached by a Cat. 4
hurricane, causing widespread destruction. (Sound familiar?) The engineer says
that the standard of care at construction
required building for Cat. 3 hurricanes
and studies have shown that storms
greater than Cat. 3 come only about once
every 500 years. The prosecutor argues
that, given all the damage from this levee
breach, the engineer should have known
that constructing the levee to withstand
only a Cat. 3 hurricane was negligent,
given the harm from choosing the wrong
standard. And the jurors are then bombarded with pictures and other evidence
of the destruction, with the prosecutor
telling them constantly, “This could have
been avoided.”
Hopefully, the jurors will think through the problem, such as by envisioning the people who, at the time the levee was built, objected to the additional cost of making it resistant to a Cat. 4 hurricane. Unfortunately, though, it is very hard to get people to discount what actually happened and to assume instead the knowledge and beliefs that a person had beforehand.
Illusion of Control
“Uh, everything’s under control.
Situation normal.”
—Han Solo, Star Wars
The illusion of control is the tendency to
overestimate one’s own or others’ ability to control events. People feel that
they can control outcomes they have no
influence over. One sees this fallacy in
superstition and other ritualized behavior believed to affect an outcome. But in
a courtroom or other legal setting, the
illusion of control can lead to claiming
causation where the actor simply had no
control over the situation.
For example, perhaps a driver is
being sued for negligence. The driver
lost control of the car on a patch of ice,
leading to an accident. Jurors may feel
that the driver had far more capacity
to avoid the accident than the driver
actually had—often even cascading the
driver’s control to different points before
the accident in order to infer the driver’s
control of the situation. For example,
the driver should have seen the ice; the
driver should have known to use a different route; the driver shouldn’t have
been out driving at all given the weather,
and so forth. As one can imagine, the
illusion of control fits with hindsight
bias to find that someone had full control of a situation that led to an obvious
outcome (in view of what actually happened, of course).
Ironically, though, the illusion of control has a flip side—what some call the
illusion of no control. This occurs when
people assume that they have less control over a situation than they do. Thus,
an action could actually influence an outcome, even though the person might not
believe that it could. A common example of this is where a person believes
that they cannot control their addictive
behavior at all. Although this may be
true, a person can control where they are
and what they are doing to try to avoid
situations where the addictive behavior
occurs. For example, a person may not
be able to ultimately control their drinking problem, but he or she certainly can
avoid being in a bar where drinks are
served.
Illusory Correlation
“But I don’t want to go among mad
people,” Alice remarked.
“Oh, you can’t help that,” said the
Cat: “We’re all mad here. I’m mad.
You’re mad.”
“How do you know I’m mad?” said
Alice.
“You must be,” said the Cat, “or you
wouldn’t have come here.”
—Lewis Carroll,
Alice in Wonderland
The phenomenon of seeing a relationship
between things, such as people, events, or
behaviors, even when no such relationship exists, is called illusory correlation.
A very common example of this is the
classic stereotype, where people form
false associations between (1) membership in a statistical minority group and
(2) behaviors or actions (typically negative). Unfortunately, stereotypes can lead
people to expect that certain groups and
traits fit together, and they will overestimate the frequency with which these
correlations actually occur, e.g., people of
this race are more violent, people of this
religion have no ethics, and so forth.
Another form of this bias occurs when
otherwise random events occur in proximity to each other, and the person draws
the incorrect conclusion that they are correlated. For example, there is a widespread
belief that a full moon is correlated with
more accidents and emergency room visits—not necessarily that the full moon
specifically caused these accidents, but
rather that the additional accidents tend to
occur when there is a full moon. Yet statistics show no correlation between the full
moon and accidents.
Insensitivity to Sample Size
“I only know one person who
voted for Nixon.”
—Pauline Kael
Insensitivity to sample size occurs when people judge the probability of obtaining a sample statistic without regard to the sample size. This bias discounts the randomness found in small samples. For example, in one study people were told that the average height of men was 5'10", but they then assigned the same probability to obtaining a mean height above 6 feet in samples of 10, 100, and 1,000 men. Such an assignment ignores the fact that large deviations are far more likely in small samples, so people may draw conclusions from what may be nothing more than the randomness of a small sample.
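A quick simulation makes the point concrete. The Python sketch below assumes a normal distribution of heights with a standard deviation of about three inches—a figure not given in the study and used here only for illustration.

```python
import random

MEAN_HEIGHT = 70.0  # 5'10" in inches, from the example above
SD_HEIGHT = 3.0     # assumed spread of adult male heights (illustrative only)

def share_with_mean_over_six_feet(n, trials=5_000):
    """Estimate how often a random sample of n men has a mean height above 6 feet."""
    hits = 0
    for _ in range(trials):
        sample_mean = sum(random.gauss(MEAN_HEIGHT, SD_HEIGHT) for _ in range(n)) / n
        if sample_mean > 72.0:
            hits += 1
    return hits / trials

random.seed(0)
for n in (10, 100, 1_000):
    print(n, share_with_mean_over_six_feet(n))
# In a typical run, a mean above 6 feet shows up in roughly 1-2 percent of the
# samples of 10 men and essentially never in samples of 100 or 1,000 men:
# small samples fluctuate far more.
```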
In another example, Tversky and
Kahneman asked the following question:
a town is served by two hospitals. In the
larger hospital about 45 babies are born
each day, and in the smaller hospital about
15 babies are born each day. About 50 percent of all babies are boys, but the exact
percentage varies from day to day. For a
period of one year, each hospital recorded
the days on which more than 60 percent of
the babies born were boys. Which hospital do you think recorded more such days?
(1) The larger hospital; (2) The smaller hospital; or (3) Neither (the numbers were about the same). In the study, 56 percent of subjects chose option (3), and 22 percent each chose option (1) or (2).
The best answer, though, is (2). The
larger hospital is much more likely to
report a gender ratio close to 50 percent
on a given day than the smaller hospital. (This is often called the “law of large
numbers.”) Neglect of sample size has
also been shown in a different study of
statistically sophisticated psychologists.
It is therefore easy to see why judges and
fact finders will draw completely wrong
conclusions based on a small sample.
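The hospital question can also be checked with a short simulation. The Python sketch below simply flips a fair coin for each birth and counts the days on which more than 60 percent of babies were boys; the birth counts and the 365-day period come from the example above.

```python
import random

def days_over_60_percent_boys(births_per_day, days=365):
    """Count days on which more than 60 percent of the babies born were boys."""
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count

random.seed(1)
print("Larger hospital (45 births/day):", days_over_60_percent_boys(45))
print("Smaller hospital (15 births/day):", days_over_60_percent_boys(15))
# In a typical run, the smaller hospital records roughly twice as many such
# days, because its small daily sample fluctuates much more around 50 percent.
```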
Primacy Effect
“I took a speed-reading course and
read War and Peace in twenty minutes. It involves Russia.”
—Woody Allen
The primacy effect leads a person to
recall the first information presented
better than the information presented
later on. A simple example would be
where one reads a sufficiently long list of
words and is more likely to remember the
words at the beginning rather than the
words in the middle. This bias can cause
bad information presented early to be
recalled more readily than better information presented later.
The primacy effect is related to framing in so far as the first information
presented frames the issue. But primacy
is far more general; it simply involves
recalling the first information first. Ironically, while repetition can cause a bias of
recall (the availability heuristic), the primacy effect may take hold even if other
information is actually repeated more
often.
For example, if a complex set of facts is being explained, people will often recall the initial facts far better than later changes to them. So one seeking to exploit this bias can start the story at a chosen point, possibly in the middle of the chronology, to try to encourage forgetting or to reduce the impact of other facts. As an example, a lawyer could start a story with the discovery of a possible health issue from a drug and what the company’s response was at that time, rather than starting with the discovery of the drug or with a straightforward series of events. In this manner, the health issue is more likely to be reinforced in the minds of the jurors.
Conclusion
“Fool me once, shame on you. Fool
me twice, shame on me.”
—Anon.
It may be impossible to stop some people from preying on others by taking advantage of their cognitive biases, but one can at least point out the flaws in the logic behind the exploitation. Furthermore, in the legal arena, judges can limit, by procedure, the presentation of evidence likely to mislead the fact finder into jumping to a faulty conclusion, such as misleading sample sizes, stereotypical statistics, and arguments that ignore base rates.
Published in The SciTech Lawyer, Volume 10, Issue 1, Fall 2013. © 2013 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof
may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.