Law, Probability and Risk (2003) 2, 275–294
Objective probability and the assessment of evidence
MIKE REDMAYNE
Law Department, London School of Economics and Political Science, Houghton St,
London WC2A 2AE, UK
[Received on 29 May 2003; revised and accepted on 1 October 2003]
As accounts of evidential reasoning, theories of subjective probability face a serious
limitation: they fail to show how features of the world should constrain probability
assessments. This article surveys various theories of objective probability, noting how they
overcome this problem, and highlighting the difficulties there might be in applying them
to the process of fact-finding in trials. The survey highlights various common problems
which theories of objective probability must confront. The purpose of the survey is, in
part, to shed light on an argument about the use of Bayes’ rule in fact-finding recently
made by Alvin Goldman. But the survey is also intended to highlight important features of
evidential reasoning that have received relatively little attention from evidence scholars: the
role categorization plays in reasoning, and the link between probability and wider theories
of epistemic justification.
Keywords: probability; theories of probability; objectivity; classification; epistemic
justification.
Introduction
The questions examined in this paper were prompted by an argument made in Alvin
Goldman’s Knowledge in a Social World. There, Goldman suggests that when the
assessment of evidence is made via Bayes’ rule, reasoners will come closer to the
truth, so long as the probabilities they use are accurate.1 This is a pleasing result. In
providing a novel justification for Bayes’ rule as a norm of evidence evaluation—a veritistic
justification, in Goldman’s terminology—it dodges some of the difficulties associated with
Dutch book arguments.2 It also allows Goldman to pose important questions about the
justifications for exclusionary rules.3 More generally, the idea of accurate probabilities
suggests that some assessments of evidence are better than others. The concept of accurate
probability is a handy resource to have against those who would argue that all assessments
of evidence are on a par.4
An important question, therefore, is: what does Goldman have in mind when he
1 ALVIN I. GOLDMAN, KNOWLEDGE IN A SOCIAL WORLD, 115–123 (1999). See also Alvin I. Goldman, Quasi-Objective Bayesianism and Legal Evidence 42 JURIMETRICS J. 237 (2002).
2 See e.g. JON ELSTER, ULYSSES AND THE SIRENS: STUDIES IN RATIONALITY AND IRRATIONALITY, 128–133 (rev'd edn, 1984); MARK KAPLAN, DECISION THEORY AS PHILOSOPHY, ch. 5 (1996).
3 See GOLDMAN, supra n. 1, at 292–295.
4 For scepticism about the 'proper' value of evidence, see Gary Edmond, The Next Step or Moonwalking? Expert Evidence, the Public Understanding of Science and the Case Against Imwinkelried's Didactic Trial Procedures 2 EVIDENCE & PROOF 13, at 21, 29 (1998).
refers to accurate probability? The answer is ‘objective probability’. As Goldman readily
acknowledges, this is not a very secure concept to hang a theory on. ‘There is little
agreement among philosophers about when, or under what precise conditions, statements
have determinate objective probabilities.’5 Goldman does, however, provide one reason for
believing in objective probabilities: ‘in testimony cases it looks as if jurors, for example,
work hard at trying to get accurate estimates of such probabilities, which seems to presume
objective facts concerning such probabilities.’6 But jurors could be mistaken. Perhaps there
are no objective probabilities to be had, or perhaps there are objective probabilities only
for some types of testimony.
This paper explores the issue of what objective probability might mean in the context of
the evaluation of legal evidence. Examining objective probability will not only help to shed
light on Goldman’s arguments. It will also have a bearing on other debates among evidence
theorists. While Goldman’s argument for Bayes’ rule is novel, he is by no means the first
person to advocate the rule as a normative framework for evidence evaluation in the courts.
Evidence scholars have generated a large literature on Bayes’ rule and its implications.7
Most of those who advocate the rule as a tool for evidence analysis, however, are Bayesians,
in the sense that they presume that it is people’s subjective degrees of belief—subjective
probabilities—which will be fed into Bayes’ rule. One reason why the Bayesian project is
controversial is just this subjectivity.8 And for good reason. When the only constraint on
rational belief is coherence among a belief set, it can seem that anything goes.9
It seems worthwhile, then, to explore the idea of objective probability, and its
applicability to the sorts of evidential questions typically at issue in trials. Before
embarking on this task, it will be helpful to have some examples in mind of the sort
of propositions to which fact-finders might try to attach probabilities. Imagine a serious
assault case involving three items of evidence. Blood was found at the scene of the
crime, and DNA analysis finds that it matches the defendant’s blood. The defendant has a
previous conviction for inflicting grievous bodily harm. However, he also has an alibi: his
girlfriend testifies that he spent the night during which the assault was committed at her
house. In order to use Bayes’ rule to analyse the evidence here, three relevant probability
questions are: what is the probability of the DNA matching given that the defendant is
innocent? What is the probability of someone with a previous conviction for serious assault
committing another serious assault? What is the probability that the defendant’s girlfriend
would provide him with this alibi given that he is innocent? These questions involve a series
of probability statements which, intuitively, range from the objective to the subjective.
There sounds to be an objective answer to the DNA question, which we could discover by
consulting the statistics kept by forensic scientists. With the previous conviction for assault
there are also statistics on re-offending which might point us towards an answer. But here
5 Supra n. 1, 117.
6 Id.
7 See e.g. Symposium: Bayesianism and Juridical Proof 1 EVIDENCE & PROOF 253–360 (1997).
8 Ron Allen has particularly stressed this point in the legal debates. See e.g. Ronald J. Allen, Clarifying the Burden of Persuasion and Bayesian Decision Rules: A Response to Professor Kaye 4 EVIDENCE & PROOF 246, 250 (2000). More generally, see Wesley C. Salmon, Rationality and Objectivity in Science, or Tom Kuhn Meets Tom Bayes in THE PHILOSOPHY OF SCIENCE, 256 (David Papineau ed., 1996).
9 See, e.g. Henry E. Kyburg, Jr., Randomness and the Right Reference Class LXXIV J. PHILOSOPHY 501, 501–502 (1977); Richard Foley, Probabilism 15 MIDWEST STUDIES IN PHILOSOPHY 114 (1990).
we might doubt whether the answer has quite the same objective quality as the one in the
DNA case. There seem to be so many variables—in what circumstances and how long ago
was the previous assault committed?—that any statistics we consult will not have such a
direct hold over our answer as do the statistics in the DNA example.10 When it comes to
the alibi question, we might simply wonder what it means to suppose that there might be
an objective answer. There are so many variables, and no statistics even to point us in the
right direction.
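To fix ideas, the three questions slot into the odds form of Bayes' rule as likelihood ratios. The following is a minimal sketch with invented figures, offered purely for illustration:

\[
\frac{P(G \mid E)}{P(\neg G \mid E)} \;=\; \frac{P(E \mid G)}{P(E \mid \neg G)} \times \frac{P(G)}{P(\neg G)}
\]

If, say, the probability of the DNA match given innocence were one in a million, while the match was certain given guilt, the likelihood ratio for that item alone would be 1,000,000, and the fact-finder's prior odds on guilt would be multiplied by that figure. The arithmetic is not in dispute; what is in dispute is what, if anything, makes the probabilities fed into it accurate.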
The aim in the next section of the paper is not to catalogue objective theories of
probability exhaustively. By surveying a few objective interpretations it is, rather, to
develop various themes, highlighting, in particular, the principal obstacles that objective
accounts face.
Objective interpretations of probability
Probability, it has been said, is Janus faced.11 It can describe aspects of the mind: our
ignorance, or lack of certainty about various propositions. But it can also describe aspects
of the world. If we discover that the number of suicides in Paris is roughly constant year
after year,12 or that a tossed coin lands heads as often as it lands tails, we are struck
by features of the world that seem to call for an explanation in terms other than our
lack of certainty. Carnap distinguished between these aspects of probability in terms of
probability1 and probability2.13 This distinction is important, but it is not quite the same
as that between objective and subjective interpretations of probability. Some epistemic
accounts, which treat probability as a feature of the mind, conceive probability as objective.
Keynes’ logical theory is an example.14 This should be borne in mind as we proceed.
1. A knock-down objection? Fallis on Goldman
Don Fallis has suggested that there is a fundamental difficulty in applying Goldman’s
argument to trial situations.15 ‘[C]hanciness’, he claims, ‘only applies to events that are still
in the future.’16 Consider the implications of this for the DNA evidence in our example.
We know that DNA found at the crime scene matches the defendant’s DNA. How, then,
can we talk of the probability of finding the evidence given that the defendant is guilty or
innocent? On either hypothesis, the probability of the evidence—of the match—appears to
be 1. This conclusion, however, is too quick. The paper Fallis cites in support of the claim
that past events have no objective probabilities does in fact accommodate talk of the chance
10 See, generally, Mike Redmayne, The Relevance of Bad Character 61 CAMBRIDGE LAW J. 684 (2002).
11 IAN HACKING, THE EMERGENCE OF PROBABILITY, 12 (1975).
12 On this example, see IAN HACKING, THE TAMING OF CHANCE, ch. 8 (1990).
13 R. Carnap, The Two Concepts of Probability 5 PHILOSOPHY & PHENOMENOLOGICAL RESEARCH 513 (1945).
14 A very useful classification, and survey, of theories of probability is DONALD GILLIES, PHILOSOPHICAL THEORIES OF PROBABILITY (2000). See esp. ch. 1, and, on Keynes, ch. 3. A good shorter survey is Alan Hájek, Interpretations of Probability in THE STANFORD ENCYCLOPEDIA OF PHILOSOPHY http://plato.stanford.edu/archives/sum2003/entries/probability-interpret/.
15 Don Fallis, Goldman on Probabilistic Inference 109 PHILOSOPHICAL STUDIES 223 (2002).
16 Id., at 229.
of past events.17 Using a simple coin tossing example, its authors, Helen Beebee and David
Papineau, suggest that we imagine a tossed coin being covered up. Even though the coin
has already landed heads or tails, we can still ask about the probability of discovering
either outcome when the coin is uncovered.18 Now it is a little more difficult to apply this
approach to our DNA example, because we already know the result—the DNA does match.
Nevertheless, if we can use ignorance to create chance with an already tossed coin, it does
not seem to be demanding too much to suppose that we can presume ignorance. And in
fact, in the DNA example, it actually seems quite natural to talk about the probability of
the result occurring given that the defendant is innocent, even though we know that the
DNA matches.
By drawing attention to the difficulties posed by talk of probabilities of determined
outcomes, Fallis does highlight an important problem for an argument like Goldman’s, a
problem that will play a large role in the discussion to follow. For the moment, however,
the principal point is that even if the event we are talking about has already occurred, this
does not necessarily rule out talking about it in terms of objective probability.
2. Propensity theories
Perhaps the best known objective interpretation of probability is one which does place
probability in the world rather than in the mind. This is the frequency theory, under which
probability is defined in terms of the frequency of a particular outcome in a long, or infinite,
series of trials.19 Whatever the merits of this theory, it does not look to be suitable for our
purposes. Legal trials tend not to deal with long runs, but with single events. There are,
however, no frequencies for single events. Even the DNA probability in our example cannot
be accommodated under the frequency theory: that the DNA profile has been shown to
occur with a particular frequency in the population does not, without more, tell us anything
about the probability that this defendant’s DNA would match if he were not the source.
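The gap can be put schematically. What the database supplies is a profile frequency, f; what Bayes' rule needs is a single-case conditional probability. The approximation

\[
P(\text{match} \mid \text{defendant not the source}) \approx f
\]

holds only once further assumptions are added: that the true source is unrelated to the defendant, that there was no laboratory error, and so on. The frequency, being a fact about a long run, fixes the single-case probability only via such bridging assumptions.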
It therefore seems that a theory of objective probability of use to lawyers must eschew
the frequency theory. There is another objective theory of probability, however, which does
admit single case probabilities: this is the propensity theory. In fact, one of the modern
variants of the propensity theory was developed precisely to extend probability to situations
17 Helen Beebee and David Papineau, Probability as a Guide to Life XCIV J. PHILOSOPHY 217, 223–224 (1997). Other authors who give accounts of objective probability also use examples based on past events: among those discussed below, David Lewis and Peter Achinstein are examples. Gillies notes that there is a problem in applying conditional probabilities to propensity theories. This problem, known as Humphreys' paradox, occurs because it is difficult to talk of the propensity of a past event to have produced a particular outcome. Gillies argues that his version of propensity theory avoids the paradox. See GILLIES, supra n. 14. See also David Miller, Propensities May Satisfy Bayes's Theorem in BAYES'S THEOREM (Richard Swinburne ed., 2002).
18 Fallis does note that Beebee and Papineau admit historical chances, but points to problems in their analysis. A full discussion here would pre-empt points made in the text below, but, briefly: Fallis suggests Beebee and Papineau's objective chances can always be out-performed, from a veritistic standpoint, by probabilities which are not based on partial ignorance. But this just seems to beg questions about how we should construe objective probabilities. Further, Fallis seems to allow that the possibility of a higher level of knowledge about outcomes does not undermine talk about objective chance: he is prepared to admit talk of the objective probability of a future coin toss (at 230–231), even though it seems that God—to borrow his metaphor for higher levels of knowledge—would be able to determine the outcome with certainty.
19 See GILLIES, supra n. 14; Alan Hájek, 'Mises Redux'—Redux: Fifteen Arguments Against Finite Frequentism 45 ERKENNTNIS 209 (1997).
the frequency theory could not account for.20 The propensity theory has an intuitive appeal.
Just as it is a physical property of a glass that it will break when dropped, so it seems to be a
physical property of a coin that it lands heads as often as tails. The suggestive terms ‘would
be’ and ‘habit’ were used by C. S. Peirce to describe such characteristics.21 Things are a
little more complex than these examples suggest, however. A coin thrown onto a slotted
board can land on its side as well as heads and tails; a glass dropped into a bowl of water
will not break. We therefore need to think in terms of a chance set-up, which includes the
external environment as well as the properties of the object or person whose propensity we
are talking about.
It is worth distinguishing between different types of propensity theory. One type of
theory interprets this chance set-up in a very broad sense. Donald Gillies refers to these
as ‘state of the universe’ propensity theories.22 We might think of the chance set-up as
including the state of the whole universe at a particular time, or at least the set of variables
causally relevant to the question we are addressing.23 In our alibi example, we could think
of the woman’s life history and various aspects of her relationship with the defendant as
contributing to her propensity to provide him with an alibi for the night of the murder.
This provides us with a reasonable way of conceptualizing propensity, but state of the
universe theories face a large problem. If the universe is deterministic, then there are no
probabilities. If the state of the universe at time t completely determines its state at t + 1,
then everything that happens happens with probability 1.24 In our example, the probability
of finding the matching DNA profile if the defendant is innocent is 1, because the history
of the universe explains how DNA matching that of the defendant came to be found at the
crime scene. We will refer to this as the determination problem. One reason for drawing
attention to the determination problem is that it is a general one: it dogs objective accounts
of probability; we will see it recur in this overview.25
Now there are plenty of reasons to doubt the truth of determinism,26 so the
determination problem is not a knock-down argument against state of the universe
propensity theories. Nevertheless, it might seem unwise to rely on a theory of probability
which is hostage to the truth of determinism. There are other reasons too to be wary
of state of the universe propensity theories. Gillies suggests that they are ultimately too
metaphysical. While these theories deliver an account of probability, the probabilities posited depend
20 Popper’s version of the theory. See K. R. Popper, The Propensity Interpretation of Probability 10 B RITISH
J OURNAL FOR THE P HILOSPHY OF S CIENCE 25 (1959).
21 See A. W. Burks, Peirce’s Two Theories of Probability in S TUDIES IN THE P HILOSOPHY OF C HARLES
S ANDERS P EIRCE (E. S. Moore and R. S. Robin eds, 1964); H ACKING, supra n. 12, ch. 23.
22 G ILLIES, supra n. 14, 126–129.
23 Id.
24 See Miller, supra n. 17, at 112. See also P ETER ACHINSTEIN , T HE B OOK OF E VIDENCE, 111 (2001). This
point is doubted, however, by Beebee and Papineau, supra n. 17 at 235.
25 For general accounts, see Isaac Levi, Chance 18 P HILOSOPHICAL T OPICS 117 (1990); Phil Dowe, A
Dilemma for Objective Chance in P ROBABILITY IS THE V ERY G UIDE OF L IFE : THE P HILOSOPHICAL U SES
OF C HANCE (Henry E. Kyburg and Mariam Thalos eds, 2003).
26 See e.g. J OHN D UPR É , T HE D ISORDER OF T HINGS : M ETAPHYSICAL F OUNDATIONS OF THE D ISUNITY
OF S CIENCE , ch. 8 (1993); PATRICK S UPPES , P ROBABILISTIC M ETAPHYSICS , ch. 2 (1984).
upon such an enormous amount of information—information about the history of the
universe—that they will not be knowable.27
Gillies therefore develops an alternative propensity theory, which he terms ‘long-run
propensity theory’.28 This theory takes probability to be a property of a set of repeatable
conditions. An example would be the probability—or propensity—of a coin to land heads
when thrown under particular conditions. One way in which this differs from a state of the
universe theory is in what we might term ‘level of focus’. A state of the universe theory
adopts a very fine level of focus; we might need to consider the position of a particular air
molecule in order to determine the probability of a coin landing heads on a particular toss.
The idea of repeatable conditions presumes a rather more blurred focus, one which will not
pick out the position of individual air molecules. By accepting this coarser level of focus,
Gillies is able to develop a more practical theory of probability, one which might be used
as a means of thinking about probability in scientific experiments and the like.
This coarse level of focus, however, has a price attached to it. Reliance on repeatable
conditions aligns Gillies’ theory rather closely with frequency theories; his theory turns
out to differ from a frequency theory primarily in its ontological commitments, rather than
in its scope of application. And so the long-run propensity theory faces various problems
associated with frequency theories. Significant among these is the reference class problem.
Consider the previous conviction evidence in our earlier example. Suppose research shows
that a person with a previous conviction for serious violence has a six per cent chance of
being reconvicted for a serious assault within two years of release from custody.29 If the
defendant in the example was recently released from custody, is this figure relevant to our
assessment of the likelihood that he committed the assault? The reference class problem is
that the defendant might fit into more than one statistical category. Suppose the defendant
attended an anger management course while in custody, and that such people have been
found to pose only a one per cent risk of re-offending within two years; but that the
anger management statistic is based on a sample of all offenders, not just those who have
committed serious assaults. Our defendant then straddles two reference classes. We have
statistics for both, but not for his particular case. But even if we did have statistics for the
narrower class of serious assaulters who have completed an anger management course, we
might still doubt their hold over this defendant. Perhaps the defendant’s probation officer
reports him to be a poor client, and in his opinion the defendant poses a relatively high risk
of re-offending.
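The structure of the difficulty can be set out in a few lines of code. The sketch below (in Python, using the invented figures from the example above) simply juxtaposes the two statistics; the point is that nothing in the data dictates which class, or what blend of the two, governs this defendant:

    # The reference class problem: one defendant, two well-attested but
    # conflicting reconviction rates.
    reference_classes = {
        "prior conviction for serious violence": 0.06,  # reconvicted within two years
        "completed anger management course": 0.01,      # rate measured over all offenders
    }

    # Direct use of either statistic licenses a different single-case probability.
    for description, rate in reference_classes.items():
        print(f"P(re-offends | {description}) = {rate:.2f}")

    # The statistics themselves are silent on which class should control;
    # choosing between them is the subjective element identified below.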
Gillies responds to these problems by accepting that his long-run propensity theory does
not really apply to single cases. As he puts it, when ‘we want to introduce probabilities for
single events, these probabilities, though sometimes objectively based, will nearly always
fail to be fully objective because there will in most cases be a doubt about the way we
should classify the event and this will introduce a subjective element into the singular
27 This point is disputed by Miller, supra n. 17 at 115. In practical terms, Miller's response seems to evade the 'too metaphysical' charge at the price of blurring the distinction between his theory and a frequency theory.
28 See GILLIES, supra n. 14, chs 6 and 7. Also Donald Gillies, Varieties of Propensity 51 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE 807 (2000).
29 Parts of the argument here reflect Redmayne, supra n. 10, 704–708. For another application of the reference class concept to problems in evidence law, see Mark Colyvan, Helen M. Regan and Scott Ferson, Is it a Crime to Belong to a Reference Class? 9 J. POLITICAL PHILOSOPHY 168 (2001), reprinted in PROBABILITY IS THE VERY GUIDE OF LIFE, supra n. 25.
probability.’30 With the long-run propensity theory, then, the coarse level of focus means
that even single events blur out of our vision; we can only see probability being instantiated
in sequences. This limitation of Gillies’ theory is mitigated by his willingness to see
interpretations of probability as forming a continuum between completely objective and
completely subjective extremes. Thus probabilities for single events are likely to fail to be
fully objective. This allows that objective propensities may loosely constrain single event
probabilities in some way.
3. Shafer: Nature’s probabilities
Glenn Shafer has suggested a theory of objective probability which can be applied to single
cases.31 Like the propensity theory it presumes that the world has a particular structure—
a causal structure—which explains the regularity with which certain events occur. But
Shafer’s account differs from a propensity theory in that it locates probability, not within
the physical structure of objects, but in the eyes of an observer. This might sound like
a subjective theory of probability, and to some degree it is. But there is an important
qualification. Shafer asks us to imagine an observer, Nature, who watches events unfold.
Where Nature sees sufficient regularity, she can predict events. Sometimes the predictions
will be absolute, but often they will involve probabilities, for all that Nature can say is
that there is a certain probability that an event will occur. We have seen that accounts of
objective probability can face the ‘determination problem’: probability is apt to disappear
if we presume too minute a level of knowledge about the world. Shafer’s account responds
to this by limiting Nature’s level of focus. Nature is not God—she only sees things at
the level of an ideal observer. ‘Nature is the imagined limit as we consider witnesses and
scientists who can see and predict more and more.’32 Thus if we suppose that coin-tossing
is a sufficiently complex process that our theories will never advance beyond noting that
coins produce outcomes with particular frequencies, then the objective probability for a
coin toss will simply be that frequency—0.5 in the usual case. But if we are able to advance
beyond this, to note the position of air molecules and the ways in which they will interact
with edges of the coin as it is tossed, and thus to predict outcomes more accurately in
individual cases, the objective probability of a coin landing heads on a particular toss will
be something other than 0.5.
Rather than looking in detail at this account of probability, it will suffice to note its
principal features. A level of focus is introduced to cope with the determination problem.
Unlike a state of the universe propensity theory, Shafer’s account locates probabilities at
a level where we are more likely to come to have knowledge of them. A very significant
feature of Shafer’s account, however, is that in acknowledging limits to the level of detail
with which Nature views the world, it allows that there are events for which there will be
no probabilities. In an example used by Shafer, if we try to predict whether a particular
boy is likely to pump up his flat bicycle tyre one afternoon, we may find that Nature does
30 GILLIES, supra n. 14, 120.
31 Glenn Shafer, Nature's Probabilities and Expectations in PROBABILITY THEORY: PHILOSOPHY, RECENT HISTORY AND RELATIONS TO SCIENCE, 147 (Vincent F. Hendriks et al. eds, 2001). For application of these ideas, see NANCY CARTWRIGHT, THE DAPPLED WORLD: A STUDY OF THE BOUNDARIES OF SCIENCE, ch. 7 (1999).
32 Id. at 7.
not see sufficient regularity in the world to be able to post probabilities. Some questions
do not have determinate answers; ‘regularity can dissolve into irregularity when we insist
on making our questions too precise.’33 We might have to accept, therefore, that with a
problem such as the alibi in our example, there simply are no objective probabilities with
which to evaluate a particular fact-finder’s reasoning.
4. Counterfactual conditionals and conditional probabilities
Shafer’s account seems to be, not so much an account of what (objective) probability
is, but a means of describing how we might think about probability, a way of bringing
out some of its salient features. A recent proposal by Goldman might be seen in a
similar manner.34 Statements of conditional probability—such as ‘the probability that the
witness would provide an alibi, given that the defendant is guilty'—bear some similarity
to counterfactual conditionals: ‘if the defendant were guilty, the witness might offer an
alibi’ (uttered when the defendant is actually innocent). Goldman’s suggestion is that David
Lewis’s interpretation of such conditionals might offer a way of thinking about conditional
probabilities.
Goldman’s work on this proposal is so far only a ‘sketch’,35 so we will not analyse it
in too much detail. Briefly, Lewis interprets conditionals in terms of a series of possible
worlds in a similarity space centred on our actual world. Think of a series of concentric
circles. At the centre of the circles is the actual world. As we move out from the actual
world, the worlds within the circles become less similar. If we suppose some cut-off
point—say, at the tenth circle—then we have a finite space. Goldman suggests that we
think of an objective probability in terms of the frequency of worlds within this similarity
space in which the event of interest to us obtains. In our example, all of the worlds are taken
to be ones where the defendant is guilty. If in 40 per cent of the worlds the witness provides
him with an alibi, then the objective probability of the alibi given the defendant’s guilt is
0.4. As we travel outwards from the actual world, the worlds become progressively more
different from it. If, in the actual world, the defendant lives with the girlfriend-witness who
provides an alibi, then at some point we will come to worlds where the defendant does not
live with her. We might presume that far fewer of these worlds are alibi worlds. Taking
them into account will affect the frequency calculation, so an obvious question is: should
we do so?
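A toy computation brings out how much turns on this question. In the sketch below (Python, with invented numbers), each hypothetical world is tagged with its circle in the similarity space and with whether the witness provides the alibi there; the probability is simply the frequency of alibi worlds within the cut-off:

    # Goldman's possible-worlds proposal, toy version.
    worlds = [
        (1, True), (2, True), (3, False), (4, True), (5, False),
        (6, False), (7, False), (8, False), (9, False), (10, False),
    ]  # (circle, alibi provided?); nearer worlds resemble the actual world more

    def alibi_probability(worlds, cutoff):
        """Frequency of alibi worlds among those within the cut-off circle."""
        inside = [alibi for circle, alibi in worlds if circle <= cutoff]
        return sum(inside) / len(inside)

    print(alibi_probability(worlds, cutoff=5))   # 0.6
    print(alibi_probability(worlds, cutoff=10))  # 0.3

Moving the cut-off from the fifth circle to the tenth halves the 'objective' probability.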
This has brought us to one of the key difficulties with the possible worlds account
of objective probability.36 We need to impose some cut-off point on the similarity space
in order to keep out worlds which are too dissimilar to the actual world. As Goldman
acknowledges, this is a difficult issue.37 In our alibi example, there may well be some
intuitive criteria which can be used. That the defendant and the witness live together in
the actual world sounds to be a significant enough feature of the alibi evidence that we
would want to exclude worlds where they do not. But there must be plenty of scope for
33 Id. at 14.
34 Goldman, Quasi-Objective Bayesianism, supra n. 1, 245–251.
35 Id., at 247.
36 The similarity problem also afflicts the possible worlds analysis of counterfactual conditionals: see DONALD NUTE, TOPICS IN CONDITIONAL LOGIC, 65–73 (1980).
37 Goldman, supra n. 1, 250–251.
disagreement about the salient features of an evidential situation, and these will impact
on the similarity space, hence on the probabilities. The best we might hope for under the
possible worlds model, then, would appear to be objective probabilities relativized to a
similarity ordering.38
5. Objective constraints on subjective probability
Consider the DNA evidence in the example given earlier in this paper. That seems to be the
most objective probability in the example. However, for the subjective Bayesian, for whom
probabilities are determined by what a particular person accepts as a fair betting rate, the
DNA example might seem to pose a problem. Are we able to criticize a betting rate as in
some way defective if it fails to take account of the statistics which forensic scientists use to
determine DNA match probabilities? The probability of the DNA matching if the defendant
is innocent (and was not framed, does not have an identical twin, etc.) looks, intuitively,
to be constrained by the detailed statistical knowledge which scientists have about DNA
profiles. Yet the process of eliciting fair betting odds does not appear to guarantee this. A
juror who determines this probability to be 0.8 does not seem to be acting irrationally on
subjective Bayesian criteria. It is not surprising, then, to find that a number of writers have
attempted to develop criteria which explain how statistics such as those in the DNA case
should constrain subjective probabilities. A similar question about the relationship between
objective and subjective probabilities was broached at the end of our discussion of Gillies’
propensity theory, for there we came across the suggestion that subjective probabilities
may, to some degree, be constrained by objective ones.
Perhaps the best known attempt to forge an objective–subjective link is David Lewis’s
work on what he terms the ‘principal principle’.39 For our purposes, however, a paper by
Helen Beebee and David Papineau offers a more useful point of entry to the issues.40
Beebee and Papineau ask: what degree of belief is it correct to have in a particular
outcome? The answer they come to is that the correct degree of belief should equal the
‘relative probability’. A relative probability is the probability of an outcome relative to
the agent’s knowledge of the set-up.41 The probability of the outcome is determined by
any relevant probabilistic law. Thus where all a person knows is that a fair coin is to be
tossed, her probability of its landing heads should be 0.5 (the probabilistic law here being
‘the probability of a fair coin landing heads is 0.5’). Beebee and Papineau contrast this
‘relative principle’ with, and defend it against, a ‘single case principle’, under which the
correct probability would not be relative to knowledge of the set-up but would simply be
determined by the actual probability of the outcome in this case. Thus, if the coin in the
38 See ROBERT NOZICK, INVARIANCES: THE STRUCTURE OF THE OBJECTIVE WORLD, 149 (2001).
39 David Lewis, A Subjectivist's Guide to Objective Chance in PHILOSOPHICAL PAPERS, VOLUME II, 83 (1986). On these issues, see also D. H. Mellor, Chance and Degree of Belief in WHAT? WHERE? WHEN? WHY? (Robert McLaughlin ed., 1982); COLIN HOWSON and PETER URBACH, SCIENTIFIC REASONING: THE BAYESIAN APPROACH (2nd edn, 1993), ch. 13. For a significant problem with 'constraint' models of objective probability, see M. Strevens, Objective Probability as a Guide to the World 95 PHILOSOPHICAL STUDIES 243 (1999).
40 Beebee and Papineau, supra n. 17.
41 Id., at 223–224. Although they talk in terms of outcomes, their theory seems flexible enough to be used in relation to propositions. See the discussion in n. 17, supra.
example was, unknown to the agent, biased, the single case probability might be, say, 0.3.
If the coin had already been tossed, the single case probability would be 1 or 0.
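The contrast can be stated compactly (the notation here is mine, not Beebee and Papineau's). Where K is the agent's knowledge of the set-up, the two principles recommend different probabilities of heads for the covertly biased coin:

\[
P_{\text{relative}}(\text{heads} \mid K) = 0.5, \qquad P_{\text{single case}}(\text{heads}) = 0.3,
\]

and once the coin has landed, the single-case value collapses to 1 or 0 while the relative value remains 0.5.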
Though they do not discuss at length what they mean by a probabilistic law, it is
obvious that by building this notion into their theory, Beebee and Papineau restrict the
reach of the relative principle. In essence, the relative principle will only apply to situations
where there are stable frequencies. In situations where the set-up is not known to generate
any ‘serious general patterns’, ‘there is no objective constraint on the agent’s degree of
belief.’42 Thus, the relative principle might apply to our DNA example, but, like Gillies’
long-run propensity theory, its reach would probably not extend to the character and alibi
examples.
What is interesting about the relative principle is that the concerns which structure
it are similar to those we have come across in previous sections. The principle is able
to avoid the determination problem because its focus is set at a relatively coarse level.
What matters is not what the single case probabilities are (they may often be 0 or 1), but
how the set-up is characterized by the agent. Beebee and Papineau contrast this with a
point made by David Lewis about his principal principle, which, in their terminology, is
a single case principle. Lewis has to consider ways of building a bulwark to protect the
principal principle from the determination problem. One he is sceptical about involves
the notion of ‘counterfeit chance’, i.e. probability relative to human knowledge. This
relativization makes counterfeit chance too arbitrary a notion for Lewis’s tastes.43 It
threatens to undermine the principal principle which, after all, is intended to impose
objective constraints on subjective probability. For their part, Beebee and Papineau are
untroubled by the fact that their constraining rule involves relativization to an agent’s
characterization of the set-up.44 It is worth noting that the use of relativization here echoes
the need to relativize to a similarity ordering under the possible worlds theory.45
A principle similar to the one proposed by Beebee and Papineau is the principle of
direct inference. According to McGrew,
Direct inference is perhaps the simplest and most natural expression of a
‘degree of entailment’ interpretation of probability. Given that the frequency
of property x in a population G is p, and the knowledge that a is a random
member of G with respect to possession of x, the probability that a is an x is
p.46
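Set out schematically (the symbolization is mine rather than McGrew's), the principle reads:

\[
\text{freq}(x, G) = p \;\wedge\; \text{Random}(a, G, x) \;\Rightarrow\; P(a \text{ is an } x) = p.
\]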
Intuitive though this principle is, it is not surprising that, for the purposes of the present
analysis, it faces similar problems to accounts we have already considered. For one thing,
its connection to frequencies suggests that its domain of application will be restricted.
Further difficulties lie in the need to develop an account of randomness which will tell us
42 Id. at 226.
43 Supra n. 39, at 117–121.
44 They do note, however, that it is better to know more about a chance set-up, for this will bring the relative
probability closer to the single case probability.
45 For the very interesting argument that such relativizations are fundamental to probability theory, see Alan Hájek, Conditional Probability is the Very Guide of Life in PROBABILITY IS THE VERY GUIDE OF LIFE, supra n. 29.
46 Timothy McGrew, Direct Inference and the Problem of Induction in PROBABILITY IS THE VERY GUIDE OF LIFE, supra n. 29, 39.
when a is a random member of G, and the need to specify G itself, which obviously raises
the reference class problem. These difficulties mean that there is little agreement on how a
sophisticated account of direct inference can be developed.47
6. Objective epistemic probability
Recently, Peter Achinstein has developed a novel conception of probability, objective
epistemic probability.48 It is an epistemic theory because, unlike the propensity theory,
it concentrates on beliefs. However, like the constraint theories described above, it takes
it that there are features of the world which determine which probabilities are correct.
Because he considers belief to be categorical, Achinstein’s theory is not developed in
terms of degrees of belief, but in terms of the degree to which it is reasonable to believe
a proposition. However, this feature of his theory is not essential for our purposes—
the theory could be reworked as a theory of degrees of belief.49 For Achinstein, the
reasonableness of a belief is objective (or ‘abstract’, as he puts it) in the sense that it can be
reasonable to believe something to a particular degree whether or not anyone believes it.
This notion of reasonableness, then, is not relativized to a particular community. Whether
it is reasonable to believe p does not depend on other beliefs held by people. To illustrate
with an example used by Achinstein: if Ann ate a pound of arsenic, then it is reasonable
to a high degree to believe that she is dead or dying, even if it is generally believed that
arsenic is good for the health.50 ‘We can ask how reasonable it is to infer or to conclude
that p from some fact, without considering the knowledge and beliefs of persons, if any,
who may be in the position of inferring or concluding that p.’51
Achinstein’s account faces a familiar difficulty: the determination problem. Taking the
example of a coin toss, Achinstein supposes that the outcome of the toss may be determined
by all the features of the chance set-up. If so, then it looks as though it will only ever be
reasonable to degree 0 or 1 to believe that the coin will land heads. The solution proposed
by Achinstein could once again be described in terms of the level of focus at which we
view probabilities. The level of focus of objective epistemic probability can be limited by,
among other things, a ‘disregarding condition’. In the coin tossing example, it is possible
that various unidentifiable aspects of the chance set-up determine that the degree to which
it is reasonable to believe that heads will be the outcome is very high. However, these
‘microconditions’ of the chance set-up can be disregarded for the purposes of determining
reasonableness of belief. This strategy is rather similar to that pursued by Beebee and
Papineau. They are prepared to accept that the probability constraint involves relative probabilities, and thus that the correct probability for a person to adopt would be relative to their
knowledge and characterization of the chance set-up. A difference is this: Achinstein is
clear that the disregarding condition does not involve relativization to an epistemic situation
(such as a state of knowledge, or a description of events). Particular microconditions can
be disregarded whether or not anyone knows about them. An agent might have evidence
47 See id.; and ALVIN PLANTINGA, WARRANT AND PROPER FUNCTION, 152–156 (1993).
48 ACHINSTEIN, supra n. 24.
49 Achinstein allows this in id., at 118–120.
50 As it was once thought to be: see RICHARD DAVENPORT-HINES, THE PURSUIT OF OBLIVION: A SOCIAL HISTORY OF DRUGS, 102–105 (2001).
51 ACHINSTEIN, supra n. 24 at 98.
that a particular coin has a slight bias towards heads; we could still say that it is reasonable
to degree 0.5 for her to believe that it will land heads, disregarding this evidence and any
microconditions. The relativization is in terms of what is disregarded, not in terms of the
agent’s epistemic situation.
That said, it is rather odd to assess the reasonableness of believing that a coin will
land heads disregarding the fact that the coin is biased. Achinstein admits that there are
factors which it will be unreasonable to disregard. For an agent, what it will be reasonable
to disregard will depend on the agent’s epistemic situation. This idea is linked to broader
concepts of epistemic justification. In general, we should only disregard those things which
we can justify disregarding. So while Achinstein argues that his account of probability is
not a theory of justification,52 in day to day life, the degree to which it is reasonable for us
to have certain beliefs will depend on what we are justified in believing.
What is the scope of objective epistemic probability? Long-run propensity theory
is limited to chance-set ups which generate stable frequencies; the relative principle of
Beebee and Papineau probably has a similar scope; Shafer’s Nature will not post odds
for some events. These theories apply easily enough to our DNA example, but there
will be problems in applying them to the previous conviction or alibi evidence. Although
Achinstein does not comment on the scope of his theory, it seems that it could be applied
rather more widely than those other theories. With the previous conviction evidence, we
supposed that the difficulty of assigning the defendant to a reference class might prevent
us talking about objective—or at least wholly objective—probabilities; with Achinstein’s
theory we can simply disregard those complicating factors. If we have statistics on
reconviction rates for violent offenders, we could use them as the basis for an inference,
disregarding any complicating factors, such as the defendant’s having attended an anger
management course. That would allow us to talk of an objective epistemic probability of
the defendant’s re-offending. The problem is that this strategy appears to be illegitimate
if there is no justification for disregarding the anger management evidence. Here it seems
that what we have gained in objectivity is undercut by our having to rely so heavily on the
looser, non-probabilistic concept of justification.
7. Logical probability
There are several theories of logical probability.53 They have much in common with
Achinstein’s account, in that they are objective epistemic theories. This section will
take as an example of a logical theory that recently developed by Richard Swinburne.54
For Achinstein, it is facts about the world which determine probabilities. Swinburne's
logical theory starts not with facts, but with propositions. Logical probability is about
the relationship between propositions, for example the propositions ‘DNA matching the
52 Id., at 99.
53 A prominent example is Keynes's theory: see GILLIES, supra n. 14, ch. 3. For recent defences of Keynes's views, see Jochen Runde, Keynes After Ramsey: In Defence of A Treatise on Probability 25 STUDIES IN HISTORY & PHILOSOPHY OF SCIENCE 97 (1994); J. Franklin, Resurrecting Logical Probability 55 ERKENNTNIS 277 (2001).
54 RICHARD SWINBURNE, EPISTEMIC JUSTIFICATION, chs 3–4 (2001). A useful summary is Richard Swinburne, Introduction in BAYES'S THEOREM, supra n. 17. For a broadly similar account, see PLANTINGA, supra n. 47, ch. 9 (1993).
defendant’s was found at the scene of the crime’ and ‘the defendant committed the assault’.
This might seem mysterious. We might accept that there are facts about the world, such as
arsenic’s being poisonous, that give rise to probabilities. But where do the relationships
between propositions come from? Swinburne’s position is simple: there must be such
relations. ‘We do think . . . that there are right and wrong ways to assess how probable
one proposition r makes another one q, and the philosopher of induction tries to codify the
right ways. . . . [H]e codifies the criteria that we almost all (including scientists) use and
believe to be correct. If we do not think that there are such criteria, then we must hold that
no one makes any error if he regards any scientific theory compatible with observations . . .
as probable on the evidence as any other.'55 There is an important concession involved
in Swinburne’s claim, however. The right and wrong ways to assess evidential support may
impose only‘very rough[]’ constraints on logical probability. ‘[T]here are correct inductive
criteria that often give clear results, at any rate for claims of comparative probability (that
is, claims to the effect that proposition q given r is more probable than proposition s given
t).’56
Even with this concession to imprecision, Swinburne is keen to ensure that his
criteria for logical probability are not too demanding. He identifies a strict concept of
logical probability, which involves logical omniscience. This is distinguished from a
concept—which Swinburne refers to as epistemic probability—relativized to ordinary
human capacities to think through the consequences of an evidence base, given the
constraints that we face. But if even logical omniscience only produces rough probabilities,
then epistemic probability will be a vaguer standard still. The notion ‘is extraordinarily
vague’.57 Nevertheless, the concept can be distinguished from a yet vaguer concept, that
of subjective probability. With subjective probability, probabilities are relativized not only
to human capacities to appreciate the logical implications of evidence, but also to different
views of what those logical implications are (this is more or less the standard subjective
Bayesian account).
Swinburne goes on to sketch the criteria of logical probability which he takes
to constrain the probability of one proposition on another. The relationship between
propositions described by logical probability is taken to be an explanatory one: that the
defendant committed the assault explains why the matching DNA was found at the crime
scene. This requires something to be said about explanation. Swinburne suggests there
are two broad types: inanimate explanation, characteristic of the physical sciences, and
personal explanation, of the type which occurs in every-day life situations.58 The alibi
evidence in our example involves personal explanation; to understand how a proposition
about the alibi makes a proposition about the defendant’s guilt more or less probable,
one must be able to provide an explanation of the alibi in terms of basic human
psychology, involving beliefs and desires. Turning to the principles of logical probability
which constrain such explanations, Swinburne provides four: yielding the data (basically,
a likelihood principle); fit with background evidence; scope; and simplicity.59 Similar
55 Id., at 64.
56 Id.
57 Id., at 68.
58 Id., at 74–75.
59 Id., at 80–83.
288
M . REDMAYNE
principles explain how hypotheses have objective intrinsic—or prior—probabilities. With
these principles in place, Bayes’ rule can be brought in as a more refined statement of how
evidence supports a hypothesis; though once again, Swinburne is quick to point out that the
vagueness of the probabilities involved may mean that the rule can generate little more
than statements of comparative probability.60
Unlike most of the other accounts of objective probability we have considered,
Swinburne’s logical probability at least has a chance of applying in the alibi example. The
defendant’s innocence, it might be argued, provides an explanation of the alibi. The scope
and simplicity of this explanation provide it with a reasonable degree of probability, as
does the fact that it yields the data better than its negation. But it is not obvious that logical
probability does give us very much to go on here. It could be argued that the alibi is actually
evidence of the defendant’s guilt: perhaps the girlfriend’s ability to remember what she was
doing on the night of the assault is thought to be suspicious, as is the expression on her face
while testifying. A response to this sort of objection is that expanding the proposition about
the alibi will be some help in clarifying things. The expanded proposition would describe
the alibi, how remarkable the feat of memory is (how long ago was the night in question?
Are there any factors making it memorable?), the expression on the witness’s face when
giving evidence, and so on. The more complex proposition would allow a probability to be
generated about the alibi.
Two points might be made about this. Logical epistemic probability is relative to the
propositions considered, and to an agent’s ability to analyse her evidence. If, as in the above
example, the propositions need to be made very precise and complex in order to generate
probabilities, there will be greater scope for interpersonal variation in the probabilities. Two
people may agree that the fact that the witness provided an alibi features in their evidence
base, but disagree as to whether the witness’s expression during the third minute of her
evidence in chief is significant. In a way, this is a problem about level of focus. At a coarse
level, a vague proposition about an alibi might not generate any logical probabilities. At
a finer level, a very detailed proposition about an alibi can generate logical probabilities,
but more interpersonal disagreement about what they are.61 Secondly, Swinburne allows
that some beliefs will be ‘basic’: they will not be supported by other beliefs held by an
agent. It would be possible for two people who have observed the alibi witness giving
evidence to have different basic beliefs about the evidence. One might have a belief that
the witness looked suspicious while giving evidence; the other might believe that she
looked honest. If the observational input that gave rise to these different beliefs is no
longer remembered, then the suspicious and honest beliefs will be basic,62 so we cannot
use the idea of logical probability to police them—to argue that one belief is better than
another. Again, this increases the scope for interpersonal variation in the grounding of
logical epistemic probability.
How might someone sceptical of the notion of objective probability respond to
Swinburne’s analysis? We have seen that logical epistemic probabilities will often be
60 Id., at 102–104.
61 A broadly similar point is made by Suppes, who notes that a theory of rationality needs a theory of attention: some means of thinking about how we choose those elements of our evidence base which we attend to and which then figure in our reasoning. SUPPES, supra n. 26, 212–214.
62 Id., at 136–137.
vague, and may permit considerable interpersonal variation. The sceptic might feel that
there is little worth fighting over here. Still, Swinburne does assert that there must be
logical probabilities for propositions, even if they are so vague that they only ‘often’
give clear answers. The sceptic could parry with the claim that our intuition that there
are broadly right answers to questions of evidential support is based on a limited range
of cases. Perhaps there are logical probabilities for some propositions, but not for others.
This, as we have seen, is an implication of Shafer’s account of probability.63
Lessons
This brief review of theories of objective probability shows some of the resources on which
Goldman could draw in order to ground the argument for the veritistic potential of Bayes’
rule. There is no shortage of conceptions of objective probability. But it is not clear how
much help they are. Several of the theories are limited in scope. Gillies and Beebee and
Papineau tie their conceptions of probability more or less tightly to frequency theories.
But if the phenomena we are dealing with do not yield relatively stable frequencies, there
are no objective probabilities. The veritistic argument for Bayes’ rule would then have
very limited scope. Nevertheless, there is something rather odd about the idea that stable
frequencies form a kind of flat earth at the edges of which we fall immediately into strong
subjectivism. There is much to be said for Gillies’ view that probability statements form
a continuum from objective to subjective, and that where frequencies start to give out we
can still talk of partly objective probabilities.
Some of the theories we considered are less modest in scope. Shafer’s conceptualization is broader than the frequency view, but it too admits that there are gaps in the structure
of probabilities that governs our world. A state of the universe propensity theory, however,
is limited only by the possibility that determinism is true and that there are therefore no
probabilities. Goldman could tie his veritistic argument to such a theory. But in practical
terms, it is not clear that this would be very different from choosing Gillies’ long run
propensity theory. The only propensities we have a very clear idea of are those generating
frequencies. So while (absent determinism) a state of the universe propensity theory might
fit quite well with the veritistic argument for Bayes’ rule, the unknowable nature of the vast
majority of probabilities would give the argument little practical hold over our evidentiary
practices. Given its state of development, it is perhaps too early to reach a conclusion
about Goldman’s own possible worlds theory. Although potentially applicable to any type
of evidence, it does seem that any probabilities generated by the theory will be relative to
judgments about similarity.
What of the two epistemic theories reviewed here? Achinstein buys objectivity in
probability at the price of ‘disregarding conditions’. But that merely shifts attention to
why particular factors are being ignored. The veritistic argument for Bayes’ rule could in a
sense be secured by using objective epistemic probabilities, but it would only be part of the
story of evidential support. Without some objective justification for disregarding particular
factors, the Bayesian part of the story would not look particularly impressive. This brings
us to Swinburne. Logical epistemic probability does not suffer from problems of scope.
But the theory does involve a large concession to vagueness. In many of the situations
63 It is also a feature of Keynes' theory of objective probability. See GILLIES, supra n. 14, 33–37.
of interest to evidence lawyers, it seems we could only expect comparative probabilities;
Swinburne even hints that in some situations no clear comparative probabilities might be available.64 We have also seen that Swinburne's theory allows for some—perhaps
considerable—interpersonal variation.
How problematic is such vagueness? The difficulties it causes should not be
exaggerated. The veritistic argument for Bayes’ rule holds even when probability ranges
rather than point-valued probabilities are used.65 Perhaps the argument could even work
with the notion of comparative probability. Even so, the argument might lose much of its
pull. Consider iterative use of Bayes’ rule to update belief in guilt by taking into account
several pieces of evidence. Vagueness in the probabilities would be cumulative. Probability
ranges might leave us with a conclusion that the defendant’s guilt was somewhere between
0.7 and 0.99 (with room for interpersonal variation). Comparative probabilities might give
us the simple conclusion that guilt is more probable than innocence. It is hardly obvious
that using Bayes’ rule to reach such conclusions has much to recommend it. Many other
methods of reasoning about evidence might land us in the same ballparks. And there might
be some that would lead us to more precise conclusions.
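To make the cumulative effect concrete, consider a minimal computational sketch of interval-valued Bayesian updating. The sketch is mine, not Goldman’s, and every figure in it is invented purely for illustration; Bayes’ rule is used in its odds form, posterior odds = prior odds × likelihood ratio.

    def odds(p):
        """Convert a probability to odds in favour."""
        return p / (1.0 - p)

    def prob(o):
        """Convert odds in favour back to a probability."""
        return o / (1.0 + o)

    def update_interval(prior, lr):
        """One Bayesian update with an interval-valued prior and an
        interval-valued likelihood ratio. Odds and multiplication are
        monotone for positive values, so endpoints map to endpoints."""
        (p_lo, p_hi), (lr_lo, lr_hi) = prior, lr
        return (prob(odds(p_lo) * lr_lo), prob(odds(p_hi) * lr_hi))

    belief = (0.05, 0.10)  # vague prior probability of guilt (illustrative)
    evidence = [(2.0, 6.0), (3.0, 10.0), (1.5, 8.0)]  # vague likelihood ratios

    for lr in evidence:
        belief = update_interval(belief, lr)
        print("P(guilt) lies somewhere in [%.2f, %.2f]" % belief)

Although each item of evidence favours guilt, the interval widens at every step: after three updates the sketch leaves the probability of guilt anywhere in roughly [0.32, 0.98], just the sort of broad and rather uninformative conclusion described above.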
Bayesian theories of reasoning are often criticized for not being psychologically realistic. One criticism of this type is that attaching a probability to every proposition imposes
a considerable cognitive burden.66 It means that every possibility, even if it has only a
small probability, must be borne in mind and factored into the final calculation. After all,
even a small probability might end up making all the difference between a verdict of guilty
and one of not guilty. Our cognitive burdens, however, would be greatly eased if we could
ignore low probabilities and treat high probabilities as certainties.67 If we think it improbable that the witness is telling the truth, it is easier to ignore her completely than to allow the
small probability to bear on our calculations. For present purposes, what is significant about
this is that it has the potential to address a problem we have just noticed. If objective probabilities constrain reasoners only within relatively broad bounds, then agreement becomes
a problem. But if low probabilities are ignored, and high probabilities are treated as certainties, the conclusions reached by a group of reasoners are likely to fall within narrower
bounds. If, therefore, we were to measure objectivity in terms of the ability of fact-finders
to reach agreement, we might find that an approach less reliant on probabilities would be
more objective than one conceptualized in terms of logical epistemic probability.68
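The same point can be sketched computationally. Again the figures are arbitrary inventions for illustration only; in particular, the thresholds of 0.1 and 0.9 stand in for whatever cut-offs a reasoner might actually use.

    def threshold(p, low=0.1, high=0.9):
        """Round a probability to 0 below `low` and to 1 above `high`;
        leave intermediate values untouched."""
        if p < low:
            return 0.0
        if p > high:
            return 1.0
        return p

    # Three fact-finders rate the same doubtful witness, and the same
    # credible witness, slightly differently:
    doubtful = [0.02, 0.06, 0.09]
    credible = [0.91, 0.95, 0.99]
    print([threshold(p) for p in doubtful])   # -> [0.0, 0.0, 0.0]
    print([threshold(p) for p in credible])   # -> [1.0, 1.0, 1.0]

Interpersonal spreads of seven or eight percentage points collapse into exact agreement once low assessments are discarded and high ones are rounded up to certainty.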
The preceding paragraph is beginning to suggest reasons why thinking about objective
accounts of probability may be useful, even if by doing so we do not uncover a theory
which could ground something like the veritistic argument for Bayes’ rule. There may be
wider lessons to draw from the problems encountered by theories of objective probability.
One suggested so far is that interpersonal agreement may be hindered rather than helped
by an over-emphasis on probability. This conclusion was in fact anticipated to some extent
64 Id., at 64.
65 Goldman, Quasi-Objective Bayesianism, supra n. 1, 251.
66 See ROBERT N OZICK , T HE NATURE OF R ATIONALITY, 96 (1993); Gilbert Harman, Positive Versus
Negative Undermining in Belief Revision in NATURALIZING E PISTEMOLOGY (Hilary Kornblith ed., 2nd edn,
1994).
67 See id.; also Richard Foley, The Epistemology of Belief and the Epistemology of Degrees of Belief 29
A MERICAN P HILOSOPHICAL Q UARTERLY 111, 122 (1992).
68 On this aspect of objectivity, see N OZICK, supra n. 38, 90–93.
in our discussion of Swinburne’s account of logical epistemic probability. There we saw
that if we need to make propositions very precise in order to generate probabilities, there is
greater scope for interpersonal variation. There is a fairly general phenomenon here, which
is that it is easier to agree if debate is conducted at an abstract, rather than a very finely
focused level.69
This idea of a level of focus was one we came across several times in our survey of
objective theories of probability. Several theories deal with the determination problem
by choosing a level of focus at which what Achinstein refers to as ‘microconditions’
will not be visible. Gillies, for example, focuses at a level which picks out stable longruns. However, while this strategy may help to promote agreement, it may also make
it more difficult in some circumstances. When it comes to single events, the reference
class problem emerges. For Beebee and Papineau, this is the issue of description; under
the possible worlds model, it is the need to specify a similarity ordering; in Achinstein’s
account, the problem emerges in terms of justifying disregarding conditions. Probabilities
on these theories are relative to some sort of classification.70 Some of the theories reviewed
here, by placing the issue squarely within the framework of probability theory, seem to
emphasize this point in an interesting way. Thinking about character evidence in terms
of the reference class problem, for example, offers certain insights. This has already been
hinted at in the discussion of the previous conviction evidence in the scenario introduced earlier
in this paper; here is another example.
Richard Friedman has developed a powerful argument against the relevance of previous
convictions to the issue of a defendant’s credibility.71 Unlike many writers on this topic,
Friedman allows that there is some connection between having previous convictions and
testimonial dishonesty. He argues, however, that when a defendant gives exculpatory
testimony at trial, previous convictions will not be relevant to the credibility of the account
given, except by the illegitimate inference that the criminal record directly increases the
probability of guilt. Think of the defendant’s testimony in terms of the following likelihood
ratio: P(testimony|testimony true)/P(testimony|testimony false). The probability that the
defendant would give exculpatory evidence given that the testimony is true (and he is
therefore innocent) seems high, irrespective of his having previous convictions. Where
the testimony is false, and the defendant is therefore guilty, previous convictions might
seem to affect the probability. But Friedman denies this, pointing out that if the defendant
is guilty, he has committed a crime, and is therefore among the group of people (criminals)
who have even fewer qualms about lying on oath to escape conviction than those without
previous convictions. That the defendant has committed other crimes in the past does not
therefore make much difference. There is thus little point in revealing information about
previous convictions to the jury to undermine the defendant’s credibility.
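Friedman’s point can be put schematically. The notation is mine rather than his: let T stand for the giving of the exculpatory testimony and C for the possession of previous convictions. The likelihood ratio above becomes

\[
LR \;=\; \frac{P(T \mid \text{testimony true},\, C)}{P(T \mid \text{testimony false},\, C)} .
\]

The numerator is high whether or not C holds, since an innocent defendant will almost always deny guilt. And because ‘testimony false’ entails guilt, the denominator already conditions on the defendant’s being a criminal, so that, on Friedman’s argument,

\[
P(T \mid \text{testimony false},\, C) \;\approx\; P(T \mid \text{testimony false},\, \neg C),
\]

and the ratio is much the same with or without the previous convictions.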
This argument is ingenious. But one objection to it can be put broadly in terms of
reference classes. Suppose, as some do, that a person’s first crime should be regarded as
69 See the quotation from S HAFER, supra n. 33; on the wider issues, see e.g. Cass Sunstein, Practical Reason
and Incompletely Theorized Agreements 51 C URRENT L EGAL P ROBLEMS (1998).
70 See further Hájek, supra n. 45.
71 Richard Friedman, Character Impeachment: Psycho-Bayesian [!?] Analysis and a Proposed Overhaul 38
U.C.L.A. L AW R EV. 637 (1991). Such evidence can be presented under Federal Rule of Evidence 608 and, in
the United Kingdom, under the Criminal Evidence Act 1898 s. 1(3)(ii).
a momentary lapse, and not as demonstrating a serious commitment to offending.72 Then,
if we assume that a particular defendant has committed the crime he is on trial for, we
are not really classifying him as a criminal, and therefore as part of the group who will
think little of lying on oath. On that assumption, evidence that the defendant has previous
convictions would legitimately increase the probability of guilt. The moral here is that
it is as important to think about how we should carve up the world—in this example
how we should categorize people—as it is to think about the structure of a probability
argument. And when it comes to Friedman’s argument, the question is not so much how
jurors should categorize defendants, as how they do. Whether or not a first crime is properly
seen as a momentary lapse, if jurors do think in such terms, then letting them use previous
convictions as evidence of lack of credibility may be a useful strategy.
Note that jurors who classify defendants differently will tend to come to different
judgments about the relevance of evidence in a scenario such as this. And this conclusion
is perfectly general: to the extent that people classify events differently, or adopt different
levels of focus when thinking about them, they will disagree about the inferences to be
drawn. That in general people do broadly agree about the implications of evidence suggests
that these are not terribly deep problems; there are perhaps natural ways of categorizing
events when we reason about them.73 That said, some well-known debates about evidence
can be understood as debates about level of focus: sexual history evidence provides an
example.74 If we are to understand how evidential reasoning works, and perhaps why
people sometimes disagree about evidence, understanding how the reference class and level
of focus problems are resolved in practice is important. As Alice Drewery remarks: ‘how
we categorise and then generalize is not very well understood.’ ‘But’, she continues, ‘since
this is central to our reasoning, it ought to be.’75
A final question raised by the review of theories of probability concerns the attraction
of probability as a way of thinking through the question of how we should respond to
evidence. Why do we want objective probabilities as opposed to some other form of
objective judgement about evidence? Susan Haack, for example, suggests that there are
objective constraints on evidence assessment, without using the concept of probability in
her account.76 One reason for favouring probabilities is that they can be fed into Bayes’
rule which, when it comes to updating our beliefs, will provide us with a further objective
constraint.77 Bayes’ rule appears to give us an objective procedure for reasoning about
72 See Andrew von Hirsch, Desert and Previous Convictions in P RINCIPLED S ENTENCING : R EADINGS IN
T HEORY AND P OLICY, 191 (Andrew von Hirsch and Andrew Ashworth eds, 2nd edn, 1998).
73 See Alice Drewery, Laws, Regularities and Exceptions 13 R ATIO 1, 11 (2000).
74 The Canadian Supreme Court has stated that ‘The fact that a woman has had intercourse on other occasions
does not in itself increase the logical probability that she consented to intercourse with the accused.’ R. v.
Seaboyer 83 D.L.R. 4th 193, 258 (1991). This sort of thinking is a common reason for rejecting the relevance of
sexual history to questions of consent. The idea seems to be that we should think about consent to sex at a very
fine level of focus, as a unique single event, and not as one which displays a propensity or can be fitted into a set
which would generate a frequency.
75 Alice Drewery, Dispositions and Ceteris Paribus Laws 52 B RITISH J OURNAL FOR THE P HILOSOPHY OF
S CIENCE 723, 732 (2001).
76 See S USAN H AACK , E VIDENCE AND I NQUIRY: T OWARDS R ECONSTRUCTION IN E PISTEMOLOGY, ch.
10 (1993). Haack criticises probabilistic accounts in Clues to the Puzzle of Scientific Evidence 5 P RINCIPIA 253,
271–276 (2001).
77 Another reason is that, it might be argued, key evidential concepts, such as standards of proof, require elaboration in terms of probability. But some commentators do question whether these are probabilistic concepts: see e.g. Allen, supra n. 8. It could also be argued that these concepts can only be accounted for in terms of subjective probability.
evidence. We have come to see, however, that things are not this simple. It is difficult
to come up with an account of objective probability which can be applied to all of the
things we want to reason about evidentially. To be sure, Swinburne’s account of logical
probability has wide scope, but it is open to question in what sense his is a theory of
probability. We have noted the vagueness of the probabilities it depends on. There will be
cases where the probabilities will only be comparative. When we have reached the stage
at which probabilities are non-numerical, is it clear that we are dealing with probability
at all? Now Swinburne’s account of logical probability is part of his broader account of
epistemic justification. Epistemic justification has impinged on our account in other places
too: it plays an important role in setting the context for Achinstein’s probabilities; it might
also be suggested that justification will play a role in providing answers to the reference
class problem. Given the role that justification plays in these accounts, why not just settle
for saying that we are better off forming beliefs about evidence which are justified?
One reason why just saying this is unsatisfactory is that there are so many different
accounts of epistemic justification. And many of them explain justification in theoretical
terms which do not offer us much practical advice on how to make sure that our beliefs are
well supported. In closing, however, I would like to say something very general about these
debates. One notion of justification which has considerable appeal relates justification to
epistemic responsibility. Epistemic justification involves being able to back up the claims
we make, to provide reasons for them.78 This notion is not necessarily incompatible
with an approach emphasizing the importance of having beliefs which fall into line with
objective probabilities. Nevertheless, the justificatory process of reason-giving does seem
to involve a rather different emphasis than the objective probability approach. For one
thing, the language in which we justify our beliefs is, most of the time, not the language of
probability. Perhaps our arguments about our beliefs could be recast to satisfy something
like Swinburne’s criteria of logical probability. But why recast our arguments? To subject
them to some sort of critique, perhaps; to show that on occasion we are not as justified
as we think we are. But given the vagueness of the probabilities involved in Swinburne’s
scheme, is it not possible that our ordinary justificatory arguments will sometimes—even
often—involve more precise claims than those that could be justified were we to translate
into probability talk? Might objective probability often be second best?
If our ordinary criteria of justification are terribly bad, then an approach based on
objective probability might still have some mileage in it. And this points to another reason
why talking of justification in terms of objective probability is attractive. The value of
having justified beliefs is that such beliefs are more likely to be true, so to be plausible
a theory of justification must involve criteria of justification which are in some way truth
conducive. Saying that beliefs are justified when they reflect objective probabilities looks
to deliver the link with truth rather easily. But it is much harder to argue that our ordinary
criteria of justification are likely to get us to the truth. And that leaves room for sceptics
of various sorts to question our evidentiary arguments. However, given the difficulties
involved in finding some workable conception of objective probability, we probably do
78 M ICHAEL W ILLIAMS , P ROBLEMS OF K NOWLEDGE : A N I NTRODUCTION TO E PISTEMOLOGY, 34–35
(2001).
not have much choice. If we want to convince people that our evidentiary arguments are
worth listening to, we need to be able to draw on a theory of epistemic justification. For
all the insights that it may offer into evidential reasoning, objective probability does not
necessarily provide us with a shortcut to that goal.
Conclusion
Goldman’s argument for the veritistic virtues of Bayes’ rule is intriguing. For one thing,
it reminds us that Bayes’ rule can only be part of the picture of evidential reasoning.
If we want to rely on the rule as part of a claim that some evidential arguments are
better than others, then an account of the probabilities with which it operates is needed.
Objective probabilities would do the necessary work here; but our analysis highlights
the difficulties in hitting upon an account of objective probability which will meet the
demands made of it in evidential reasoning. Our conclusions need not be all negative,
however. An examination of accounts of objective probability highlights some important
features of evidential reasoning—problems of classification and level of focus which need
to be met in our evidential arguments—which evidence scholars would do well to consider
more deeply. Our analysis also highlights ways in which there may be more to evidential
reasoning than just getting the probabilities right.
Acknowledgement
I am grateful to participants at the Cardozo School of Law Conference, Inference, Culture,
and Ordinary Thinking in Dispute Resolution, and in particular to Branden Fitelson, for
comments.