Stanford Encyclopedia of Philosophy
Bayesian Epistemology
‘Bayesian epistemology’ became an epistemological movement in the 20th century,
though its two main features can be traced back to the eponymous Reverend Thomas
Bayes (c. 1701-61). Those two features are: (1) the introduction of a formal apparatus
for inductive logic; (2) the introduction of a pragmatic self-defeat test (as illustrated by
Dutch Book Arguments) for epistemic rationality as a way of extending the justification
of the laws of deductive logic to include a justification for the laws of inductive logic.
The formal apparatus itself has two main elements: the use of the laws of probability as
coherence constraints on rational degrees of belief (or degrees of confidence) and the
introduction of a rule of probabilistic inference, a rule or principle of conditionalization.
Bayesian epistemology did not emerge as a philosophical program until the first formal
axiomatizations of probability theory in the first half of the 20th century. One important
application of Bayesian epistemology has been to the analysis of scientific practice in
Bayesian Confirmation Theory. In addition, a major branch of statistics, Bayesian
statistics, is based on Bayesian principles. In psychology, an important branch of
learning theory, Bayesian learning theory, is also based on Bayesian principles. Finally,
the idea of analyzing rational degrees of belief in terms of rational betting behavior led
to the 20th century development of a new kind of decision theory, Bayesian decision
theory, which is now the dominant theoretical model for both the descriptive and
normative analysis of decisions. The combination of its precise formal apparatus and its
novel pragmatic self-defeat test for justification makes Bayesian epistemology one of
the most important developments in epistemology in the 20th century, and one of the
most promising avenues for further progress in epistemology in the 21st century.
1. Deductive and Probabilistic Coherence and Deductive and Probabilistic Rules of
Inference
2. A Simple Principle of Conditionalization
3. Dutch Book Arguments
4. Bayes' Theorem and Bayesian Confirmation Theory
5. Potential Problems
6. Other Principles of Bayesian Epistemology
Bibliography
Other Internet Resources
Related Entries
1. Deductive and Probabilistic Coherence and Deductive and Probabilistic Rules of
Inference
There are two ways that the laws of deductive logic have been thought to provide
rational constraints on belief: (1) Synchronically, the laws of deductive logic can be
used to define the notion of deductive consistency and inconsistency. Deductive
inconsistency so defined determines one kind of incoherence in belief, which I refer to
as deductive incoherence. (2) Diachronically, the laws of deductive logic can constrain
admissible changes in belief by providing the deductive rules of inference. For example,
modus ponens is a deductive rule of inference that requires that one infer Q from
premises P and P → Q.
Bayesians propose additional standards of synchronic coherence -- standards of
probabilistic coherence -- and additional rules of inference -- probabilistic rules of
inference -- in both cases, to apply not to beliefs, but to degrees of belief (degrees of
confidence). For Bayesians, the most important standards of probabilistic coherence are
the laws of probability. For more on the laws of probability, see the following
supplementary article:
Supplement on Probability Laws
For Bayesians, the most important probabilistic rule of inference is given by a principle
of conditionalization.
2. A Simple Principle of Conditionalization
If unconditional probabilities (e.g. P(S)) are taken as primitive, the conditional
probability of S on T can be defined as follows:
Conditional Probability:
P(S/T) = P(S&T)/P(T).
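The definition can be sketched numerically. In the following illustration (mine, not part of the entry), unconditional probabilities are taken as primitive by assigning weights to a toy space of possible worlds; the particular worlds and values are assumptions chosen for the example.

```python
# A minimal numerical sketch (illustrative values, not from the entry):
# conditional probability defined from unconditional probabilities,
# P(S/T) = P(S&T)/P(T), over a toy space of possible worlds.

worlds = {
    # (S true?, T true?) -> unconditional probability
    (True, True): 0.2,
    (True, False): 0.3,
    (False, True): 0.1,
    (False, False): 0.4,
}

def prob(event):
    """Unconditional probability of the set of worlds satisfying event."""
    return sum(p for w, p in worlds.items() if event(w))

def cond_prob(s, t):
    """P(S/T) = P(S&T)/P(T), defined only when P(T) > 0."""
    p_t = prob(t)
    if p_t == 0:
        raise ValueError("P(T) must be greater than zero")
    return prob(lambda w: s(w) and t(w)) / p_t

S = lambda w: w[0]
T = lambda w: w[1]
print(cond_prob(S, T))  # P(S&T)/P(T) = 0.2/0.3
```

The raise on P(T) = 0 reflects the standard restriction: on this definition, conditional probability is simply undefined when the condition has probability zero.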
By itself, the definition of conditional probability is of little epistemological
significance. It acquires epistemological significance only in conjunction with a further
epistemological assumption:
Simple Principle of Conditionalization:
If one begins with initial or prior probabilities Pi, and one acquires new evidence which
can be represented as becoming certain of an evidentiary statement E (assumed to state
the totality of one's new evidence and to have initial probability greater than zero), then
rationality requires that one systematically transform one's initial probabilities to
generate final or posterior probabilities Pf by conditionalizing on E -- that is: Where S is
any statement, Pf(S) = Pi(S/E).[1]
In epistemological terms, this Simple Principle of Conditionalization requires that the
effects of evidence on rational degrees of belief be analyzed in two stages: The first is
non-inferential. It is the change in the probability of the evidence statement E from Pi(E),
assumed to be greater than zero and less than one, to Pf(E) = 1. The second is a
probabilistic inference of conditionalizing on E from initial probabilities (e.g., Pi(S)) to
final probabilities (e.g., Pf(S) = Pi(S/E)).
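The two stages can be sketched in code. In this illustration (my own; the toy distribution is an assumption, not from the entry), stage one sets Pf(E) = 1 and stage two renormalizes, so that Pf(S) = Pi(S/E) for every statement S.

```python
# A minimal sketch (illustrative values, not from the entry) of the Simple
# Principle of Conditionalization: on becoming certain of E, set
# Pf(S) = Pi(S/E) for every statement S.

priors = {
    # (S true?, E true?) -> initial probability Pi
    (True, True): 0.3,
    (True, False): 0.2,
    (False, True): 0.1,
    (False, False): 0.4,
}

def conditionalize(priors, e):
    """Stage 1: Pf(E) = 1. Stage 2: renormalize the E-worlds by Pi(E)."""
    p_e = sum(p for w, p in priors.items() if e(w))
    assert 0 < p_e < 1, "E must have prior probability strictly between 0 and 1"
    return {w: (p / p_e if e(w) else 0.0) for w, p in priors.items()}

E = lambda w: w[1]
S = lambda w: w[0]
posteriors = conditionalize(priors, E)
pf_s = sum(p for w, p in posteriors.items() if S(w))
# Pf(S) = Pi(S&E)/Pi(E) = 0.3/0.4, and Pf(E) is now 1
```

Note that all the probability mass ends up on the E-worlds: after conditionalizing, Pf(E) = 1, which is exactly the certainty assumption the Simple Principle requires.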
Problems with the Simple Principle (to be discussed below) have led many Bayesians to
qualify the Simple Principle by limiting its scope. In addition, some Bayesians follow
Jeffrey in generalizing the Simple Principle to apply to cases in which one's new
evidence is less than certain (also discussed below). What unifies Bayesian
epistemology is a conviction that conditionalizing (perhaps of a generalized sort) is
rationally required in some important contexts -- that is, that some sort of
conditionalization principle is an important principle governing rational changes in
degrees of belief.
3. Dutch Book Arguments
Many arguments have been given for regarding the probability laws as coherence
conditions on degrees of belief and for taking some principle of conditionalization to be
a rule of probabilistic inference. The most distinctively Bayesian are those referred to as
Dutch Book Arguments. Dutch Book Arguments represent the possibility of a new kind
of justification for epistemological principles.
A Dutch Book Argument relies on some descriptive or normative assumptions to
connect degrees of belief with willingness to wager -- for example, a person with degree
of belief p in sentence S is assumed to be willing to pay up to and including $p for a
unit wager on S (i.e., a wager that pays $1 if S is true) and is willing to sell such a wager
for any price equal to or greater than $p (one is assumed to be equally willing to buy or
sell such a wager when the price is exactly $p).[2] A Dutch Book is a combination of
wagers which, on the basis of deductive logic alone, can be shown to entail a sure loss.
A synchronic Dutch Book is a Dutch Book combination of wagers that one would
accept all at the same time. A diachronic Dutch Book is a Dutch Book combination of
wagers that one will be motivated to enter into at different times.
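A synchronic Dutch Book can be shown with simple arithmetic. The numbers below are my own illustration (not from the entry): an agent whose degrees of belief in S and ~S sum to more than 1 violates the probability laws, and under the behavioral assumption above a bookie can sell her a pair of wagers on which she loses no matter what.

```python
# A minimal numerical sketch (illustrative values, not from the entry) of a
# synchronic Dutch Book against incoherent degrees of belief.

belief_S = 0.6      # assumed degree of belief in S
belief_not_S = 0.6  # assumed degree of belief in ~S (incoherent: 0.6 + 0.6 > 1)

# Per the behavioral assumption, she pays $p for a unit wager on any
# sentence she believes to degree p; the bookie sells her one wager on each.
cost = belief_S + belief_not_S  # $1.20 paid in total

# Exactly one of S, ~S is true, so the pair pays exactly $1 in every case.
nets = []
for s_true in (True, False):
    payout = (1.0 if s_true else 0.0) + (0.0 if s_true else 1.0)
    nets.append(payout - cost)

assert all(n < 0 for n in nets)  # a sure loss of $0.20, whatever the truth about S
```

That the loss is certain follows from deductive logic alone: no fact about the world needs to be consulted to see that the payouts sum to $1 in every case.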
Ramsey and de Finetti first employed synchronic Dutch Book Arguments in support of
the probability laws as standards of synchronic coherence for degrees of belief. The first
diachronic Dutch Book Argument in support of a principle of conditionalization was
reported by Teller, who credited David Lewis. The Lewis/Teller argument depends on a
further descriptive or normative assumption about conditional probabilities due to de
Finetti: An agent with conditional probability P(S/T) = p is assumed to be willing to pay
any price up to and including $p for a unit wager on S conditional on T. (A unit wager
on S conditional on T is one that is called off, with the purchase price returned to the
purchaser, if T is not true. If T is true, the wager is not called off and the wager pays $1
if S is also true.) On this interpretation of conditional probabilities, Lewis, as reported
by Teller, was able to show how to construct a diachronic Dutch Book against anyone
who, on learning only that T, would predictably change his/her degree of belief in S to
Pf(S) > Pi(S/T); and how to construct a diachronic Dutch Book against anyone who, on
learning only that T, would predictably change his/her degree of belief in S to Pf(S) <
Pi(S/T). For illustrations of the strategy of the Ramsey/de Finetti and the Lewis/Teller
arguments, see the following supplementary article:
Supplement on Dutch Book Arguments
There has been much discussion of exactly what it is that Dutch Book Arguments are
supposed to show. On the literal-minded interpretation, their significance is that they
show that those whose degrees of belief violate the probability laws or those whose
probabilistic inferences predictably violate a principle of conditionalization are liable to
enter into wagers on which they are sure to lose. There is very little to be said for the
literal-minded interpretation, because there is no basis for claiming that rationality
requires that one be willing to wager in accordance with the behavioral assumptions
described above. An agent could simply refuse to accept Dutch Book combinations of
wagers.
A more plausible interpretation of Dutch Book Arguments is that they are to be
understood hypothetically, as symptomatic of what has been termed pragmatic self-defeat. On this interpretation, Dutch Book Arguments are a kind of heuristic for
determining when one's degrees of belief have the potential to be pragmatically self-defeating. The problem is not that one who violates the Bayesian constraints is likely to
enter into a combination of wagers that constitute a Dutch Book, but that, on any
reasonable way of translating one's degrees of belief into action, there is a potential for
one's degrees of belief to motivate one to act in ways that make things worse than they
might have been, when, as a matter of logic alone, it can be determined that alternative
actions would have made things better (on one's own evaluations of better and worse).
Another way of understanding the problem of susceptibility to a Dutch Book is due to
Ramsey: Someone who is susceptible to a Dutch Book evaluates identical bets
differently based on how they are described. Putting it this way makes susceptibility to
Dutch Books sound irrational. But this standard of rationality would make it irrational
not to recognize all the logical consequences of what one believes. This is the
assumption of logical omniscience (discussed below).
If successful, Dutch Book Arguments would reduce the justification of the principles of
Bayesian epistemology to two elements: (1) an account of the appropriate relationship
between degrees of belief and choice; and (2) the laws of deductive logic. Because it
would seem that the truth about the appropriate relationship between the degrees of
belief and choice is independent of epistemology, Dutch Book Arguments hold out the
potential of justifying the principles of Bayesian epistemology in a way that requires no
other epistemological resources than the laws of deductive logic. For this reason, it
makes sense to think of Dutch Book Arguments as indirect, pragmatic arguments for
according the principles of Bayesian epistemology much the same epistemological
status as the laws of deductive logic. Dutch Book Arguments are a truly distinctive
contribution made by Bayesians to the methodology of epistemology.
It should also be mentioned that some Bayesians have defended their principles more
directly, with non-pragmatic arguments. In addition to reporting Lewis's Dutch Book
Argument, Teller offers a non-pragmatic defense of Conditionalization. There have
been many proposed non-pragmatic defenses of the probability laws, the most
compelling of which is due to Joyce. All such defenses, whether pragmatic or non-pragmatic, produce a puzzle for Bayesian epistemology: The principles of Bayesian
epistemology are typically proposed as principles of inductive reasoning. But if the
principles of Bayesian epistemology depend ultimately for their justification solely on
the laws of deductive logic, what reason is there to think that they have any inductive
content? That is to say, what reason is there to believe that they do anything more than
extend the laws of deductive logic from beliefs to degrees of belief? It should be
mentioned, however, that even if Bayesian epistemology only extended the laws of
deductive logic to degrees of belief, that alone would represent an extremely important
advance in epistemology.
4. Bayes' Theorem and Bayesian Confirmation Theory
This section reviews some of the most important results in the Bayesian analysis of
scientific practice -- Bayesian Confirmation Theory. It is assumed that all statements to
be evaluated have prior probability greater than zero and less than one.
Bayes' Theorem and a Corollary
Bayes' Theorem is a straightforward consequence of the probability axioms and the
definition of conditional probability:
Bayes' Theorem:
P(S/T) = P(T/S) × P(S)/P(T) [where P(T) is assumed to be greater than zero]
The epistemological significance of Bayes' Theorem is that it provides a straightforward
corollary to the Simple Principle of Conditionalization. Where the final probability of a
hypothesis H is generated by conditionalizing on evidence E, Bayes' Theorem provides
a formula for the final probability of H in terms of the prior or initial likelihood of H on
E (Pi(E/H)) and the prior or initial probabilities of H and E:
Corollary of the Simple Principle of Conditionalization:
Pf(H) = Pi(H/E) = Pi(E/H) × Pi(H)/Pi(E).
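The corollary is easily computed. The following sketch uses illustrative numbers of my own choosing (not from the entry), in which H renders E quite probable while E is fairly surprising a priori, so conditionalizing on E raises H's probability considerably.

```python
# A minimal sketch of the corollary (illustrative values, not from the entry):
# Pf(H) = Pi(E/H) * Pi(H) / Pi(E).

def posterior(likelihood, prior_h, prior_e):
    """Final probability of H after conditionalizing on E (Pi(E) > 0)."""
    assert prior_e > 0, "Pi(E) must be greater than zero"
    return likelihood * prior_h / prior_e

pf_h = posterior(likelihood=0.9, prior_h=0.2, prior_e=0.3)
# Pf(H) = 0.9 * 0.2 / 0.3 = 0.6: the likelihood ratio Pi(E/H)/Pi(E) = 3
# triples H's prior probability.
```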
Due to the influence of Bayesianism, likelihood is now a technical term of art in
confirmation theory. As used in this technical sense, likelihoods can be very useful.
Often, when the conditional probability of H on E is in doubt, the likelihood of H on E
can be computed from the theoretical assumptions of H.
Bayesian Confirmation Theory
A. Confirmation and disconfirmation. In Bayesian Confirmation Theory, it is said that
evidence confirms (or would confirm) hypothesis H (to at least some degree) just in
case the prior probability of H conditional on E is greater than the prior unconditional
probability of H: Pi(H/E) > Pi(H). E disconfirms (or would disconfirm) H if the prior
probability of H conditional on E is less than the prior unconditional probability of H.
B. Confirmation and disconfirmation by entailment. Whenever a hypothesis H logically
entails evidence E, E confirms H. This follows from the fact that to determine the truth
of E is to rule out a possibility assumed to have non-zero prior probability that is
incompatible with H -- the possibility that ~E. A corollary is that, where H entails E, ~E
would disconfirm H, by reducing its probability to zero. The most influential model of
explanation in science is the hypothetico-deductive model (e.g., Hempel). Thus, one of
the most important sources of support for Bayesian Confirmation Theory is that it can
explain the role of hypothetico-deductive explanation in confirmation.
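The arithmetic behind confirmation by entailment is worth making explicit. In this sketch (illustrative numbers of my own), the key fact is that when H entails E, P(H&E) = P(H), so conditionalizing on E multiplies H's probability by 1/P(E) > 1, while conditionalizing on ~E sends it to zero.

```python
# A minimal arithmetical sketch (illustrative values, not from the entry):
# confirmation and disconfirmation by entailment.

p_h = 0.2   # assumed prior probability of H
p_e = 0.5   # assumed prior probability of E; since H entails E, P(E) >= P(H)

p_h_given_e = p_h / p_e   # P(H&E)/P(E) = P(H)/P(E) = 0.4 > P(H): E confirms H
p_h_given_not_e = 0.0     # H is incompatible with ~E: ~E refutes H outright

assert p_h_given_e > p_h
```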
C. Confirmation of logical equivalents. If two hypotheses H1 and H2 are logically
equivalent, then evidence E will confirm both equally. This follows from the fact that
logically equivalent statements always are assigned the same probability.
D. The confirmatory effect of surprising or diverse evidence. From the corollary above,
it follows that whether E confirms (or disconfirms) H depends on whether E is more
probable (or less probable) conditional on H than it is unconditionally -- that is, on
whether:
(b1) P(E/H)/P(E) > 1.
An intuitive way of understanding (b1) is to say that it states that E would be more
expected (or less surprising) if it were known that H were true. So if E is surprising, but
would not be surprising if we knew H were true, then E will significantly confirm H.
Thus, Bayesians explain the tendency of surprising evidence to confirm hypotheses on
which the evidence would be expected.
Similarly, because it is reasonable to think that evidence E1 makes other evidence of the
same kind much more probable, after E1 has been determined to be true, other evidence
of the same kind E2 will generally not confirm hypothesis H as much as other diverse
evidence E3, even if H is equally likely on both E2 and E3. The explanation is that
where E1 makes E2 much more probable than E3 (Pi(E2/E1) >> Pi(E3/E1)), there is less
potential for the discovery that E2 is true to raise the probability of H than there is for
the discovery that E3 is true to do so.
E. Relative confirmation and likelihood ratios. Often it is important to be able to
compare the effect of evidence E on two competing hypotheses, Hj and Hk, without
having also to consider its effect on other hypotheses that may not be so easy to
formulate or to compare with Hj and Hk. From the first corollary above, the ratio of the
final probabilities of Hj and Hk would be given by:
Ratio Formula:
Pf(Hj)/Pf(Hk) = [Pi(E/Hj) × Pi(Hj)]/[Pi(E/Hk) × Pi(Hk)]
If the odds of Hj relative to Hk are defined as ratio of their probabilities, then from the
Ratio Formula it follows that, in a case in which change in degrees of belief results from
conditionalizing on E, the final odds (Pf(Hj)/Pf(Hk)) result from multiplying the initial
odds (Pi(Hj)/Pi(Hk)) by the likelihood ratio (Pi(E/Hj)/Pi(E/Hk)). Thus, in pairwise
comparisons of the odds of hypotheses, the likelihood ratio is the crucial determinant of
the effect of the evidence on the odds.
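The Ratio Formula in odds form is a one-line computation. The numbers here are my own illustration (not from the entry): starting from even odds, evidence four times as likely on Hj as on Hk moves the odds to 4:1.

```python
# A minimal sketch of the Ratio Formula in odds form (illustrative values,
# not from the entry): final odds = initial odds * likelihood ratio.

def update_odds(initial_odds, likelihood_j, likelihood_k):
    """Multiply the initial odds of Hj against Hk by Pi(E/Hj)/Pi(E/Hk)."""
    return initial_odds * (likelihood_j / likelihood_k)

initial_odds = 0.5 / 0.5   # the agent starts with even odds between Hj and Hk
final_odds = update_odds(initial_odds, likelihood_j=0.8, likelihood_k=0.2)
# E is four times as likely on Hj as on Hk, so the final odds are 4:1
```

Notice that the prior probabilities of the other, possibly unformulated, hypotheses never enter the calculation; that is the advantage of working with odds.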
F. The typical differential effect of positive evidence and negative evidence. Hempel
first pointed out that we typically expect the hypothesis that all ravens are black to be
confirmed to some degree by the observation of a black raven, but not by the
observation of a non-black, non-raven. Let H be the hypothesis that all ravens are black.
Let E1 describe the observation of a non-black, non-raven. Let E2 describe the
observation of a black raven. Bayesian Confirmation Theory actually holds that both E1
and E2 may provide some confirmation for H. Recall that E1 supports H just in case
Pi(E1/H)/Pi(E1) > 1. It is plausible to think that this ratio is ever so slightly greater than
one. On the other hand, E2 would seem to provide much greater confirmation to H,
because, in this example, it would be expected that Pi(E2/H)/Pi(E2) >>
Pi(E1/H)/Pi(E1).
These are only a sample of the results that have provided support for Bayesian
Confirmation Theory as a theory of rational inference for science. For further examples,
see Howson and Urbach. It should also be mentioned that an important branch of
statistics, Bayesian statistics, is based on the principles of Bayesian epistemology.
5. Potential Problems
This section reviews some of the most important potential problems for Bayesian
Confirmation Theory and for Bayesian epistemology generally. No attempt is made to
evaluate their seriousness here, though there is no generally agreed upon Bayesian
solution to any of them.
5.1 Objections to the Probability Laws as Standards of Synchronic Coherence
A. The assumption of logical omniscience. The assumption that degrees of belief satisfy
the probability laws implies omniscience about deductive logic, because the probability
laws require that all deductive logical truths have probability one, all deductive
inconsistencies have probability zero, and the probability of any conjunction of
sentences be no greater than any of its deductive consequences. This seems to be an
unrealistic standard for human beings. Hacking and Garber have made proposals to
relax the assumption of logical omniscience. Because relaxing that assumption would
block the derivation of almost all the important results in Bayesian epistemology, most
Bayesians maintain the assumption of logical omniscience and treat it as an ideal to
which human beings can only more or less approximate.
B. The problem of the priors. Are there constraints on prior probabilities other than the
probability laws? Consider Goodman's "new riddle of induction": In the past all
observed emeralds have been green. Do those observations provide any more support
for the generalization that all emeralds are green than they do for the generalization that
all emeralds are grue (green if observed before now; blue if observed later); or do they
provide any more support for the prediction that the next emerald observed will be
green than for the prediction that the next emerald observed will be grue (i.e., blue)?
This question divides Bayesians into two categories:
(a) Objective Bayesians (e.g., Rosenkrantz) hold that there are rational constraints on
prior probabilities that require that observations support the green-generalization and the
green-prediction much more strongly than the grue-generalization and the grue-prediction. Objective Bayesians are the intellectual heirs of the advocates of a Principle
of Indifference for probability. Rosenkrantz builds his account on the maximum entropy
rule proposed by E.T. Jaynes. The difficulties in formulating an acceptable Principle of
Indifference have led most Bayesians to abandon Objective Bayesianism.
(b) Subjective Bayesians (e.g., de Finetti) do not believe that rationality alone places
enough constraints on one's prior probabilities to make them objective. For Subjective
Bayesians, it is up to our own free choice or to evolution or to socialization or some
other non-rational process to determine one's prior probabilities. Rationality only
requires that the prior probabilities satisfy relatively modest synchronic coherence
conditions.
Subjective Bayesians believe that their position is not objectionably subjective, because
of results (e.g., Doob or Gaifman and Snir) proving that even subjects beginning with
very different prior probabilities will tend to converge in their final probabilities, given
a suitably long series of shared observations. These convergence results are not
completely reassuring, however, because they only apply to agents who already have
significant agreement in their priors and they do not assure convergence in any
reasonable amount of time. Also, they typically only guarantee convergence on the
probability of predictions, not on the probability of theoretical hypotheses. For example,
Carnap favored prior probabilities that would never raise above zero the probability of a
generalization over a potentially infinite number of instances (e.g., that all crows are
black), no matter how many observations of positive instances (e.g., black crows) one
might make without finding any negative instances (i.e., non-black crows). In addition,
the convergence results depend on the assumption that the only changes in probabilities
that occur are those that are the non-inferential results of observation on evidential
statements and those that result from conditionalization on such evidential statements.
Because of the problem of the priors, it is an open question whether Bayesian
Confirmation Theory has inductive content, or whether it merely translates the
framework for rational belief provided by deductive logic into a corresponding
framework for rational degrees of belief.
5.2 Objections to The Simple Principle of Conditionalization as a Rule of Inference,
Especially as an Explanation of Theory Confirmation in Science
A. The problem of uncertain evidence. The Simple Principle of Conditionalization
requires that the acquisition of evidence be representable as changing one's degree of
belief in a statement E to one -- that is, to certainty. But many philosophers would
object to assigning probability of one to any contingent statement, even an evidential
statement, because, for example, it is well-known that scientists sometimes give up
previously accepted evidence. Jeffrey has proposed a generalization of the Principle of
Conditionalization that yields that principle as a special case. Jeffrey's idea is that what
is crucial about observation is not that it yields certainty, but that it generates a non-inferential change in the probability of an evidential statement E and its negation ~E
(assumed to be the locus of all the non-inferential changes in probability) from initial
probabilities between zero and one to Pf(E) and Pf(~E) = [1 − Pf(E)]. Then on Jeffrey's
account, after the observation, the rational degree of belief to place in an hypothesis H
would be given by the following principle:
Principle of Jeffrey Conditionalization:
Pf(H) = Pi(H/E) × Pf(E) + Pi(H/~E) × Pf(~E) [where E and H are both assumed to have
prior probabilities between zero and one]
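Jeffrey's principle can be sketched directly. The numbers below are my own illustration (not from the entry): an observation in poor light raises P(E) to 0.8 without yielding certainty, and the final probability of H is the Pf(E)-weighted mixture of the initial conditional probabilities.

```python
# A minimal sketch of Jeffrey Conditionalization (illustrative values, not
# from the entry): Pf(H) = Pi(H/E) * Pf(E) + Pi(H/~E) * Pf(~E).

def jeffrey_update(pi_h_given_e, pi_h_given_not_e, pf_e):
    """Final probability of H when observation moves P(E) to pf_e."""
    return pi_h_given_e * pf_e + pi_h_given_not_e * (1 - pf_e)

# A glimpse in poor light raises P(E) to 0.8 without certainty:
pf_h = jeffrey_update(pi_h_given_e=0.9, pi_h_given_not_e=0.3, pf_e=0.8)
# Pf(H) = 0.9*0.8 + 0.3*0.2 = 0.78

# With Pf(E) = 1, Jeffrey's rule reduces to the Simple Principle:
assert jeffrey_update(0.9, 0.3, 1.0) == 0.9
```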
Counting in favor of Jeffrey's Principle is its theoretical elegance. Counting against it is
the practical problem that it requires that one be able to completely specify the direct
non-inferential effects of an observation, something it is doubtful that anyone has ever
done. Skyrms has given it a Dutch Book defense.
B. The problem of old evidence. On a Bayesian account, the effect of evidence E in
confirming (or disconfirming) a hypothesis is solely a function of the increase in
probability that accrues to E when it is first determined to be true. This raises the
following puzzle for Bayesian Confirmation Theory discussed extensively by Glymour:
Suppose that E is an evidentiary statement that has been known for some time -- that is,
that it is old evidence; and suppose that H is a scientific theory that has been under
consideration for some time. One day it is discovered that H implies E. In scientific
practice, the discovery that H implied E would typically be taken to provide some
degree of confirmatory support for H. But Bayesian Confirmation Theory seems unable
to explain how a previously known evidentiary statement E could provide any new
support for H. For conditionalization to come into play, there must be a change in the
probability of the evidence statement E. Where E is old evidence, there is no change in
its probability. Some Bayesians who have tried to solve this problem (e.g., Garber) have
typically tried to weaken the logical omniscience assumption to allow for the possibility
of discovering logical relations (e.g., that H and suitable auxiliary assumptions imply
E). As mentioned above, relaxing the logical omniscience assumption threatens to block
the derivation of almost all of the important results in Bayesian epistemology, so there
is no general agreement among Bayesians on how to solve this problem. Other
Bayesians (e.g., Lange) employ the Bayesian formalism as a tool in the rational
reconstruction of the evidentiary support for a scientific hypothesis, where it is
irrelevant to the rational reconstruction whether the evidence was discovered before or
after the theory was initially formulated.
C. The problem of rigid conditional probabilities. When one conditionalizes, one
applies the initial conditional probabilities to determine final unconditional
probabilities. Throughout, the conditional probabilities themselves do not change; they
remain rigid. Examples of the Problem of Old Evidence are but one of a variety of cases
in which it seems that it can be rational to change one's initial conditional probabilities.
Thus, many Bayesians reject the Simple Principle of Conditionalization in favor of a
qualified principle, limited to situations in which one does not change one's initial
conditional probabilities. There is no generally accepted account of when it is rational to
maintain rigid initial conditional probabilities and when it is not.
D. The problem of prediction vs. accommodation. Related to the problem of Old
Evidence is the following potential problem: Consider two different scenarios. In the
first, theory H was developed in part to accommodate (i.e., to imply) some previously
known evidence E. In the second, theory H was developed at a time when E was not
known. It was because E was derived as a prediction from H that a test was performed
and E was found to be true. It seems that E's being true would provide a greater degree
of confirmation for H if the truth of E had been predicted by H than if H had been
developed to accommodate the truth of E. There is no general agreement among
Bayesians about how to resolve this problem. Some (e.g., Horwich) argue that
Bayesianism implies that there is no important difference between prediction and
accommodation, and try to defend that implication. Others (e.g., Maher) argue that there
is a way to understand Bayesianism so as to explain why there is an important
difference between prediction and accommodation.
E. The problem of new theories. Suppose that there is one theory H1 that is generally
regarded as highly confirmed by the available evidence E. It is possible that simply the
introduction of an alternative theory H2 can lead to an erosion of H1's support. It is
plausible to think that Copernicus' introduction of the heliocentric hypothesis had this
effect on the previously unchallenged Ptolemaic earth-centered astronomy. This sort of
change cannot be explained by conditionalization. It is for this reason that many
Bayesians prefer to focus on probability ratios of hypotheses (see the Ratio Formula
above), rather than their absolute probability; but it is clear that the introduction of a
new theory could also alter the probability ratio of two hypotheses -- for example, if it
implied one of them as a special case.
6. Other Principles of Bayesian Epistemology
Other principles of Bayesian epistemology have been proposed, but none has garnered
anywhere near a majority of support among Bayesians. The most important proposals
are merely mentioned here. It is beyond the scope of this entry to discuss them in any
detail.
A. Other principles of synchronic coherence. Are the probability laws the only
standards of synchronic coherence for degrees of belief? Van Fraassen has proposed an
additional principle (Reflection or Special Reflection), which he now regards as a
special case of an even more general principle (General Reflection).[3]
B. Other probabilistic rules of inference. There seem to be at least two different
concepts of probability: the probability that is involved in degrees of belief (epistemic
or subjective probability) and the probability that is involved in random events, such as
the tossing of a coin (chance). De Finetti thought this was a mistake and that there was
only one kind of probability, subjective probability. For Bayesians who believe in both
kinds of probability, an important question is: What is (or should be) the relation
between them? The answer can be found in the various proposals for principles of direct
inference in the literature. Typically, principles of direct inference are proposed as
principles for inferring subjective or epistemic probabilities from beliefs about objective
chance (e.g., Pollock). Lewis reverses the direction of inference, and proposes to infer
beliefs about objective chance from subjective or epistemic probabilities, via his
(Reformulated) Principal Principle.[4]
C. Principles of rational acceptance. What is the relation between beliefs and degrees of
belief? Jeffrey proposes to give up the notion of belief (at least for empirical statements)
and make do with only degrees of belief. Other authors (e.g., Levi, Maher, Kaplan)
propose principles of rational acceptance as part of accounts of when it is rational to
accept a statement as true, not merely to regard it as probable.
Bibliography
Bayes, Thomas, "An Essay Towards Solving a Problem in the Doctrine of Chances",
Philosophical Transactions of the Royal Society of London 53 (1764): 370-418, reprinted
in E.S. Pearson and M.G. Kendall, eds., Studies in the History of Statistics and
Probability (London: Charles Griffin, 1970).
Carnap, Rudolf, Logical Foundations of Probability (Chicago: University of Chicago
Press; 1950).
Carnap, Rudolf, The Continuum of Inductive Methods (Chicago: University of Chicago
Press; 1952).
de Finetti, Bruno, "La Prévision: ses lois logiques, ses sources subjectives", Annales de
l'Institut Henri Poincaré 7 (1937): 1-68; translated into English and reprinted in Kyburg
and Smokler, Studies in Subjective Probability (Huntington, NY: Krieger; 1980).
Doob, J.L., "What is a Martingale?", American Mathematical Monthly 78 (1971): 451-462.
Earman, John, Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory
(Cambridge, MA: MIT Press; 1992).
Gaifman, H., and Snir, M., "Probabilities over Rich Languages", Journal of Symbolic
Logic 47 (1982): 495-548.
Garber, Daniel, "Old Evidence and Logical Omniscience in Bayesian Confirmation
Theory", in J. Earman, ed., Testing Scientific Theories, Midwest Studies in the
Philosophy of Science, Vol. X (Minneapolis: University of Minnesota Press; 1983): 99-131.
Goodman, Nelson, Fact, Fiction, and Forecast (Cambridge: Harvard University Press;
1983).
Glymour, Clark, Theory and Evidence (Princeton: Princeton University Press; 1980).
Hacking, Ian, "Slightly More Realistic Personal Probability", Philosophy of Science 34
(1967): 311-325.
Hempel, Carl G., Aspects of Scientific Explanation (New York: Free Press; 1965).
Horwich, Paul, Probability and Evidence (Cambridge: Cambridge University Press;
1982).
Howson, Colin, and Peter Urbach, Scientific Reasoning: The Bayesian Approach, 2nd
ed. (Chicago: Open Court; 1993).
Jaynes, E.T., "Prior Probabilities", Institute of Electrical and Electronic Engineers
Transactions on Systems Science and Cybernetics, SSC-4 (1968): 227-241.
Jeffrey, Richard, The Logic of Decision, 2nd ed. (Chicago: University of Chicago Press;
1983).
Jeffrey, Richard, Probability and the Art of Judgment (Cambridge: Cambridge
University Press; 1992).
Joyce, James M., "A Nonpragmatic Vindication of Probabilism", Philosophy of Science
65 (1998): 575-603.
Joyce, James M., The Foundations of Causal Decision Theory (Cambridge: Cambridge
University Press; 1999).
Kaplan, Mark, Decision Theory as Philosophy (Cambridge: Cambridge University
Press; 1996).
Lange, Marc, "Calibration and the Epistemological Role of Bayesian
Conditionalization", Journal of Philosophy 96 (1999): 294-324.
Levi, Isaac, The Enterprise of Knowledge (Cambridge, Mass.: MIT Press; 1980).
Levi, Isaac, The Fixation Of Belief And Its Undoing (Cambridge: Cambridge
University Press; 1991).
Lewis, David, "A Subjectivist's Guide to Objective Chance", in Richard C. Jeffrey, ed.,
Studies in Inductive Logic and Probability, vol. 2 (Berkeley: University of California
Press; 1980): 263-293.
Maher, Patrick, "Prediction, Accommodation, and the Logic of Discovery", PSA, vol. 1
(1988): 273-285.
Maher, Patrick, Betting on Theories (Cambridge: Cambridge University Press; 1993).
Pollock, John L., Nomic Probability and the Foundations of Induction (Oxford: Oxford
University Press; 1990).
Popper, Karl, The Logic of Scientific Discovery, 3rd ed. (London: Hutchinson; 1968).
Ramsey, Frank P., "Truth and Probability", in Richard B. Braithwaite, ed., The
Foundations of Mathematics and Other Logical Essays (London: Routledge and Kegan
Paul; 1931): 156-198.
Rényi, A., "On a New Axiomatic Theory of Probability", Acta Mathematica Academiae
Scientiarum Hungaricae 6 (1955): 285-335.
Rosenkrantz, R.D., Foundations and Applications of Inductive Probability (Atascadero,
CA: Ridgeview Publishing; 1981).
Savage, Leonard, The Foundations of Statistics, 2nd ed. (New York: Dover; 1972).
Skyrms, Brian, Pragmatics and Empiricism (New Haven: Yale University Press; 1984).
Skyrms, Brian, The Dynamics of Rational Deliberation (Cambridge, Mass.: Harvard
University Press; 1990).
Teller, Paul, "Conditionalization, Observation, and Change of Preference", in W. Harper
and C.A. Hooker, eds., Foundations of Probability Theory, Statistical Inference, and
Statistical Theories of Science (Dordrecht: D. Reidel; 1976).
Van Fraassen, Bas C., "Calibration: A Frequency Justification for Personal Probability",
in R.S. Cohen and L. Laudan, eds., Physics, Philosophy, and Psychoanalysis: Essays in
Honor of Adolf Grünbaum (Dordrecht: Reidel; 1983).
Van Fraassen, Bas C., "Belief and the Will", Journal of Philosophy 81 (1984): 235-256.
Van Fraassen, Bas C., "Belief and the Problem of Ulysses and the Sirens",
Philosophical Studies 77 (1995): 7-37.
Zynda, Lyle, "Old Evidence and New Theories", Philosophical Studies 77 (1995): 67-95.
Other Internet Resources
[Please contact the author with suggestions]
Related Entries
Bayes' Theorem | logic: inductive | probability, interpretations of
Acknowledgements
In the preparation of this article, I have benefited from comments from Marc Lange,
Stephen Glaister, Laurence BonJour, and James Joyce.
Copyright © 2001
William Talbott
[email protected]