ERIK J. OLSSON
AVOIDING EPISTEMIC HELL: LEVI ON PRAGMATISM AND
INCONSISTENCY
ABSTRACT. Isaac Levi has claimed that our reliance on the testimony of others, and
on the testimony of the senses, commonly produces inconsistency in our set of full beliefs. This happens if what is reported is inconsistent with what we believe to be the
case. Drawing on a conception of the role of beliefs in inquiry going back to Dewey,
Levi has maintained that the inconsistent belief corpus is a state of “epistemic hell”: it
is useless as a basis for inquiry and deliberation. As he has also noticed, the compatibility
of these two elements of his pragmatist epistemology could be called into question. For
if inconsistency means hell, how can it ever be rational to enter that state, and on what
basis could we attempt to regain consistency? Levi, nonetheless, has tried to show that the
conflict is only apparent and that no changes of his theory are necessary. In the main part
of the paper I argue, by contrast, that his attempts to reconcile these components of his
view are unsuccessful. The conflict is real and thus presents a genuine threat to Deweyan
pragmatism, as understood by Levi. After an attempt to pinpoint exactly where the source
of the problem lies, I explore some possibilities for how to come to grips with it. I conclude
that Levi can keep his fundamental thesis concerning the role of beliefs in inquiry and
deliberation, provided that he (i) gives up the view that the agent can legitimately escape
from inconsistency, and (ii) modifies his account of prediction alias deliberate expansion by
acknowledging a third desideratum, besides probability and informational value, namely,
not to cause permanent breakdown further down the line of inquiry. The result is a position
which is more similar to Peter Gärdenfors’s than is Levi’s original theory, while retaining
the basic insights of the latter.
1. INTRODUCTION
Just how bad is it to be inconsistent? Is inconsistency among our beliefs
something we should always shun? Or can we sometimes end up having contradictory beliefs without having done anything wrong? Levi has
answered the last question affirmatively. Agents will, he claims, sometimes
be justified in trusting the testimony of other agents and the testimony
of the senses. Such legitimate trust will lead to inconsistency in the set
of full beliefs when the information source reports something that is in
conflict with what the agent fully believes to be the case. For instance, if I
believe A and you, whom I trust, tell me that non-A is true, I will end up
in contradiction. Following C. S. Peirce, Levi views such trust as a form of
habitual belief; trusting an information source means automatically believing in everything the source says. Levi’s technical term for this updating
process is “routine expansion”.1
Levi’s favorite example from the philosophy of science concerns
Michelson’s attempts to ascertain the velocity of the earth relative to the luminiferous ether in the experiments initiated in Potsdam in 1881. On Levi’s
reconstruction of this case, the null results Michelson obtained injected
inconsistency into his background assumptions via routine expansion (Fixation, p. 105). To take another one of his illustrations, suppose that I am
walking on 72nd Street and Columbus Avenue and see someone who is a
dead ringer for Victor Dudman. I am sure that Dudman is safely ensconced
in Sydney, New South Wales. At the same time my initial confidence in
my eyesight and in Dudman’s appearance should make me believe also
that Dudman is on 72nd and Columbus, and so I have “expanded into inconsistency” (Fixation, 76). The next section outlines the relevant parts of
Levi’s epistemology. I will then move on to raise some doubts as to the internal
coherence of Levi’s account of trust and consistency.
2. LEVI ON BELIEF FIXATION AND ITS UNDOING
The first thing to note, as part of the stage-setting, is the important role
played by the commitment-performance distinction in Levi’s work. Given
that a person holds certain beliefs, she is committed to full belief in what
follows logically from those beliefs. The beliefs which a person is committed to at a given time thus form a logically closed set. A person may fail to
recognize all beliefs that she is committed to, that is to say, she may fail in
her performance to live up to all her commitments. Limitations concerning
memory and computational abilities may have that effect. In Levi’s view,
we should require that a person live up to her commitments to the extent
that she is able (Enterprise, p. 10). The reason is that a person who lives
up to her commitments does better than a person who does not (Fixation,
p. 7). Levi has stated, further, that his concern is with revision of doxastic
commitments and not with revision of doxastic performances (Fixation,
9); that is, the changes he studies are changes from one corpus to another
where by a corpus is meant the collection of all the beliefs that the agent is
committed to at a certain time.
Levi has proposed an illuminating taxonomy of rational belief changes.
In expansion beliefs are added, in contraction beliefs are deleted. There
are, we are told, two types of legitimate expansions and two types of
legitimate contractions:
In deliberate expansion the agent chooses intentionally to add strength
to the belief state by adding some beliefs, e.g., an explanation, a hypothesis
or a prediction. Among the candidate expansions, we should choose one
that maximizes the expected epistemic utility in accordance with Bayesian
decision theory. The two ingredients in the expected epistemic utility of a
candidate expansion are its probability and its information content. Deliberate expansion is Levi’s analysis of the traditional concept of induction.
Advocating doxastic decision theory is in line with his pragmatism and
his unity of reason thesis, according to which practical and theoretical
reasoning are similar on a structural level, although they differ regarding
the underlying values and utilities.2
In routine expansion, we recall, the agent strengthens the belief state
by adding a belief in accordance with a pre-established routine. Routine
expansion is Levi’s account of direct observation, as when we believe
something as the result of immediately perceiving it. It also represents
his analysis of testimony, as when we believe something as the result of
querying a trusted information source.
Coerced contraction is called for when the agent’s belief state has
turned inconsistent and something must be given up in order to regain
consistency. I shall return to this below. The agent may also contract
without being forced to do so, that is to say, without being inconsistent.
In such uncoerced contraction, the agent chooses deliberately to weaken
the belief state by deleting some beliefs. He might do so in order to give
an alternative hypothesis a fair hearing, in which case he would contract
to a neutral belief state in which none of the hypotheses to be compared is
accepted.
Other changes in belief are deemed illegitimate. This goes in particular
for the direct replacement of an old belief by its negation. Replacements
are legitimate only insofar as they can be reconstructed as a legitimate
contraction, whereby the old belief is given up, followed by a legitimate
expansion, whereby the negation of the old belief is added. Levi’s commensuration thesis, finally, says that every legitimate belief change should be
decomposable into a sequence of legitimate contractions and expansions.
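To fix ideas, here is a toy sketch in Python; it is no part of Levi’s own formal apparatus, a corpus is modelled simply as a bare set of sentences, and logical closure is ignored. The point it illustrates is only the decomposition demanded by the commensuration thesis: a replacement is available only as a contraction followed by an expansion.

```python
# Toy illustration (not Levi's formalism): a corpus as a bare set of sentences,
# with direct replacement decomposed into contraction followed by expansion,
# in the spirit of the commensuration thesis.
def expand(corpus, belief):
    return corpus | {belief}

def contract(corpus, belief):
    return corpus - {belief}

def replace(corpus, old_belief, new_belief):
    # No direct swap: contract the old belief, then expand by the new one.
    return expand(contract(corpus, old_belief), new_belief)

corpus = {"Dudman is in Sydney"}
print(replace(corpus, "Dudman is in Sydney", "Dudman is not in Sydney"))
# -> {'Dudman is not in Sydney'}
```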
3. LEVI ON BECOMING INCONSISTENT AND DOING SOMETHING
ABOUT IT
Levi thinks that routine expansion may result in inconsistency in the set
of full beliefs. We may believe something and then by routine expansion
come to believe its negation, by direct observation or by relying on the
testimony of others. That agents end up having inconsistent beliefs in this
way is, according to Levi, “common” (Fixation, 94).
If Levi is right that expansion into inconsistency is commonplace,
this would, to some extent, also support its legitimacy. Levi, nonetheless,
wants to provide it with further rational underpinning by showing that
its legitimacy follows from certain general choice-theoretic considerations.
The argument leading up to this conclusion is somewhat intricate, and I
will return to it in Section 5.
Let us instead turn to the question of what happens if the agent has
stumbled into inconsistency. Is this bad, and, if so, what can be done about
it? Levi’s answers to these questions are consequences of his view on the
nature of belief. Following Dewey, he thinks that the main role of a person’s beliefs is to function as a resource in inquiry and deliberation, in the
sense that her beliefs constitute her standard of serious possibility. A proposition is a serious possibility for the agent if and only if the proposition
is consistent with her belief corpus (Enterprise, 5).
The distinction between what is seriously possible and what is not is
crucial in practical deliberation: if the agent X were offered a gamble on
the outcome of the toss of a coin, he would not take seriously the logical
possibility that the coin will fly out towards Alpha Centauri. Nor would
he take into account the logical possibility that the Earth will explode
(Enterprise, p. 3). These hypotheses are logically possible, but the agent
will ignore them because they are not seriously possible from his point
of view. The distinction in question is equally important in theoretical inquiry: when devising hypotheses, many logically possible alternatives are
excluded since they are not possible from the agent’s doxastic perspective.
For example, when devising hypotheses about the constitution of quasars,
no one considers the logical possibility that they are conglomerations of
drosophila flies to be a serious possibility (Enterprise, 5).
Now we come to the crucial question: What happens when the agent’s
beliefs are inconsistent? The answer is that in that case every proposition
is ruled out as impossible. This follows from the definition of serious
possibility. For a proposition is seriously possible for X if and only if
it is consistent with X’s full beliefs, and if X’s beliefs are inconsistent,
nothing is consistent with those beliefs. At the same time the agent must
count every proposition as certain; she is committed to holding each and
every one of them to be certainly true; after all, they constitute her full
beliefs.3 This means that the inconsistent state, as Levi puts it, “is not
serviceable as a standard for serious possibility” (Fixation, p. 88), that
is to say, “[i]f the corpus is inconsistent . . . assessments of truth value
collapse into incoherence and no discrimination with respect to truth value
can sensibly be made” (Fixation, p. 151). Judged by the many reiterations
in Enterprise and Fixation, this view is firmly entrenched in Levi’s thinking
about inconsistency.4
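The definition of serious possibility, and the way it collapses under inconsistency, can be made vivid with a small illustration. The following sketch is mine, not Levi’s formal apparatus: a corpus is a handful of propositional constraints over invented atoms, and serious possibility is simply joint satisfiability with the corpus.

```python
# Minimal sketch (my own, not Levi's formalism): serious possibility as joint
# satisfiability with the belief corpus, checked by brute force over a few atoms.
from itertools import product

ATOMS = ["dudman_in_sydney", "coin_lands_heads"]

def satisfiable(formulas):
    for values in product([True, False], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if all(f(world) for f in formulas):
            return True
    return False

def seriously_possible(proposition, corpus):
    # Seriously possible iff consistent with the corpus.
    return satisfiable(list(corpus) + [proposition])

consistent_corpus = [lambda w: w["dudman_in_sydney"]]
print(seriously_possible(lambda w: w["coin_lands_heads"], consistent_corpus))      # True
print(seriously_possible(lambda w: not w["dudman_in_sydney"], consistent_corpus))  # False

# With an inconsistent corpus nothing is seriously possible, not even a tautology:
# this is the sense in which the standard of serious possibility breaks down.
inconsistent_corpus = [lambda w: w["dudman_in_sydney"],
                       lambda w: not w["dudman_in_sydney"]]
print(seriously_possible(lambda w: True, inconsistent_corpus))                     # False
```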
An inconsistent belief system has broken down as a standard for serious
possibility and is useless as a resource for theoretical inquiry and practical
deliberation. This is one respect in which inconsistency is “epistemic hell”
from the point of view of an inquiring and deliberating agent, to use a
phrase coined by Peter Gärdenfors, who in his 1988 book takes a similar
position with respect to the nature of the inconsistent belief state.
Thus, an inconsistent agent will not be able to make a distinction
between what is seriously possible and what is not. For practical deliberation this means that the agent will not be able to identify either the possible
alternative actions or the possible states of nature. As for theoretical inquiry, the agent will not be able to narrow down the seriously possible
hypotheses among which she has to choose. Inquiry and deliberation are,
in a deep sense, impossible from the point of view of the inconsistent belief
corpus.
Considering the unattractiveness of the inconsistent state, the agent is
obliged to try to find an escape route. She can legitimately do so only by
contracting some belief from her corpus, since expansion (the other kind
of legitimate change) would lead her nowhere. Hence contraction from
inconsistency is obligatory or, to use Levi’s term, “coerced”.5
The obligation to contract from inconsistency entails its possibility. So
how are we to extricate ourselves from the inconsistent state? According
to Levi, the correct strategy is to identify the source of the inconsistency
and then take steps based on that identification. Suppose that the person
initially believes non-A and comes to expand by A via routine expansion. Levi suggests three possible rational strategies that seem relevant to
inquirers in this predicament:
(1) Give up A, that is, give up the information obtained as a result of routine expansion.
(2) Give up non-A, that is, give up the previous belief that is in conflict with the new information.
(3) Do both (1) and (2).
For instance, when you observe Victor Dudman, whom you believe to be
far away, you would probably react either by doubting that the man you
saw was Dudman or by calling into question that Dudman is far away, or
both.
4. LEVI’S PROBLEM
Levi thinks that “it is possible to make sense out of contraction from
inconsistency, provided that we can appeal to the inputs that injected inconsistency in the first place” (Fixation, 164). The problem lies in the
proviso. From the point of view of the contradictory state, you cannot make
a coherent judgment as to what is true. Your standard of serious possibility
has broken down. It follows, in particular, that you cannot make a coherent
judgment as to what input caused the inconsistency. Hence you will not
be able to escape gracefully, even if you wanted to do so. In fact, you
would even lack the motive to escape since you would not be able to make
a coherent judgment as to whether you are inconsistent. If you are, this
will be an inaccessible fact to you, and you will lack reasons to take any
measures in order to get out.
There are two things to note. First, there is, as I just pointed out, an
internal conflict regarding the possibility of escaping from inconsistency.
The method Levi endorses for extrication, like any other reasonable prescription, presupposes that the agent can identify the source of conflict,
but such identification is impossible from the point of view of the inconsistent state; this, at least, is the position to which Levi’s view on the nature of inconsistency and, ultimately, his stance as regards the role of beliefs in inquiry and deliberation commit him. The fruitfulness of any such
method also presupposes that the inconsistent agent can form a coherent
judgment as to whether he is inconsistent, so that he would have a motive
to do something in the first place, but this is also impossible given this view
on the pragmatics of belief.
The second dissonance involves routine expansion. If agents cannot
rationally escape from inconsistency, as Levi appears forced to admit, this
should cast a shadow of doubt on the legitimacy of routine expansion into
inconsistency, considering the unpleasant character of that state. For why
should the agent take the risk of expanding into inconsistency if this means
entering a state of permanent epistemic hell?
Levi has, in his writings, paid little attention to these internal difficulties. The only explicit discussion is contained in a footnote in Enterprise
in which he acknowledges the difficulty but argues that the conflict is only
apparent and that no changes in his view are called for. To accomplish this,
Levi, as I understand him, wants to show that while no coherent statement
as to what is true can be made from the point of view of an inconsistent
belief corpus, it is nevertheless possible to escape legitimately from such a
corpus. If this were indeed the case, there would be no conflict after all.
Let us take a closer look at this attempt. Levi states the problem as
follows:
When X shifts from corpus K1 to corpus K2 , any justification X can offer to himself for
making the shift should be based on the assumption (expressed in the metalanguage L1 )
that all items in K1 are true (in L) and infallibly so. However, when K1 is inconsistent, X
cannot proceed in this manner. If he did so, the inconsistency infecting K1 would spread to
his metacorpus expressible in L1 , and so on. X would have no coherent basis for evaluating
alternative ways to contract from K1 .
As a remedy he proposes the following:
. . . I suggest that we look on such situations as cases where X treats the object language L
syntactically (as long as his corpus in L is inconsistent) and treats K1 and other potential
contractions of K1 as so many different uninterpreted systems of sentences. He can, in this
way, retain the consistency of his metacorpus and avoid begging questions as to which of
the potential contraction strategies to adopt by assuming at the outset that some items of
K1 are possibly false and others are not.
Levi is here reiterating his view that no coherent judgment as to what is true
can be made on the basis of an inconsistent belief corpus. He is adding that
this need not be as fatal as it might seem: if an inconsistency should crop
up we can always move to a consistent metalevel corpus, from the point of
view of which the conflict can be resolved in a coherent manner, provided
that we treat our object beliefs as mere uninterpreted sentences.
There are, however, at least two reasons for being discontent with this
proposal.
First, the solution in Enterprise presupposes that if we are inconsistent,
we can recognize this fact, and take appropriate action based on that recognition, that is to say, treat the sentences representing our beliefs as mere
uninterpreted syntactic objects. But as we have seen, the inconsistency of
X’s belief system is, from X’s own point of view, an inaccessible fact. The
inconsistent agent X will not be able to recognize the inconsistency, and
so she will not be able to implement Levi’s recommendation.
Second, in Fixation Levi writes: “One cannot give a principled account
of how to extricate oneself from inconsistency that arises inadvertently
from routine expansion without taking into account the informational values of the several available contraction strategies”. Levi, as we have just
seen, has also suggested that we should, in case of inconsistency, treat our
beliefs as mere uninterpreted sentences. The problem, then, is to evaluate different subsets of the set of sentences representing our corpus with
respect to their relative informational value. But how can we assess the
informational value of mere uninterpreted sets of sentences? How can we
know how valuable the sentences are to the inquirer if we do not know
whether they are about apples and bananas or astrophysics? To be sure,
there are things to be said about informational value on the basis of syntactic derivability relations alone, but it is implausible to think that the
values Levi mentions in this connection – explanatory adequacy, simplicity, systematicity and precision – could be accounted for in terms of
syntax only. Hence if Levi is right in the passage just quoted, no principled
account for how to escape from inconsistency can be given if we treat our
beliefs syntactically.
Levi has recently expressed dissatisfaction with his argument in Enterprise, offering a different proposal for how one could argue for the
coherence of his position.6 Before presenting his main argument, he
makes the following observations. Given a specification of the inquirer’s
standards for assessing informational value before routine expansion, the
inquirer could have told us, before expanding into inconsistency by adding A inconsistent with the belief corpus, what the optimal contraction from
the inconsistent state should be. The recommendation the inquirer is committed to endorsing at the time of expansion is, one could argue, the one
the inquirer ought to implement. So there is no problem with providing
a coherent characterization of the move the inquirer ought to make under
the circumstances relative to the belief state prior to expansion into inconsistency or, for that matter, any other coherent belief state. The inquirer X
can identify what the contraction should be on the supposition that it arises
from adding A to the corpus to form the inconsistent set. That contraction
is the one recommended on the supposition that A is added to the corpus
to yield inconsistency in the state of inconsistency. Thus, X can coherently
state what contraction to adopt so long as X is not in the inconsistent belief
state.
But, Levi notes, this recommendation cannot be implemented by X in
the state of inconsistency and, hence, is of small value – even if the recommendation is construed as a recommendation for changing commitment.
This is so because in order to implement the change, X must recognize
in some way the corpus prior to expansion into inconsistency and what
information was added to yield inconsistency. But, as we have already
noticed, this cannot be coherently done.
This leads us to Levi’s main point. While it is true that the inconsistent
agent X cannot proceed coherently, “X can often incoherently identify the
source of his difficulty and take steps to remedy his trouble even in the
state of inconsistency”. How is this possible? Levi explains:
In the state of inconsistency, X may still recognize the conceptual framework of potential
states of full beliefs, the state he was initially in and the input that led to inconsistency.
To be sure, if the agent were perfectly fulfilling the commitment to the inconsistent state,
he would recognize the negations of the judgments involved. But inquirers who are in
inconsistent belief states because they have failed to recognize all the logical consequences
of what they believe can identify consistent “strands” of their commitments albeit in a way
that fails to fulfill commitments. And when they do uncover the inconsistency in their
beliefs, they can do so by identifying a locus of conflict that restricts the way in which they
seek to retreat from inconsistency. All of this activity is, indeed, incoherent. But it can be
done and the prescriptions can be implemented.
The idea here seems to be that while it is true that no coherent judgment
regarding truth value can be made on the basis of an inconsistent system,
incoherent judgments can be made, and an incoherent judgment as to what
caused the inconsistency would suffice to make a legitimate escape from
inconsistency possible, so long as the incoherence is not detected.
I shall argue that this attempt, too, fails to block the inference from the
hellish character of the inconsistent state to the impossibility of escaping
from it.
First and foremost, Levi has declared, as we saw in Section 2, that his
concern is generally with revision of doxastic commitment and not with
revision of doxastic performance. The interesting question, from this point
of view, is whether an agent would have reasons to fall into inconsistency,
and, if so, how he could legitimately proceed in order to recover, given that
he fulfills all his commitments. Addressing only agents whose performance
fails to live up to their commitments, Levi’s new solution simply lacks
relevance to what he has identified as the main issue.
Second, I pointed out, again in Section 2, that an agent, in Levi’s view,
generally “does better” if she is living up to her commitments than if she
is not. But in his new argument for how to reconcile his views, Levi is
saying that a person who is living up to her commitments is stuck forever
in inconsistency, whereas one who is not living up to her commitments
may succeed in extricating herself, that is to say, the agent who is not
living up to her commitments is in fact doing better. Hence, not only is
this proposal strictly speaking irrelevant; it also leads to further internal
problems in Levi’s epistemology.
Contrary to what Levi seems to think, this is not a situation that he could
rest content with. Superficially, the easiest way to reduce tension would be
to give up the view that a person’s beliefs function as her standard of serious possibility. Then the impossibility of extrication from inconsistency
would not follow, and Levi’s theory of coerced contraction would make
sense, as would his thesis concerning the legitimacy of expanding into
inconsistency. This move would make room for the sort of attitude towards
inconsistency taken by Philip Kitcher. Finding Levi’s and Gärdenfors’s
view that inconsistency is hell “too extreme”, Kitcher maintains that inquiry
is possible even on inconsistent premises, and that the appropriate response
is sometimes to “wait and see”. For instance, in the case of Antoine Lavoisier’s investigations of chemical reactions, which supposedly proceeded on
inconsistent assumptions, “[c]onsistency was only achieved at the end of a
painstaking inquiry over two decades” (Kitcher 1993, p. 430).7
This notwithstanding, the Deweyan thesis concerning the nature and
function of beliefs is deeply entrenched in Levi’s pragmatism and is
something he could not, and in my view should not, sacrifice so easily.
Moreover, I do not think that we are forced to think of Kitcher’s examples
from the history of science as cases of inconsistency in the set of full
beliefs. There remain the possibilities of reconstructing them as involving
either (i) inconsistency among scientific conjectures or (ii) anomaly among
full beliefs.8 In neither case would Levi be committed to the impossibility
of rational inquiry in the face of internal conflict. A more careful exploration of the plausibility of these alternative accounts would require a closer study of the historical facts, i.e., concerning the extent to which the
advocates of the theories in question were really fully convinced of the
truth of those theories in the first place. Mere provisional acceptance is
presumably the more common case in science.
As I see it, Levi is committed at a deep level to the thesis that our beliefs
function as our standard of serious possibility. He is also committed to the
view that coherent judgments are impossible on the inconsistent corpus. In
the absence of valid reasons to think otherwise, I conclude that he is also
committed to the impossibility of recovering from inconsistency. In my
view he should resolve the conflict by giving up his view that there could
be a legitimate escape route from inconsistency; that is, the conclusion
should be that “coerced contraction” strictly speaking does not make sense
on the background of Levi’s pragmatism.
As we have seen, however, this would not suffice to remove conflict
altogether. If it is impossible to contract from the contradictory belief state,
it still seems strange that it could be rational to enter that state. What could
possibly be worth taking the risk of entering permanent epistemic hell?
The problem is that Levi’s thesis in favor of the legitimacy of expanding
into inconsistency cannot be given up so easily since it follows from his
decision-theoretic account of belief expansion. It will be instructive to see
exactly how it follows.
5. LEVI ON THE LEGITIMACY OF EXPANDING INTO INCONSISTENCY
It will be seen to follow from Levi’s theory that whether or not you
will turn inconsistent upon receiving a belief-contravening report from a
reliable source hinges on whether you have excluded beforehand the possibility of receiving such a report. If you have excluded that possibility,
inconsistency-inducing routines may be decision-theoretically optimal. If
you have not excluded it, only consistency-preserving routines will be optimal (provided that the agent is not maximally bold, see below). In order
to see how this follows in Levi’s framework, we need to take a closer look
at his theory of expansion.9
Let us start with deliberate expansion. Suppose that the agent is in an
initial state of full belief K. There is a question which the agent wants to
answer, and so he has identified a set of strongest possible answers consistent with K in the manner proposed by Levi in his Gambling with Truth
and later works. This set is his ultimate partition U. Given U, a set
of potential expansions relevant to the inquirer’s demands for information
can be identified. These potential expansions are formed by adding to K
an element of U or a disjunction of such elements. For technical reasons,
Levi assumes that adding some belief-contravening proposition is also a
potential expansion.
Suppose, for example, that we would like to find out what the color of a
given liquid will be after a chemical reaction with another substance. If the
experiment is difficult or costly to carry out, the agent might be interested
in trying to settle the matter by predicting the outcome. Let us assume that
from the agent’s epistemic point of view only three colors are seriously
possible: red, white or blue. The ultimate partition consists of Red, White
and Blue. The potential expansions are obtained by expanding K by one
of the following: Red, White, Blue, Red or White, Red or Blue, White or
Blue, Red or White or Blue. Adding the last proposition would be refusing
to expand beyond K, as the agent is already committed to accepting it.
Levi takes expansion by a belief-contravening proposition, such as Red
and White, to be a potential expansion as well.
How are we to decide what potential expansion to implement? Levi’s
answer is that we should maximize the expected epistemic utility. The
expected utility of adding a proposition A to the belief corpus is representable as E(A) = αQ(A) + (1 − α)Cont(A), or some positive affine
transformation thereof, where Q(A) is the subjective (“credal”) probability of A and Cont(A) the informational value of A. Thus, the expected
utility is a weighted average of the utility functions representing the two
different desiderata. Moreover, Cont(A) can be identified with 1 − M(A), where M(A) is the information-determining probability of A. Information-determining probability is similar to Carnap’s logical probability. Divide E(A) by α and subtract from the result q = (1 − α)/α. The result will be E′(A) = Q(A) − qM(A).10 This is a positive affine transformation
of the weighted average, and so maximizing this index is equivalent to
maximizing the weighted average. The parameter q can be interpreted as
the inquirer’s degree of boldness. If q equals 0, the inquirer is an inductive
skeptic; he will always refrain from expanding beyond the current corpus. A reasonable requirement is that adding a contradiction should never
be preferred to refusing to expand. Levi shows that this requirement is
satisfied so long as q ≤ 1.
To continue the example, suppose that Q(Red) = 0.9 and Q(White)
= 0.09, so that Q(Red or White) = 0.99. Assume, further, that the
information-determining probability assigned to each element of U is 1/3.
Hence, M(Red) = 1/3 and M(Red or White) = 2/3. Setting q = 0.3, the
expected epistemic utility of expanding by Red equals Q(Red) − qM(Red)
= 0.9 − 0.3 · 1/3 = 0.8. Consider now the weaker proposition Red or
White. Its expected epistemic utility will be Q(Red or White) − qM(Red
or White) = 0.99 − 0.3 · 2/3 = 0.79. Hence, expanding by Red has a greater
expected epistemic utility than expanding by Red or White. It is easy to
verify that expanding by Red maximizes expected epistemic utility. In this
case, it was possible to settle the matter about the color of the liquid from
an armchair position, without ever having to enter the laboratory.
However, it is not always possible to make a definite prediction. A less
bold agent, i.e., an agent with a lower q-value, may not be willing to take
the risk of predicting a specific color. For instance, setting q = 0.2, the
expected utility of expanding by Red now equals 0.83, and that of Red or
White 0.85. It can be verified that the optimal strategy is to expand by Red
or White. The agent can rest assured that the color is either red or white,
but he cannot say anything more definite about the color. This may, or may
not, be sufficient to satisfy his curiosity. If not, he might be tempted to run
the experiment after all, which leads us to routine expansion.
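The two cases just discussed can be checked with a small calculation. The sketch below assumes, in addition to the figures given in the text, that Q(Blue) = 0.01, so that the credal probabilities over the three-element partition sum to one; Q and M are treated as additive over disjunctions of elements of U.

```python
# Expected epistemic utility E'(A) = Q(A) - q*M(A) for the liquid-color example.
from itertools import combinations

Q = {"Red": 0.9, "White": 0.09, "Blue": 0.01}   # credal probabilities (Blue assumed)
M = {"Red": 1/3, "White": 1/3, "Blue": 1/3}     # information-determining probabilities

def expected_utility(answer, q):
    return sum(Q[h] for h in answer) - q * sum(M[h] for h in answer)

# All non-empty disjunctions of elements of the ultimate partition.
answers = [c for r in (1, 2, 3) for c in combinations(sorted(Q), r)]

for q in (0.3, 0.2):
    ranked = sorted(answers, key=lambda a: expected_utility(a, q), reverse=True)
    print(q, [(" or ".join(a), round(expected_utility(a, q), 3)) for a in ranked[:2]])
# q = 0.3: ('Red', 0.8) beats ('Red or White', 0.79); q = 0.2: ('Red or White',
# ~0.857) beats ('Red', ~0.833), matching the figures in the text up to rounding.
```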
A program for routine expansion may be represented by a function
from outcomes of an experiment to expansions by adding various bits of
information. The idea is that the inquirer or observer runs an experiment,
makes an observation or conducts a trial. There is a “sample space” Ω of possible outcomes of the experiment. The function specifies for each outcome the expansion strategy to be adopted. The inquirer does not choose an expansion strategy. If he chooses anything, it is a program represented by a function from Ω to Boolean combinations of a set of maximal potential answers to a given kind of question. The inquirer is given information about the objective statistical probability of realizing a point in Ω conditional on conducting the experiment and the truth of one of the elements of U. If he adopts a specific program, the point realized as an outcome of the experiment will determine how he should expand.
If the agent wants to ascertain the color of the liquid, she may choose
to run the experiment and observe the resulting color. The sample space Ω consists of the various kinds of color reports the inquirer makes in response to such a trial. The chance distribution will depend on the color of the liquid. If the color is red, for example, the chance of reporting that it is red will be relatively high and the chance of reporting that it is white rather low. U consists of the colors in question. Suppose that Ω consists of the reports Red, White and Blue.11 The chance distribution over these reports on an observation given Red is 0.99, 0.009, and 0.001, respectively. For White the distribution is 0.001, 0.99, and 0.009, and for Blue it is 0.009, 0.001, and 0.99. The
most obvious program for routine expansion is one that matches colors
with color reports. This program is represented by the function f (report
the color as C) = accept that the color is C. Let us call this program P1 .
This is not the only possible program, though. For instance, there is also
the routine that refuses to expand beyond K regardless of what report is
made. Let us call it P2 . It has the advantage of guaranteeing avoidance
of error. But it is clear that it will not be satisfactory for an agent who is
seeking new information.
Conditional on each value θ of U, one can determine for a given program what the epistemic utility of the recommended expansion will be for each point in Ω. If θ cum K entails the expansion by adding H, no error is committed. The epistemic utility is 1 − qM(H). If it entails ¬H, the epistemic utility is −qM(H). Conditional on the given value θ, there is a probability distribution over the points of Ω. From this one can determine the expected epistemic utility of the program for routine expansion. Letting Q be the total probability conditional on θ of avoiding error by adopting that program and M̄ the weighted average of the M-values, the answer is Q − qM̄.12
If the inquirer is maximally cautious so that q = 0, then the expected
utility of P1 , the color matching program, is 0.99 − q/3 = 0.99, no matter
which of the three colors is the true one. Similarly, the expected utility of
P2 , the program that refuses to expand beyond K, come what may, is 1.
So P2 will be optimal. As Levi notices, P1 will be preferable to P2 if q is
greater than 0.015.13
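As a check on the figures just cited, here is a small calculation of the conditional expected utilities Q − qM̄ for P1 and P2. The chance distributions are those of the example; the assumption that refusing to expand carries M = 1 (informational value zero) is mine, though it is what the 0.015 threshold requires.

```python
# Conditional expected epistemic utility of a routine-expansion program: the
# probability of avoiding error minus q times the weighted average M-value.
U = ["Red", "White", "Blue"]
M = {"Red": 1/3, "White": 1/3, "Blue": 1/3, None: 1.0}  # None = refuse to expand

chance = {  # chance[true_color][report]
    "Red":   {"Red": 0.99,  "White": 0.009, "Blue": 0.001},
    "White": {"Red": 0.001, "White": 0.99,  "Blue": 0.009},
    "Blue":  {"Red": 0.009, "White": 0.001, "Blue": 0.99},
}

P1 = {report: report for report in U}   # accept the reported color
P2 = {report: None for report in U}     # refuse to expand, whatever is reported

def conditional_utility(program, true_color, q):
    total = 0.0
    for report, prob in chance[true_color].items():
        hypothesis = program[report]
        correct = hypothesis is None or hypothesis == true_color
        total += prob * ((1.0 if correct else 0.0) - q * M[hypothesis])
    return total

for q in (0.0, 0.015, 0.1):
    print(q, round(conditional_utility(P1, "Red", q), 4),
             round(conditional_utility(P2, "Red", q), 4))
# q = 0: P2 (1.0) beats P1 (0.99); they tie at q = 0.015; for bolder agents the
# color-matching program P1 is preferable, as the text reports.
```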
The agent can expand into inconsistency only if some value θ in U has been ruled out beforehand as a serious possibility. Let us consider a
case in which the inquirer is sure that the liquid is not blue, but remains in
doubt as to whether it is red or white. In other words, θ = Blue is ruled out
as a serious possibility. Notice that the blue-report has not thereby been
ruled out – only the proposition that the color is blue. Yet, we know in the
example that the chance of reporting the color to be blue is no greater than
0.009. For that reason, even a very cautious agent will deliberately expand
to the conclusion that the Blue-report will not be made, provided that she
is interested in making such predictions in the first place. Let us assume
that she is, and that she makes the prediction. This means that only the
red-report and the white-report are seriously possible reports.
Consider P1 , the color matching program. The expected epistemic utility conditional on Red is now 0.991 − 0.5q, and the corresponding utility
conditional on White is now 0.999 − 0.5q. With a few exceptions, it is
superior to every other program with respect to security, i.e., minimum
conditional expectation. The other optimal programs assign matching colors to color reports except that they do not assign Blue to the blue-report.
Exactly how they react to the blue-report has no bearing on their expected
epistemic utility. Since the inquirer is certain that the blue-report will not
be made, from her perspective all these programs have the same merit as
P1 itself.
If the inquirer has chosen P1 and reports the color to be blue, she will
expand into inconsistency. The same happens if she adopts some of the
other optimal programs. Take for instance the program, let us call it P3 ,
which assigns matching colors to color reports except for a blue-report,
in which case the output of the program is Red or White. Suppose that
the inquirer has adopted P3 and reports the color to be blue. Then she
will suspend judgment. Since the blue-report is the only report that leads
to suspension of judgment, the inquirer will be committed to believing
that the blue-report was made. But the inquirer had previously excluded a
blue-report as seriously possible, and so her full beliefs are inconsistent.
As Levi fails to notice, not every optimal program leads to inconsistency if a blue-report is made. Let P4 be a program which assigns matching
colors except for the blue-report, in which case the output is simply Red.
This program will also be optimal. If the inquirer adopts P4 and makes a
blue-report, he will not be able to deduce that a blue-report was made from
observing the output of the program. He can only be sure that either a blue-report or a red-report was made. That either a blue-report or a red-report
should be made has not been excluded as seriously possible, and so there
is no inconsistency.
For some reason that is unclear to me, Levi does not want to presuppose
that the inquirer has direct cognitive access to the reports. Rather, he only
assumes that they can sometimes be extracted using information about
what routine was employed and what effect it had on the belief corpus. It
seems to me though that we may have access to facts of a report character
without having to reconstruct those facts from other facts in the suggested
manner. For instance, while it is true that the direct effects of visual perception are beliefs about physical objects, it is implausible to think that I
reconstruct my beliefs about what visual impressions I have on the basis
of the former beliefs. Rather, it seems that I have other routines that can
be used to form beliefs about visual appearances. In the following I will
assume that we can in principle always find out what report was made on
a given occasion of routine expansion.
Assume now instead that the inquirer does not attempt to predict what
report will be made. She is interested in the color of the liquid and not in
what reports will be made about that color. So she does not exclude the possibility that a blue-report will be made. P3 will now have a higher expected
epistemic utility than P1 . To see this, note that conditional on Red, P1 has
an expected value of 0.99 − q[0.991(0.5)]. Note that the M-values of Red
and White are both equal to 0.5 on the assumption of not-Blue and that the
M-value of Blue is 0. Conditional on White, P1 has an expected value of
0.99 − q[0.999(0.5)]. For P3 , the expected epistemic utility conditional on
Red equals 1 − q[0.991(0.5)] and conditional on White 1 − q[0.999(0.5)].
Clearly the modified program has a higher expected epistemic utility. If
the inquirer endorses P3 she will avoid expanding into inconsistency if a
blue-report is made.
6. A POSSIBLE SOLUTION
In the previous section we saw that there will be inconsistency-injecting
programs that are optimal if the following conditions are satisfied: (i) some value in U is excluded as seriously possible, (ii) the inquirer is moderately bold in making trade-offs between informational value and risk of error, and (iii) the inquirer has predicted that a report to the effect that the excluded value in U holds will not be made. Under such circumstances, the
inquirer will run the risk of expanding into inconsistency if he maximizes
expected epistemic utility in Levi’s sense. If Levi wants to retain his deeply
entrenched view on the role of beliefs in inquiry – a view which commits
him to the impossibility of a rational escape from inconsistency – he must,
it seems, block at least one of (i)–(iii).
Blocking (i) amounts to requiring that all values in U be seriously possible at the time when the routine is implemented. Levi thinks that this requirement has undesirable consequences. In his view, cases in which some values in U are excluded “are of interest to us because we might want to continue employing the programs for routine expansion that we favor using when all values of U are serious possibilities” (Fixation, p. 99).
One could add that blocking (i) would have other serious negative
consequences for Levi’s theory. Suppose that we were to require that all
values in U be seriously possible. The inquirer will always expand with a seriously possible proposition, and every report he can make has seriously possible content. It seems that under these circumstances routine
expansion can never lead to internal conflict. I here take “internal conflict”
in a broader sense than “inconsistency” to include anomaly and possibly
also other sorts of incoherence. But if routine expansion cannot produce
internal conflict, it cannot be used to account for the mechanisms behind
the Michelson experiment and the Dudman story referred to in Section
1. They are clear examples of how observation can legitimately lead to
cognitive dissonance.
There is also the option of blocking (ii). However, the example given in
the previous section illustrates the fact that even a very cautious agent may
fall into inconsistency as the result of employing routine expansion. I take
it, therefore, that this is not an interesting alternative.
Instead I will explore the alternative of preventing (iii) from being satisfied. I will argue that on closer scrutiny, a rational inquirer should not
exclude belief-contravening reports as seriously possible.
Levi’s theory of deliberate expansion entails that predictions to the
effect that this or that report will not be made by a given trusted source
are sometimes decision-theoretically optimal, even if they may lead to
permanent doxastic breakdown in the future. This gives us excellent reason to reconsider this part of Levi’s framework, since it suggests that predictions of this nature should be deemed illegitimate.
There are different ways to achieve this result, some more convincing
than others. An inelegant way would be to stipulate that decision theory is fine so long as the agent is not attempting to predict what trusted sources will say or not say. If she does attempt to do so, special rules apply that forbid the
agent to make such predictions. This approach amounts to making ad hoc
exceptions to expected utility maximisation in cases involving potentially
troublesome predictions.
A more convincing approach focuses on revising the decision theory
underlying deliberate expansion in order to make the problematic predictions sub-optimal. Unlike the aforementioned strategy, this proposal
handles all cases of deliberate expansion in the same decision-theoretic
way, without making exceptions. There are good reasons for following
this track. If inconsistency means permanent epistemic hell, and if entering this unfortunate state can be indirectly caused by the agent’s making
certain predictions that such and such reports will not be received, then
surely the agent should think twice before making such predictions. More
precisely, she should take the danger of producing inconsistency later on
into account in the process of deliberating on whether or not to make the
prediction in the first place. The problem with Levi’s theory of deliberate
expansion, from this perspective, is that it does not take future robustness
against permanent breakdown into account, but only the probability and
informational value of the candidate expansions. Robustness is obviously
a different desideratum that cannot be reduced to either of them.
On closer examination it turns out that the only negative aspect of the
inconsistent state that is taken into account in Levi’s decision theory is
the fact that inconsistency means certain error. It is not taken into consideration that the inconsistent state, having broken down as a standard
of serious possibility, is useless in inquiry and deliberation. And it is not
given due attention that the inconsistent state is an intellectual point of no
return. The effect is that the inconsistent state, on Levi’s decision-theoretic
analysis, looks much less ugly than it really is, that is to say, less ugly than
it is from the Deweyan perspective. Not unexpectedly, what comes out
of his analysis is that the agent should frequently put everything at stake,
even in cases in which the possible gain in terms of informational value is
negligible.
In all standard inquiries the robustness factor will play no role. Only
if the agent attempts to predict what reports will not be made by a trusted
information source will the risk of producing inconsistency later on be
real. Considering the fatal consequences of falling into inconsistency, the
robustness factor should be given very strong weight. We could even exclude inconsistency altogether by requiring the weight to be large enough;
that would certainly not be unreasonable, and is indeed the approach I
recommend.
A simple way to achieve this is the following. Let R be a function
representing the desideratum to avoid the permanent breakdown of inquiry
and deliberation caused by entering the inconsistent state. R(A) takes on
the value 0 if expansion by A increases the probability of future inconsistency, and 1 otherwise. The expected epistemic utility of a potential expansion A should be represented not as E(A) = αQ(A) + (1 − α)Cont(A) as
in Levi’s theory, but as E(A) = αQ(A) + βCont(A) + γ R(A), where the
weights α, β and γ sum to 1. This representation satisfies Levi’s principle
that the expected epistemic utility should be a weighted average of the
desiderata (Levi 1967, p. 106).
Expansions that do not increase the risk of future inconsistency will
always be preferable to those that do if γ > 1/2.14
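A quick numerical check of the worst case treated in footnote 14, under the representation just proposed: a maximally attractive but non-robust expansion (Q = Cont = 1, R = 0) against a maximally unattractive but robust one (Q = Cont = 0, R = 1). The sketch is mine and purely illustrative.

```python
# Modified expected epistemic utility E(A) = alpha*Q + beta*Cont + gamma*R,
# with alpha + beta + gamma = 1, in the worst case of footnote 14.
def modified_utility(Q, Cont, R, alpha, beta, gamma):
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * Q + beta * Cont + gamma * R

for gamma in (0.4, 0.5, 0.6):
    alpha = beta = (1.0 - gamma) / 2.0
    safe = modified_utility(Q=0.0, Cont=0.0, R=1, alpha=alpha, beta=beta, gamma=gamma)
    risky = modified_utility(Q=1.0, Cont=1.0, R=0, alpha=alpha, beta=beta, gamma=gamma)
    print(gamma, round(safe, 2), round(risky, 2))
# gamma = 0.4: the risky expansion wins (0.6 vs. 0.4); gamma = 0.5: a tie;
# gamma = 0.6: the robust expansion wins (0.6 vs. 0.4), as gamma > 1/2 requires.
```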
On this alternative understanding of the expected utility of an expansion, maximising expected utility will lead to the agent’s refraining from
excluding belief-contravening reports from reliable sources. Prior to using
the expansion routine the agent will not exclude the possibility of a beliefcontravening report, and so there will be no inconsistency if such a report
is subsequently received.15
This small change in the theory of deliberate expansion eliminates the
tension I have called attention to between inconsistency as permanent epistemic hell, on the one hand, and routine expansion into inconsistency as
legitimate, on the other hand. It is no longer true that such expansion into
inconsistency is legitimate.
I conclude that Levi can keep his fundamental thesis concerning the role
of beliefs in inquiry and deliberation, provided that he (i) gives up the view
that inquirers can legitimately escape from inconsistency, and (ii) modifies
his account of prediction alias deliberate expansion by acknowledging a
third desideratum, besides probability and informational value, namely, the
desideratum not to cause permanent breakdown further down the line of
inquiry.
7. DISCUSSION
On this modification of Levi’s theory, the agent will not become inconsistent as the result of receiving a belief-contravening report from a reliable
source. This is not to say that the new state of belief is a satisfactory one.
Rather, the conflicting report faces the agent with an anomaly. I follow
Levi concerning the nature of anomalous states. Unlike the inconsistent
state, an anomalous state, as here conceived of, is not useless in inquiry.
Hence contraction from an anomalous state is not “coerced”, nor is any
other particular reaction to the anomaly. Usually, several options are open
to the anomalous agent, including the option to stay anomalous for the time
being.
Let me illustrate this in connection with Levi’s Victor Dudman example. Initially you believe that Dudman is far away, but you have not
excluded that you will have a visual Dudman-impression. Now one of
your observational routines reports that Dudman is in fact standing over
there. What will happen at this point if you have chosen an appropriate
routine? You will not end up accepting that Dudman is over there, and so
there will be no outright inconsistency with your belief that Dudman is far
away. Any routine leading to the acceptance of Dudman’s presence will
be sub-optimal, and routines that have no effect (i.e., make you suspend
judgement) will always be better. At the same time you will be aware of
the fact that you had a Dudman-impression. Since you have not excluded
beforehand that you would have such an impression, this will not lead to
inconsistency. And yet, given your belief that Dudman is far away, it is very
unlikely that you would have the impression in question. Moreover, since
your impression could be satisfactorily explained by reference to the alternative hypothesis that Dudman is actually here, this impression of yours
qualifies as “surprising” and your current state of belief as anomalous.16
Levi might welcome my proposal for how to resolve conflict not only
because it restores harmony but also because it makes possible a substantial simplification of his theory. On the alternative view outlined here,
agents who recognize all their doxastic commitments can never legitimately fall into inconsistency, and so there is no need for a special theory of
contraction from inconsistency, i.e., Levi’s theory of coerced contraction
becomes redundant. Thus instead of Levi’s original four basic legitimate
belief revision operations – deliberate expansion, routine expansion, uncoerced contraction and coerced contraction – the alternative view gets by with only three legitimate types of changes: (robust) deliberate
expansion, routine expansion and uncoerced contraction.
In his review of Fixation, Philip Kitcher compares Levi’s view on consistency with that of Peter Gärdenfors (the AGM theory): “Levi shares
with Gärdenfors the view that ‘inconsistency is hell’ . . . . For Gärdenfors,
hell is so terrible that one should never enter it; for Levi, hell is a place
that even the most responsible epistemic agents sometimes enter, but it
is their duty to extricate themselves as quickly as possible”. I agree with
Gärdenfors that hell is so terrible that it should be avoided at all costs. At
the same time I object strongly to the way in which this result is achieved
in Gärdenfors’s theory. On one reading, Gärdenfors is proposing that cases
like the Dudman example be seen as prompting a replacement of the old
information by the new, so that after having received the observational
input we should replace our old belief that Dudman is far away by the new
belief that Dudman is just over there.17 Revision, thus construed, satisfies
the dubious principle of ‘primacy of new information’ (Dalal 1988). If
there is a conflict between new and old information, it is always resolved
by giving up some of the old. In practice, however, new information is
often rejected if it contradicts more entrenched previous beliefs. Unlike
what Gärdenfors seems to think, no special priority should be assigned to
new information due only to its novelty.
Levi’s theory does not share this deficiency since it treats old and new
information on a par. Novelty is supposed to play no role in the agent’s
attempts to extricate herself from inconsistency. The same is true for the alternative view I am proposing here, according to which belief-contravening
testimony from a trusted source produces anomaly in the belief corpus. The
conflict can be resolved in several different ways. In particular, the agent
is not forced to accept the new information.18
Levi thinks that change in belief prompted by testimony should satisfy
the following three conditions (Fixation, p. 107):
1. The result of the process should be an expansion.
2. Initiating the process is legitimate, according to the inquirer, if the
inquirer is convinced prior to starting the process that the expansion
resulting from the process is the outcome of a reliable program for
routine expansion.
3. The process can lead to the injection of inconsistency in the evolving
doctrine.
While Levi subscribes to all three conditions, Gärdenfors would reject 1
and 3. Concerning 1, the result should, in Gärdenfors’s view, be an expansion only if the input is consistent with the belief corpus. If the input is
inconsistent with the corpus, the resulting change should not be an expansion but a replacement. Gärdenfors has, as far as I know, not discussed the
second condition. As for the alternative position I have outlined, it is in
agreement with Levi concerning the adequacy of the first two conditions.
As for the third condition, it sides with Gärdenfors; an agent who fulfills
all her doxastic commitments cannot legitimately fall into contradiction as
the result of receiving belief-contravening testimony from reliable sources.
Subscribing to the account of belief-contravening testimony put forward
here amounts, in this sense, to taking a middle course between the positions
of Levi and Gärdenfors.
ACKNOWLEDGMENTS
I am greatly indebted to Isaac Levi for his extremely valuable comments on earlier drafts. My research was financed by the DFG (Deutsche
Forschungsgemeinschaft) as a contribution to the project Logik in der
Philosophie.
NOTES
1 My main references to Levi’s work will be The Enterprise of Knowledge from 1980 and
The Fixation of Belief and Its Undoing from 1991. They will be referred to as Enterprise
and Fixation, respectively.
2 “What is ‘pragmatic’ about pragmatism is the recognition of a common structure to
practical deliberation and cognitive inquiry in spite of the diversity of aims and values that
may be promoted in diverse deliberations and inquiries” (Fixation, p. 78).
3 This is one aspect of Levi’s infallibilism (his view that one is committed to viewing one’s
present full beliefs as certainly true), a position he has sought to combine with corrigibilism
(his view that an agent can rationally believe that her beliefs may undergo changes in the
future).
4 See, for example, Enterprise, p. 59 and p. 62.
5 “An inconsistent corpus fails as a standard for serious possibility to be used in inquiry
and deliberation. The corpus is useless to X at t and should be modified” (Enterprise,
p. 59). “It is always urgent to contract from an inconsistent state of full belief.” (ibid., 68)
“Because the inquirer has inadvertently expanded into inconsistency via routine expansion,
he is compelled to contract. This is what I call coerced contraction” (ibid., 102). “In coerced
contraction, I have suggested that the question of whether to contract is not an issue. The
standard for serious possibility has lapsed into incoherence, and this suffices to make the
case for contraction” (ibid., 152).
6 Personal communication, for which I am greatly indebted (permission to quote granted).
7 For a similar criticism of Levi from the point of view of so-called paraconsistent logic,
see da Costa and Bueno (1998).
8 Levi makes a distinction between “background assumptions that are a settled ingredient
in a state of full belief”, on the one hand, and “tentatively or conjecturally held collateral claims”, on the other (Fixation, p. 171, footnote 4). The concept of anomaly is also
discussed in the final section of this paper.
9 See Fixation, Chapter 3, for a more detailed exposition.
10 Here is an alternative route to this equation. In deliberate expansion, the decision-maker
evaluates the epistemic utility of expanding K by adding A, both without importing error and with importing error. The former is 1 − qM(A) and the latter is −qM(A). The expected epistemic utility of adding A is determined by multiplying the first term by the credal probability that A and the second term by the credal probability that ¬A. The sum of these products is Q(A) − qM(A).
11 The following example is adapted from Fixation, p. 98.
12 Note that we have not been given an unconditional probability distribution over the
values of U. So we lack unconditional expected epistemic utilities for the various programs
for routine expansion that could be constructed.
13 Having considered other possible programs as well, Levi concludes that “on the assumption that routine expansion applies only insofar as deliberate expansion can no longer
be exploited, when the inquiring agent is moderately bold in making trade-offs between
informational value and risk of error, the program recommending matching colors will be
recommended” (p. 99). For the exact meaning of “the assumption that routine expansion
applies only insofar as deliberate expansion can no longer be exploited” I refer to Levi’s
text.
14 Suppose that expanding by B, but not by A, increases the risk of future inconsistency.
Assume, for the worst, that P(A) = Cont(A) = 0 and that P(B) = Cont(B) = 1. Then αP(A) + βCont(A) + γR(A) > αP(B) + βCont(B) + γR(B) ⇔ γ > α + β ⇔ γ > 0.5.
15 This proposal does not by itself succeed in avoiding inconsistency in all circumstances.
Suppose that I consider at time t a given information source S who is believed to be rather
reliable but not to the extent that she is worthy of trust. It seems possible that it could
nonetheless be very improbable that S would deliver the belief-contravening report R.
Hence I predict that R will not be received. Now, at time t′, I obtain new evidence that strongly supports the reliability of S, whom I therefore decide to trust. This new evidence
does not in any way tell against my prediction (but rather supports it) and the prediction is
consequently retained. Unfortunately, S reports R, and I fall into contradiction. For another
example, suppose that I predict, at t, that the report R, which is not belief-contravening at
t, will not be made by the source S. At time t′ my view undergoes some changes to the
effect that R is now, in fact, belief-contravening. As it happens, the source S now reports
R, and I fall into inconsistency. These examples exploit the fact that which routines are
worthy of trust and what is belief-contravening may change with the belief corpus. Hence
we may end up in a situation in which the agent has made predictions that fail to be robust
now, although they were robust at the time of adoption. The upshot is that we have to
require that the agent keep track of her previous predictions in order not to fall into this
predicament. If my solution to Levi’s problem does not work for this reason, then so much
worse for Levi’s theory. It seems to me, though, that such track keeping should not present
any insurmountable problems for idealized agents of the sort under consideration, and I
will proceed on the assumption that this objection can be overcome in the manner just
suggested.
16 I am relying here on the account of anomaly in Fixation, Section 4.9. See also Horwich (1982, pp. 100–104) for a closely related elucidation of what constitutes a genuinely
surprising event.
17 This is not the only reading. David Makinson (1997) has argued that the AGM theory
should be seen as addressing only the problem of how to incorporate something given that a
decision has already been made to the effect that it should be incorporated. On this reading
the AGM theorist is not committed to the view that new information should always be
accepted. Unfortunately, by restricting the scope of application this other reading has the
effect of making the AGM theory less interesting.
18 Levi’s theory is extreme in its symmetric treatment of old and new information. It does
not matter whether the agent first believes A and then (routinely) expands by non-A, or
first believes non-A and then (routinely) expands by A. She will end up in the same state
in both cases, viz., the inconsistent one. This is not true on the picture I am advocating. In
the first case, the agent will believe that A and that she has received a report that non-A
from a reliable source. In the second case she will believe non-A and that she has received
a report that A from a reliable source. These are different end states. My model is slightly
more conservative than Levi’s in the sense that the old belief is temporarily retained until
the conflict has been resolved. This does not strike me as an unwelcome feature, bearing
in mind the possibility of giving up the old view in the subsequent process of conflict
resolution.
REFERENCES
da Costa, N. C. A. and Bueno, O.: 1998, ‘Belief Change and Inconsistency’, Logique et
Analyse 41, 31–56.
Dalal, M.: 1988, ‘Investigations into a Theory of Knowledge Base Revision: Preliminary
Report’, Seventh National Conference on Artificial Intelligence (AAAI-88), 475–479.
Gärdenfors, P.: 1988, Knowledge in Flux. Modeling the Dynamics of Epistemic States, The
MIT Press.
Horwich, P.: 1982, Probability and Evidence, Cambridge University Press.
Kitcher, P.: 1993, Review of Isaac Levi’s ‘The Fixation of Belief and Its Undoing’, The
Journal of Philosophy 425–432.
Levi, I.: 1967, Gambling with Truth. An Essay on Induction and the Aims of Science, The
MIT Press.
Levi, I.: 1980, The Enterprise of Knowledge, The MIT Press.
Levi, I.: 1991, The Fixation of Belief and Its Undoing. Changing Beliefs Through Inquiry,
Cambridge University Press.
Makinson, D.: 1997, ‘Screened Revision’, Theoria LXIII(1–2), 14–23.
E. J. Olsson, Fachgruppe Philosophie, Universität Konstanz,
Postfach 5560 D21, 78457 Konstanz, Germany