Consistent Belief Reasoning in the Presence of Inconsistency
Jinxin Lin
Dept. of Computer Science
University of Toronto
Toronto, M5S 1A4, Canada
lin@db.toronto.edu
Abstract
Since everything is a consequence of an inconsistency, classical logics are not useful in modeling
the reasoning of an agent who has inconsistent beliefs. In this paper, we differentiate consistent
beliefs from inconsistent beliefs. We propose two belief operators Bc and B, standing for
consistent belief and belief, respectively. Bc has the modus ponens property, by which the
agent is able to reason with consistent beliefs as normal and draw consistent conclusions. B
tolerates inconsistency, and by B the agent can reason about his inconsistent beliefs as well.
The concept of consistent belief and our logical formalism for it are new, in that reasoning
consistently about the information in an inconsistent knowledge base is possible. We also
present a complete axiomatization for the logic and discuss the application of Bc and B in
reasoning about implicit knowledge in a group of agents and eliminating inconsistency from
a knowledge base.
1 Introduction
It has long been recognized that inconsistencies may easily arise in knowledge-based information processing (see, e.g., [Bel75, IS93]). This may be due to some conflicting rules or data
being recorded in the knowledge base system. Removing inconsistencies from a knowledge
base is difficult and expensive since, as we know, inconsistencies may not lie on the surface
and in most cases there is no single solution to eliminate them. Furthermore, the knowledge
base may be in use for quite some time before an inconsistency is ever detected [BKMS92].
Thus, as stated in [Hal86], reasoning in the presence of inconsistency is an issue which will
need to be considered in the design of knowledge bases.
Since everything is a consequence of an inconsistency in classical logic, a single contradiction may ruin all the information in the knowledge base. For instance, suppose the knowledge
base is ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r); then ψ ⊨ α for any formula α in classical logic. But intuitively
q and r are irrelevant to the inconsistency. It is desirable that they are distinguished from p
and are treated as valuable information.
A knowledge base could be inconsistent, while knowledge in the real world is never inconsistent. It is therefore more appropriate to view a knowledge base as the reflection of an agent's
beliefs rather than as that of the true state of affairs in the real world. That is, we assume that
there is an agent who reasons about the world according to the knowledge base. Then, we
can differentiate two kinds of beliefs of the agent: consistent beliefs and inconsistent beliefs.
In the above knowledge base ψ, we say q and q ⊃ r are consistently believed (thus they are
consistent beliefs of the agent) since intuitively they are not involved in the inconsistency. In
contrast, we say p and ¬p are inconsistently believed (thus they are inconsistent beliefs of the
agent). Inconsistent beliefs have no meaning and they represent the beliefs that are damaged
by the inconsistency, while consistent beliefs are still valuable despite the existence of the
inconsistency.
The motivation of our logic is to provide some way of reasoning about consistent beliefs.
As mentioned, in real life a knowledge base may be inconsistent. This can easily occur in a
medical expert system whose knowledge is obtained from multiple physician experts, where
it is common that different experts hold conflicting views on their domain of expertise. The
question then is: given the conflict (inconsistency) in the expert system, can correct diagnoses
of the patients' diseases be made? Consider the knowledge base ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r),
where p means "the patient has hepatitis". Then it is clear that the diagnosis that the patient
has hepatitis is a bad one, since another piece of knowledge in the knowledge base (¬p) has
already denied it. So is the diagnosis that the patient does not have hepatitis. Both could have
a detrimental effect on the patient: the former results in the patient going through
unnecessary treatment, and the latter results in the treatment of the patient being delayed. This
shows that decisions should not be made on the basis of inconsistent beliefs. Instead, they
should be made on the basis of consistent beliefs. Suppose, for example, it is not p but r
that means "the patient has hepatitis", and q ⊃ r is a rule with the meaning "if a patient has
jaundice then he/she has hepatitis". Then we would have no problem asserting that "the
patient has hepatitis" according to the fact that he/she has jaundice. Therefore we need some
way of reasoning about consistent beliefs so that decisions or assertions can be made if they
are "consistently believed" by the knowledge base. The logic of the operator Bc that we
propose in this paper serves such a purpose.
In many logical formalisms based on multi-valued logics (e.g., [Dun76, Bel75, Pri91, Gin88,
KL89, Lin87], to name a few), inconsistencies are tolerated and the drawback that everything
is a consequence of an inconsistency is avoided. In [Dun76, Bel75, Pri91, Lin87], a proposition
can be assigned three or four truth values, while in [Gin88, KL89], a proposition can be
assigned more than four truth values. These logic systems have applications in many areas
of AI, such as automated deduction, non-monotonic reasoning, belief revision, etc. However,
they cannot be used to reason about consistent beliefs, since they cannot reason consistently
about the information in an inconsistent knowledge base. They may, on one hand, conclude
a statement and, on the other hand, conclude the negation of it. For example, given the
aforesaid ψ, both p and ¬p will be deduced by these systems. In our logic, by contrast, as we
shall see, the consistent belief operator Bc is always consistent.
There are modal logics of belief (e.g., [Lev84, Lak90, FH88, Var86, FHV90]) where the
presence of an inconsistency is also not as damaging as in classical logics. Among them, the
logic of [Lev84] is most closely related to ours. The logic of explicit belief in [Lev84] allows an
agent to have inconsistent explicit beliefs without explicitly believing every sentence. Fagin
et al. [FH88] present a logic of local reasoning where an agent is viewed as a society of minds,
each with its own set of beliefs, which may contradict each other. However, since in these
logics the agent's beliefs are generally not closed under implication, they cannot reason about
consistent beliefs. For example, given the aforementioned ψ it is not possible to derive r from
q and q ⊃ r in these logics, while in our logic r can be obtained by the modus ponens property
of the consistent belief operator.
The work on combining knowledge bases [BKMS92] is also related to the topic of this
paper. In [BKMS92], a cautious semantics is defined for inconsistent theories. The cautious
semantics considers all maximally consistent subsets of the inconsistent theory, and a sentence
is considered true if it is true in all of the maximally consistent subsets. However, by this
approach, nothing can be derived from the aforesaid ψ, since the maximal consistent subset of
ψ is empty. In our approach, q and r can still be consistently believed. Besides, the
approach in [BKMS92] is dependent on the particular syntax of the knowledge base, while in
ours two knowledge bases are treated equivalently if their semantics based on situations are
the same.
Related topics are belief revision (see, e.g., [G88, FKUV86, KM91]) and truth-maintenance
systems (see, e.g., [Doy79]). But in belief revision some knowledge in the knowledge base is
given up to resolve inconsistency, while in our approach nothing is given up and reasoning is
performed in the presence of inconsistency. Truth-maintenance systems need to keep track of
justifications, which induces substantial maintenance cost. Moreover, they require the initial
knowledge base to be consistent, and they assume that the presence of an inconsistency is due
to some assumptions being introduced during the evolution of the knowledge base. In this
paper, we have neither that requirement nor that assumption.
The rest of this paper is organized as follows. Section 2 gives the definition of situations
and presents the basic part of the logic. Section 3 formally defines the semantics of belief and
consistent belief. Section 4 explores the properties of belief and consistent belief, in particular
the modus ponens property by which the agent is able to reason with his consistent beliefs.
Section 5 presents a sound and complete axiomatization for the logic. Section 6 discusses
the possible applications of the logic, especially in the areas of reasoning about the implicit
knowledge in a group of agents and eliminating inconsistency from a knowledge base. Finally,
Section 7 suggests some topics for future research.
2 The Basic Part of the Logic
The formulas in the language we consider are formed from a set of atomic sentences P using
the three standard connectives ¬, ∧, and ∨ as usual, and two modal operators B and Bc.
The modal operator Bc is to capture consistent beliefs, and B to capture beliefs that include
not only consistent beliefs but also possibly inconsistent ones. A formula such as Bα is read
"the agent believes α", and Bcα is read "the agent consistently believes α". We only consider a
single agent and formulas that do not have nested modal operators. We choose modal
operators to represent beliefs so that we are able to express both the agent's beliefs and the
facts in the real world; e.g., a description like {Bp, q} means that the agent believes p, and q
is true in the real world. In this way it is possible to reason not only about an agent's beliefs
but also about the true state of affairs in the real world. Note that reasoning about the real
world is different from reasoning about the beliefs of an agent, because what the agent thinks
may not reflect what is happening in the real world.
The other connectives ⊃ and ≡ are defined in terms of ¬, ∧, and ∨; that is, α ⊃ β is the
abbreviation of ¬α ∨ β, and α ≡ β the abbreviation of (α ⊃ β) ∧ (β ⊃ α). Following [Lev90],
we give the term objective sentences to those without any belief operator, and subjective sentences
to those where all atomic sentences occur within the scope of a belief operator. Since we
only consider formulas that do not have nested belief operators, it should be understood
throughout this paper that formulas appearing within the scope of a belief operator are objective.
And by the knowledge base ψ we usually mean an objective sentence or a finite set of
objective sentences. We usually use p, q, r, a, b, c to denote atomic sentences (atoms for short),
and l to denote a literal, which is an atom or the negation of an atom. For convenience, we use the
notation ¬l for ¬p if l = p, and for p if l = ¬p, p ∈ P.
Since no possible world is an interpretation of an inconsistent KB, we generalize the
notion of possible worlds to situations. A situation is a partial possible world that supports
the truth or the falsity or both for every atomic sentence in the underlying language.1 Incoherent
situations are allowed; they are the situations that support both p and ¬p for some atom p.
Formally, a situation is a mapping s: P → {{t}, {f}, {t, f}}. We call s(p) the set-up for p in
situation s. For a situation s, we use O(s) to denote the set of atoms whose set-ups in s are
{t, f}, i.e., O(s) = {p | s(p) = {t, f}, p ∈ P}. Then, intuitively, a possible world is a situation
s such that O(s) is empty. We denote the set of all situations by S, and the set of all possible
worlds by W. According to what is believed by the agent, we can identify a subset M of S to be the
set of situations that could be the actual ones, which we call the belief model of the agent.
Given a belief model M and a situation s, we now define the support relations ⊨T and
⊨F between them and formulas in the language. The symbol ⊨T means "supports the truth
of" and ⊨F means "supports the falsity of". The following part of this section is similar to
the basic part of the logic in [Lev84]:
1. M, s ⊨T p iff t ∈ s(p), where p is an atom.
   M, s ⊨F p iff f ∈ s(p), where p is an atom.
2. M, s ⊨T (α ∧ β) iff M, s ⊨T α and M, s ⊨T β.
   M, s ⊨F (α ∧ β) iff M, s ⊨F α or M, s ⊨F β.
3. M, s ⊨T (α ∨ β) iff M, s ⊨T α or M, s ⊨T β.
   M, s ⊨F (α ∨ β) iff M, s ⊨F α and M, s ⊨F β.
4. M, s ⊨T ¬α iff M, s ⊨F α.
   M, s ⊨F ¬α iff M, s ⊨T α.
A situation supports the truth of a set of formulas if and only if it supports the truth
of every formula in the set, and supports the falsity of the set if and only if it supports the
falsity of at least one formula in the set.
In the following, we use the notation [α] to denote the set of situations in S that support
the truth of α. When there is no risk of confusion, we simply say 'support α' for 'support the
truth of α'.
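The four-valued support relations above are easy to prototype. Below is a minimal sketch in Python (our own encoding, not from the paper): a situation maps each atom to a non-empty subset of {t, f}, objective formulas are nested tuples, and ⊨T / ⊨F are computed by the mutual recursion of clauses 1-4.

```python
from itertools import product

# A situation maps every atom to {'t'}, {'f'}, or {'t','f'} ({'t','f'} is an
# incoherent set-up).  Objective formulas are nested tuples: an atom is a
# string, negation is ('not', a), and ('and', a, b) / ('or', a, b) as usual.

def sup_t(s, a):
    """s |=T a: does situation s support the truth of objective formula a?"""
    if isinstance(a, str):
        return 't' in s[a]
    if a[0] == 'not':
        return sup_f(s, a[1])
    if a[0] == 'and':
        return sup_t(s, a[1]) and sup_t(s, a[2])
    return sup_t(s, a[1]) or sup_t(s, a[2])        # 'or'

def sup_f(s, a):
    """s |=F a: does situation s support the falsity of a?"""
    if isinstance(a, str):
        return 'f' in s[a]
    if a[0] == 'not':
        return sup_t(s, a[1])
    if a[0] == 'and':
        return sup_f(s, a[1]) or sup_f(s, a[2])
    return sup_f(s, a[1]) and sup_f(s, a[2])       # 'or'

def situations(atoms):
    """Enumerate all of S over the given atoms (3^n situations)."""
    for combo in product([{'t'}, {'f'}, {'t', 'f'}], repeat=len(atoms)):
        yield dict(zip(atoms, combo))

def extension(a, atoms):
    """[a]: the situations that support the truth of a."""
    return [s for s in situations(atoms) if sup_t(s, a)]
```

For instance, a situation with s(p) = {t, f} supports both p and ¬p, so an inconsistent knowledge base still has a non-empty extension, unlike in possible-world semantics.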
3 The Semantics of Bc and B
Returning to the example ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r), we would like to obtain Bcq and Bc(q ⊃ r),
since intuitively q and r have nothing to do with the inconsistency in ψ. This means that the
1The situation here is different from the one in [Lev84], where Levesque also considers situations that
support neither p nor ¬p for some atom p.
inconsistency should be minimized (in this case, minimized on the atom p) so that the propositions
irrelevant to the inconsistency can be obtained as consistent beliefs. Furthermore, we would
like to conclude Bcr from Bcq and Bc(q ⊃ r), since inconsistency should not disrupt the
normal reasoning process of consistent beliefs. In addition, since the agent's beliefs include
consistent beliefs, we want to conclude Br as well.
However, consider the situations that support ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r), as follows:

         p        q        r
  s1   {t,f}    {t}      {t}
  s2   {t,f}    {t}      {t,f}
  s3   {t,f}    {t,f}    {t}
  s4   {t,f}    {t,f}    {f}
  s5   {t,f}    {t,f}    {t,f}
There is a situation s4 that supports ψ but does not support r. On closer examination of
s4, we observe that in it not only does p receive the inconsistent set-up {t, f} (which is justified
since both p and ¬p are in ψ), but q receives {t, f} as well. In s1 (which supports r),
only p receives {t, f}. In fact, in s1 fewer atoms receive {t, f} than in any other situation,
and therefore s1 is the situation among the five that is "closest" to a possible world, where no
atom receives {t, f}. This suggests that we restrict our attention to the set of situations that
are "closest" to possible worlds, which we refer to as most-complete situations.
We define a situation s to be more complete than s' (denoted by s ≺ s') if O(s) ⊂ O(s').
Define the set of most-complete situations in S to be MC(S) = {s | s ∈ S, there is no s' ∈ S such that
s' ≺ s}. Similar orders for comparing two situations were also used in the logics of [Pri91]
and [KL89]. Most-complete situations enjoy the following property:

Proposition 3.1 If ψ is consistent then MC([ψ]) is the set of possible worlds that satisfy ψ.
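The completeness filter can be sketched in a few lines of Python, assuming situations are represented as dictionaries from atoms to set-ups (a hypothetical encoding of our own, not from the paper):

```python
def incoherent_atoms(s):
    """O(s): the atoms whose set-up in situation s is {t, f}."""
    return {p for p, v in s.items() if v == {'t', 'f'}}

def most_complete(sits):
    """MC(S): keep s unless some s' in S is strictly more complete,
    i.e. O(s') is a proper subset of O(s)."""
    sits = list(sits)
    return [s for s in sits
            if not any(incoherent_atoms(s2) < incoherent_atoms(s) for s2 in sits)]
```

Running it on the five situations s1-s5 of the table above leaves only s1, whose O-set {p} is properly contained in every other O-set.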
The semantics for B and Bc is:

5. M, s ⊨T Bα iff for every s' ∈ MC(M), M, s' ⊨T α.
   M, s ⊨F Bα iff M, s ⊭T Bα.
6. M, s ⊨T Bcα iff for every s' ∈ MC(M), M, s' ⊨T α and M, s' ⊭F α.
   M, s ⊨F Bcα iff M, s ⊭T Bcα.
The agent believes α if every most-complete situation in his belief model supports α. We
refer to the set of most-complete situations of M (i.e., MC(M)), instead of M itself, in the
definition of B (and Bc), which is to minimize the effect of inconsistency on the agent's beliefs.2
By referring to MC(M) we are actually endowing B with some 'modus ponens' (we shall see
this in more detail in the next section), which makes inferences like inferring Br from Bq
and B(q ⊃ r) possible in the aforementioned ψ. Note that the derivation of Br in ψ is
necessary since, as we have mentioned, we want Bcr to be derived in ψ and intuitively
the agent's beliefs include consistent beliefs, which means, alternately, that a consistent belief is
also a belief.
2We may think equivalently in terms of an accessibility relation: for B and Bc the set of situations
MC(M) is accessible from any situation. It has been shown that for the modal logic weak S5 the same fixed
set of worlds can be thought of as being accessible from any world.
The agent consistently believes α if each most-complete situation in his belief model coherently supports α; in other words, each most-complete situation in his belief model supports
the truth but not the falsity of α. Note that a situation in MC(M) supporting both α and ¬α
means α is also involved in an inconsistency. For example, suppose a KB ψ = p ∧ (¬p ∨ ¬q) ∧ q;
then intuitively both p and q equally contribute to the inconsistency of ψ. To be conservative,
neither p nor q should be consistently believed. However, MC([ψ]) consists of two situations,
one of which supports both the truth and the falsity of p and the truth only of q, and the
other supports both the truth and the falsity of q and the truth only of p. Hence, we require
that every situation in MC(M) support the truth but not the falsity of α in order for Bcα
to be true.
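Putting the pieces together, the following self-contained Python sketch (again our own encoding, not the paper's) evaluates B and Bc over a belief model M = [ψ] and reproduces both running examples: for ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r) we get Bcq and Bcr but not Bcp, and for ψ = p ∧ (¬p ∨ ¬q) ∧ q neither p nor q is consistently believed, although both are believed.

```python
from itertools import product

# Formulas: an atom is a string; ('not', a), ('and', a, b), ('or', a, b).
def sup_t(s, a):
    if isinstance(a, str): return 't' in s[a]
    if a[0] == 'not': return sup_f(s, a[1])
    if a[0] == 'and': return sup_t(s, a[1]) and sup_t(s, a[2])
    return sup_t(s, a[1]) or sup_t(s, a[2])

def sup_f(s, a):
    if isinstance(a, str): return 'f' in s[a]
    if a[0] == 'not': return sup_t(s, a[1])
    if a[0] == 'and': return sup_f(s, a[1]) or sup_f(s, a[2])
    return sup_f(s, a[1]) and sup_f(s, a[2])

def belief_model(psi, atoms):
    """M = [psi]: every situation over `atoms` that supports psi."""
    m = []
    for combo in product([{'t'}, {'f'}, {'t', 'f'}], repeat=len(atoms)):
        s = dict(zip(atoms, combo))
        if sup_t(s, psi):
            m.append(s)
    return m

def mc(m):
    """MC(M): situations with subset-minimal sets of incoherent atoms."""
    O = lambda s: {p for p, v in s.items() if v == {'t', 'f'}}
    return [s for s in m if not any(O(s2) < O(s) for s2 in m)]

def B(m, a):
    """Clause 5: every most-complete situation supports the truth of a."""
    return all(sup_t(s, a) for s in mc(m))

def Bc(m, a):
    """Clause 6: every most-complete situation supports the truth
    but not the falsity of a."""
    return all(sup_t(s, a) and not sup_f(s, a) for s in mc(m))
```

With M = belief_model(ψ, ['p', 'q', 'r']), both B(M, 'r') and Bc(M, 'r') hold while Bc(M, 'p') fails, matching the discussion above.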
Some readers may come up with other possible definitions for Bc, in particular the
definition that defines Bc in terms of B as follows: M, s ⊨T Bcα iff M, s ⊨T Bα and M, s ⊭T
B¬α. There is a problem with this definition, however. Consider again the knowledge base
of the agent ψ = p ∧ (¬p ∨ ¬q) ∧ q. Then M = [ψ]. It is easy to verify that M ⊭T B¬p
and M ⊭T B¬q but M ⊨T Bp and M ⊨T Bq. Hence M ⊨T Bcp and M ⊨T Bcq
by this definition. That is, p and q are consistently believed, even though they contribute in
part to the inconsistency of ψ. Since a nice property we would like to have for consistent
beliefs is that Bc(p ∧ q) is equivalent to Bcp ∧ Bcq, the sentence p ∧ q is consistently believed.
However, the agent believes ¬p ∨ ¬q, which is the negation of p ∧ q! An agent certainly cannot
consistently believe a sentence if he believes the negation of the sentence.
Finally, we say α is valid (written ⊨ α) if M, s ⊨T α for all s ∈ W and for all M ⊆ S.
If α is objective, we may abbreviate M, s ⊨T α by s ⊨T α since M is not relevant. If α is
subjective, we may write M ⊨T α for M, s ⊨T α since s is not in question.
4 Properties of Bc and B
First of all, it is easy to see that an objective sentence is valid if and only if it is a standard
tautology. This means that the propositional subset of the language is correctly handled, and
so the agent reasons about the real world accurately.
For consistent beliefs, we expect them to satisfy the following conditions: (1) consistent
belief is consistent; (2) the set of consistent beliefs is consistent; (3) consistent belief is closed
under implication. These conditions are rather intuitive and need no explanation; some of
them, e.g., Condition (3), have been stipulated before. Indeed, we have the axioms:

¬Bc(¬α), where α is a tautology.   (A1)
Bcα ∧ Bcβ ≡ Bc(α ∧ β).   (A2)
Bcα ∧ Bc(α ⊃ β) ⊃ Bcβ.   (A3)

The first axiom demonstrates that the agent cannot consistently believe any inconsistent formula (or falsehood).3 It follows from this axiom that consistent belief is always consistent,
satisfying Condition (1) above. The second axiom shows that consistently believing a conjunction is equivalent to consistently believing each conjunct. From these two axioms we know
that the set of consistent beliefs is consistent, satisfying Condition (2). The third axiom is the
3Note that by tautology we mean standard tautology, and hence in the first axiom ¬α means a formula that is
inconsistent in the sense of classical logic.
version of modus ponens for Bc, which shows consistent belief is closed under implication,
satisfying Condition (3). This axiom makes inferences such as inferring Bcr from Bcq
and Bc(q ⊃ r) possible, so the agent is able to draw conclusions from consistent beliefs. And
since the semantics of Bc uses the set of most-complete situations, the agent's inconsistent
beliefs are minimized and so consistent beliefs are maximized. In particular, the propositions
irrelevant to the inconsistency are obtained as consistent beliefs, which is exactly what we
want for consistent beliefs as outlined in the introduction.
For the relationships between belief and consistent belief, we have:

Bcα ⊃ Bα.   (A4)
Bcα ⊃ ¬B¬α.   (A5)

Axiom A4 captures the intuition that beliefs include consistent beliefs. Axiom A5
reflects the following fact: if the agent consistently believes a formula, then he must not
believe the negation of the formula, for otherwise he believes both the formula and its negation,
contradicting that he consistently believes the formula. So we can be assured that consistent
belief is never the cause of inconsistency. For example, in the knowledge base p ∧ ¬p ∧ q ∧ (q ⊃
r), the inconsistency is located at p. But the agent does not consistently believe p (or ¬p)
since he believes ¬p (or p, respectively).
For belief, we have:

Bα, where α is a tautology.   (A6)
Bα ∧ Bβ ≡ B(α ∧ β).   (A7)

Axiom A6 implies that our agent is "perfect" in believing tautologies, as in classical logic.
However, B is different from classical logic because the agent can have inconsistent beliefs
without believing every sentence. For example, {Bp, B¬p, ¬Bq} is satisfiable. Axiom A7 is
the property of conjunctive belief.
We have limited versions of modus ponens between Bc and B:

Bcα ∧ B(α ⊃ β) ⊃ Bβ.   (A8)
Bα ∧ Bc(α ⊃ β) ⊃ Bcβ.   (A9)

Both axioms are rather intuitive. They say that whenever the antecedent of the implication
or the implication itself is consistently believed, a conclusion can be drawn through the implication. Indeed, in our daily life, if there is no controversy about a fact or a rule, we can
usually use them to make inferences.
5 A Complete Axiomatization

In this section we concentrate on obtaining a complete axiomatization for our logic. We have
presented some axioms for Bc and B in the last section.4 We shall introduce additional
axioms to complete the axiom system.
4Note that some axioms are redundant; e.g., A3 can be obtained from A9 and A4, and A1 can be obtained
from A6 and A5.
5.1 The O Operator
It may not be true that the more the agent believes, the more the agent consistently believes.
For example, if the agent's belief set is {p}, i.e., p is the only proposition that the agent
believes, then the agent consistently believes p. However, if ¬p is added into the agent's belief
set, then the agent believes both p and ¬p and so he cannot consistently believe p. From
this it follows that consistent belief Bc is nonmonotonic. We need to fix the upper bound of
what the agent believes so that the set of consistent beliefs can be determined. We do so by
incorporating into the language a new belief operator O which specifies the totality of what
the agent believes. Intuitively, Oα is read as "α is all that the agent believes" or "the agent
only believes α".5 The following semantics of O is similar to the ones in [Lak90, Lev90]:

7. M, s ⊨T Oα iff for every s' ∈ S: s' ∈ M if and only if M, s' ⊨T α.
   M, s ⊨F Oα iff M, s ⊭T Oα.
As in possible-world semantics, the more worlds the agent can access, the less knowledge the agent has; conversely, the fewer worlds the agent can access, the more knowledge
the agent has. Therefore, M cannot be larger than [α], otherwise the agent believes less than
α; and M cannot be smaller than [α], otherwise the agent believes not only α but more.
So the agent only believes α if and only if his belief model M = [α]. Note that we cannot
refer to MC(M) in the definition of O, since we need to fix M first, which is done by O itself.
This may make the meaning of "believe" in the "only believing" of O slightly different from the
"believe" of B. Nevertheless, the difference is not important here, as the introduction of O is
for technical purposes only, i.e., for the purpose of establishing a complete axiomatization
for Bc.
With the O operator, we can further investigate the relationship between B and Bc in
the case that ψ is consistent.

Proposition 5.1 If ψ is consistent then ⊨ Oψ ⊃ (Bα ≡ Bcα), for any objective α.

This proposition follows from the fact that for a consistent ψ, MC([ψ]) is the set of possible
worlds that satisfy ψ. Hence, if there is no inconsistency in what is believed, belief is identical
to consistent belief and is closed under (classical) logical consequence, showing that both Bc
and B are faithful extensions of classical logic. If ψ is inconsistent, the relationships among
the three operators may not be so simple. We shall see this in the next section.
5.2 The Complete Axiomatization

The axiom system should include the axioms and rules of inference of standard propositional logic, as follows:

All instances of tautologies.   (A10)
From α and α ⊃ β, infer β (modus ponens).   (A11)

5The concept of 'only believing' originated from [Lev90].
In addition, we need axioms stating that Bc, B and O respect the standard properties,
such as commutativity, associativity, distributivity, De Morgan's laws and double negation.
For example, if α ∨ β is consistently believed then so is β ∨ α, and if α is consistently believed
then so is ¬¬α, etc. These properties distinguish our logic from syntactic ones, which suffer
from the shortcomings mentioned in [Lev84]. We summarize these properties in the following
axioms, where αCNF and αDNF are the conjunctive normal form and disjunctive normal
form of α, respectively, and L stands for any one of Bc, B or O:

Lα ≡ LαCNF ≡ LαDNF.   (A12)
L(α ∧ (α ∨ β)) ≡ Lα.   (A13, axiom of subsuming)
Specifically for O, we need axioms to specify when two O sentences are equivalent. We
will take advantage of a syntactic form called the standard form of objective sentences. We define
the standard form of an objective sentence α (denoted by αSTD) to be the set of clauses6
in αCNF that are neither standard tautologies nor subsumed by other clauses in αCNF. That
is, αSTD = {C | C ∈ αCNF, there is no p such that p, ¬p ∈ C, and there is no C' ∈ αCNF such that C' ⊂ C}.
The significance of the standard form is that it completely determines whether two objective
sentences have the same set of supporting situations. More specifically, we have the following
proposition, whose proof is given in the Appendix.

Proposition 5.2 For any objective α and β, αSTD = βSTD iff [α] = [β].
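The standard form is straightforward to compute. A sketch, under an assumed clause encoding of our own (a clause is a frozenset of literals; a literal is either an atom string or ('not', atom)):

```python
def neg(l):
    """The complement of a literal."""
    return l[1] if isinstance(l, tuple) else ('not', l)

def standard_form(cnf):
    """aSTD: the clauses of aCNF that are neither standard tautologies
    (containing some p together with its negation) nor properly subsumed
    by another clause of aCNF."""
    return {C for C in cnf
            if not any(neg(l) in C for l in C)      # not a tautology
            and not any(D < C for D in cnf)}        # not subsumed
```

For (p ∨ q) ∧ (p ∨ q ∨ r) ∧ (p ∨ ¬p), only the clause {p, q} survives: the last clause is tautologous and {p, q, r} is subsumed by {p, q}.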
Then we have the following axioms for O:

Oα ≡ Oβ, for any α and β such that αSTD = βSTD.   (A14)
Oα ⊃ ¬Oβ, for any α and β such that αSTD ≠ βSTD.   (A15)
The axiom that describes the relationship between O and B is simple:

Oα ⊃ Bα.   (A16)

Finally, some axioms for generating consistent belief and negative consistent belief (¬Bc)
from O are needed. For a set of literals D, let D° be the set of literals l in D such that ¬l
is also in D, and let Dc = D − D°. Let Ω(ψ) = {D | D ∈ ψDNF, there is no Q ∈ ψDNF s.t. Q° ⊂ D°},
and let ψ* = ∨_{D ∈ Ω(ψ)} (∧ Dc), the disjunction over Ω(ψ) of the conjunctions of the consistent
parts. Then we have:

Oψ ⊃ Bcψ*.   (A17)
Oψ ⊃ ¬Bc¬D, for all D ∈ Ω(ψ).   (A18)
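The construction of ψ* from ψDNF can be sketched as follows (our own encoding again: a dual clause D is a frozenset of literals, a literal an atom string or ('not', atom)):

```python
def neg(l):
    """The complement of a literal."""
    return l[1] if isinstance(l, tuple) else ('not', l)

def d_inc(D):
    """D°: the literals of D whose complement is also in D."""
    return frozenset(l for l in D if neg(l) in D)

def omega(dnf):
    """Omega(psi): the dual clauses of psi_DNF whose inconsistent part D°
    is subset-minimal (no Q in psi_DNF with Q° a proper subset of D°)."""
    return [D for D in dnf if not any(d_inc(Q) < d_inc(D) for Q in dnf)]

def psi_star(dnf):
    """psi*: the consistent parts D^c = D - D° of the members of
    Omega(psi); their disjunction is the formula psi*."""
    return [D - d_inc(D) for D in omega(dnf)]
```

For ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r), ψDNF consists of the dual clauses {p, ¬p, q, ¬q} and {p, ¬p, q, r}; only the second has a minimal inconsistent part {p, ¬p}, so ψ* = q ∧ r and A17 yields Oψ ⊃ Bc(q ∧ r), as desired.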
We have the following soundness and completeness result:

Theorem 5.1 The axioms A1-A18 give a sound and complete axiomatization for the above
logic.

The proof of soundness is presented in the Appendix, while the proof of completeness is
somewhat involved and is presented in [Lin93].
6We will omit the primitive connectives ∨ and ∧ in αCNF and αDNF, and therefore αCNF is a set of clauses
(a clause is a set of literals where ∨ is implied), and αDNF is a set of dual clauses (a dual clause is a set of
literals where ∧ is implied).
6 Applications

In addition to reasoning in the presence of inconsistency, the logic is useful in reasoning about
the implicit knowledge in a group of agents, and in eliminating inconsistency from a knowledge
base.
6.1 Reasoning about the Implicit Knowledge in a Group of Agents
[HM85] shows that it is desirable to be able to reason about the knowledge that is implicit in
a group of agents (called implicit knowledge in [HM85]). For example, if an agent knows a and
another agent knows a ⊃ b, then combining their knowledge together obtains b, even though
it might be the case that neither of the agents individually knows b. Similarly, [BKMS92]
shows that in many cases an expert system needs to encode the knowledge of multiple experts, so that the resulting expert system will be able to derive facts that none of the
experts individually knows. If there is no contradiction among the agents' knowledge, implicit knowledge can be easily obtained as the logical consequences of the union of all agents'
knowledge. However, different agents (or experts) can, and often do, hold conflicting views
on their domain of expertise. Then the implicit knowledge is every sentence in the language,
if we define it as the classical logical consequences of the union of all agents' knowledge.
With Bc, we can give interpretations to the union of all agents' knowledge. Let ψ1, ..., ψn
be the knowledge bases of the agents; then we define:

Definition 6.1 α is implicit knowledge of the agents ψ1, ..., ψn iff ⊨ O(ψ1 ∪ ... ∪ ψn) ⊃ Bcα.

Such a definition has the following nice properties: (1) the set of implicit knowledge is consistent;
(2) if ψ1 ∪ ... ∪ ψn is consistent, then α is implicit knowledge if and only if α is a classical
logical consequence of ψ1 ∪ ... ∪ ψn.
Example 6.1 Suppose ψ1 = a ∧ b, ψ2 = ¬a ∧ (b ⊃ c), and ψ3 = c ⊃ d; then b, c, d are all
implicit knowledge of the three agents, while a and ¬a are not. We can see that the conflict
among the agents is minimized on a, and the knowledge implicit in the group of agents, b, c
and d, is inferred.
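Definition 6.1 can be checked mechanically for literal conclusions. A compact sketch under an assumed clause-based encoding of the union ψ1 ∪ ψ2 ∪ ψ3 (our own representation, not the paper's):

```python
from itertools import product

# A literal is (atom, sign); a knowledge base is a list of clauses
# (disjunctions of literals).  psi1 ∪ psi2 ∪ psi3 from Example 6.1:
KB = [[('a', True)], [('b', True)],                 # psi1 = a ∧ b
      [('a', False)], [('b', False), ('c', True)],  # psi2 = ¬a ∧ (b ⊃ c)
      [('c', False), ('d', True)]]                  # psi3 = c ⊃ d
ATOMS = ['a', 'b', 'c', 'd']

def model(kb):
    """[kb]: situations supporting the truth of every clause."""
    m = []
    for combo in product([{'t'}, {'f'}, {'t', 'f'}], repeat=len(ATOMS)):
        s = dict(zip(ATOMS, combo))
        if all(any(('t' if pos else 'f') in s[a] for a, pos in c) for c in kb):
            m.append(s)
    return m

def mc(sits):
    """MC: situations whose set of incoherent atoms is subset-minimal."""
    O = lambda s: {a for a, v in s.items() if v == {'t', 'f'}}
    return [s for s in sits if not any(O(s2) < O(s) for s2 in sits)]

def implicit(kb, lit):
    """Is the literal implicit knowledge (Definition 6.1)?  Every
    most-complete situation of [kb] must support its truth only."""
    atom, pos = lit
    t, f = ('t', 'f') if pos else ('f', 't')
    return all(t in s[atom] and f not in s[atom] for s in mc(model(kb)))
```

This confirms Example 6.1: b, c and d come out as implicit knowledge, while neither a nor ¬a does, because the conflict is minimized on a.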
Note that one simple definition of implicit knowledge is to define it as the logical consequence of the disjunction of the maximally consistent subsets of ψ1 ∪ ... ∪ ψn. This is
essentially the approach adopted by Baral et al. [BKMS92]. But by this approach, the result
of combining the knowledge bases in the above example is ((a ∧ b) ∨ (¬a ∧ (b ⊃ c))) ∧ (c ⊃ d),
from which none of b, c, and d can be derived.
If we do not require the set of implicit knowledge to be consistent, we can define implicit
knowledge using B:

Definition 6.2 α is implicit knowledge of the agents ψ1, ..., ψn iff ⊨ O(ψ1 ∪ ... ∪ ψn) ⊃ Bα.

This definition has the second property of the last definition, but not the first one.
Example 6.2 Suppose that the three knowledge bases are as in the last example; then
a, ¬a, b, c, d are all implicit knowledge of the three agents. So the set of implicit knowledge concluded may be inconsistent, but the knowledge implicit in the group of agents, b, c
and d, is inferred.

Other methods of merging the knowledge of multiple agents are discussed in [LM93a].
6.2
Inconsistency Elimination
As most existing inference systems are based on classical logic, it is essential to eliminate any
inconsistency once it arises so that the inference of the systems can function properly. The
central question is what should be eliminated and what should be preserved in the knowledge
base. Since consistent beliefs represent those that are unrelated to the inconsistency, we
may want consistent beliefs to be preserved and inconsistent beliefs to be eliminated. In the
example ψ = p ∧ ¬p ∧ q ∧ (q ⊃ r), the formulas q and q ⊃ r should be preserved and p and ¬p
should be eliminated. The result is q ∧ (q ⊃ r), which is what is desired. Details will
be explored in [LM93b].
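The intended outcome for examples like this one can be sketched concretely. The sketch below only handles the simple case where the knowledge base is a conjunction and the inconsistency is a complementary pair of top-level literal conjuncts; the general definition is deferred to [LM93b], and the string encoding of formulas is my own:

```python
def eliminate_inconsistency(conjuncts):
    """Given a knowledge base as a list of top-level conjuncts (literals
    written as 'p' / '-p', other formulas as arbitrary strings), drop
    every literal conjunct whose complement is also present.  This is a
    sketch of the intended outcome for p ∧ ¬p ∧ q ∧ (q ⊃ r), not the
    paper's general definition (which is the subject of [LM93b])."""
    def complement(f):
        return f[1:] if f.startswith('-') else '-' + f
    # Top-level literal conjuncts only (single-atom strings).
    literals = {f for f in conjuncts if f.lstrip('-').isalnum()}
    clashing = {f for f in literals if complement(f) in literals}
    return [f for f in conjuncts if f not in clashing]

print(eliminate_inconsistency(['p', '-p', 'q', 'q -> r']))
# ['q', 'q -> r']
```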
7
Conclusion
We have proposed a formal semantics for reasoning in the presence of inconsistency in terms
of two belief operators B c and B. The phenomenon of inconsistent beliefs seems to occur
even in science. As cited in [FH88], the physicist Eugene Wigner noted that the two great
theories physicists reason with are the theory of quantum phenomena and the theory of
relativity. However, Wigner thought that the two theories might well be incompatible! The
incompatibility should not prevent us from reasoning about other facts irrelevant to the
theory of quantum phenomena and relativity, e.g., inferring that Sam is not immortal from
the fact that he is a man. The logic of B^c allows the agent to reason with
his consistent beliefs, and the logic of B allows the agent to reason with both
his consistent and inconsistent beliefs. The effect of inconsistency is localized (or minimized)
in the sense that conclusions can still be drawn through implication from consistent beliefs.
One future research direction is to generalize the logic to the first-order case and/or the
multiple-agent case. It is also interesting to extend the language to include meta-beliefs
(care should be taken when we consider nested beliefs and positive or negative introspection
[GMR92]), and to investigate the usefulness of the logic in other areas such as belief revision,
counterfactuals, etc.
Acknowledgments
I am indebted to Alberto Mendelzon for his detailed comments on the draft of this paper. I
also thank Anthony Bonner, Joe Halpern, Hector Levesque, Fangzhen Lin, Ray Reiter, and
Danny Zilio for their helpful comments. Thanks also to Gerhard Lakemeyer for making a number of
suggestions on improving this paper. This research was financially supported by the Dept. of
Computer Science, Univ. of Toronto, and the Institute for Robotics and Intelligent Systems,
Canada.
Appendix: the Proof of Soundness
Theorem A.1 The axioms A1-A18 are sound.
Proof: A1: follows from axiom A6, which will be proved below, and axiom A5, whose soundness
is easy to see.
A3: Let M be a set of situations such that M ⊨_T B^c α ∧ B^c(α ⊃ β). Consider s ∈
M^C(M). From M ⊨_T B^c α, we have s ⊨_T α and s ⊭_F α. Hence s ⊨_F ¬α and s ⊭_T ¬α.
From M ⊨_T B^c(α ⊃ β), we know M ⊨_T B^c(¬α ∨ β) and hence s ⊨_T ¬α ∨ β and s ⊭_F ¬α ∨ β.
From s ⊨_T ¬α ∨ β and s ⊭_T ¬α it follows that s ⊨_T β. And from s ⊭_F ¬α ∨ β and s ⊨_F ¬α
it follows that s ⊭_F β. Hence M ⊨_T B^c β.
A6: Let M be any set of situations and α be a tautology. Consider s ∈ M^C(M). Let
w be a possible world such that for all p ∈ P, w(p) = {t} if s(p) = {t, f} and w(p) = s(p)
otherwise. Then we have w(p) ⊆ s(p) for every atom p ∈ P. In addition, we have w ⊨_T α, since
α is a tautology and a tautology is supported by every possible world. It follows from these two facts
that s ⊨_T α. As s is an arbitrary situation in M^C(M), we have M ⊨_T Bα.
A8, A9: similar to the proof of A3.
A14: follows from Proposition 5.2.
A15: Let M be any set of situations. For the case that M ⊭_T Oα, we have M ⊨_T ¬Oα,
and the soundness of the axiom follows trivially. So we consider the case that M ⊨_T Oα. Then
by the semantics of O, M = [α]. Let β be an objective formula such that β_STD ≠ α_STD. It
follows from Proposition 5.2 that [α] ≠ [β]. Hence M ≠ [β]. Then, by the semantics of O,
we have M ⊭_T Oβ, i.e., M ⊨_T ¬Oβ.
A17: Let M be a set of situations with M ⊨_T Oψ. Then from the semantics of O we
have M = [ψ]. Let s ∈ M^C(M); then according to Lemma A.1, there is D ∈ Ω(ψ) such
that s ⊨_T D and Θ(s) = Φ(D). Since s ⊨_T D, s ⊨_T D^c. Consequently s ⊨_T ψ*. We now
prove s ⊭_T ¬D^c. Assume to the contrary that s ⊨_T ¬D^c; then s ⊨_T ¬l for some l ∈ D^c.
Meanwhile, s ⊨_T l since s ⊨_T D^c. Let p be the atom that appears in l. Then s(p) = {t, f},
since s ⊨_T l and s ⊨_T ¬l. This means p ∈ Θ(s). However, since l ∈ D^c, we have ¬l ∉ D by
the definition of D^c, and so p ∉ Φ(D). This contradicts the fact that Θ(s) = Φ(D). Hence
s ⊭_T ¬D^c, from which it follows that s ⊭_T ¬ψ*. As s is an arbitrary situation in M^C(M), we
have M ⊨_T B^c ψ*.
A18: Let M be a set of situations with M ⊨_T Oψ. Then from the semantics of O,
M = [ψ]. Let α ∈ Ω(ψ); then according to Lemma A.1, there is some s ∈ M^C(M) such that
s ⊨_T α. Then by the semantics of B^c we have M ⊭_T B^c ¬α. Thus, M ⊨_T ¬B^c ¬α. Hence
⊨ Oψ ⊃ ¬B^c ¬α, for all α ∈ Ω(ψ).
It is easy to see that the remaining axioms are sound. □
Lemma A.1 For a set of literals D, let Φ(D) = {p | p ∈ P such that both p, ¬p ∈ D}. Then
(1) for each s ∈ M^C([ψ]), there exists D ∈ Ω(ψ) such that s ⊨_T D and Θ(s) = Φ(D);
(2) for each D ∈ Ω(ψ), there exists s ∈ M^C([ψ]) such that s ⊨_T D and Θ(s) = Φ(D).
Proof: (1). Let s ∈ M^C([ψ]). Then since s ⊨_T ψ, s ⊨_T ψ_DNF and so s ⊨_T D, for some
D ∈ ψ_DNF. We show Θ(s) = Φ(D) and D ∈ Ω(ψ). First, we show Θ(s) = Φ(D). It is
easy to see that Φ(D) ⊆ Θ(s), since s ⊨_T D. Suppose Φ(D) ⊂ Θ(s). Consider the situation
s' such that for p ∈ P, s'(p) = {t, f} if both p and ¬p ∈ D, s'(p) = {t} if p ∈ D but
¬p ∉ D, s'(p) = {f} if ¬p ∈ D but p ∉ D, and s'(p) = {t} otherwise. Then it is easy
to see that s' ⊨_T D and Θ(s') = Φ(D). Thus s' ⊨_T ψ_DNF, which also means s' ∈ [ψ]. But
since Θ(s') = Φ(D) ⊂ Θ(s), this contradicts the assumption that s ∈ M^C([ψ]). Therefore
Φ(D) = Θ(s). It remains to be shown that D ∈ Ω(ψ). Assume D ∉ Ω(ψ). Then there is D' ∈ ψ_DNF
such that Φ(D') ⊂ Φ(D). Let s* be the situation such that for p ∈ P, s*(p) = {t, f} if both
p and ¬p ∈ D', s*(p) = {t} if p ∈ D' but ¬p ∉ D', s*(p) = {f} if ¬p ∈ D' but p ∉ D', and
s*(p) = {t} otherwise. Then it is easy to see that s* ⊨_T D' and Θ(s*) = Φ(D'). Thus s* ⊨_T ψ_DNF,
which also means s* ∈ [ψ]. However, Θ(s*) = Φ(D') ⊂ Φ(D) = Θ(s). This contradicts the
assumption that s ∈ M^C([ψ]). Hence D ∈ Ω(ψ).
(2). Let D ∈ Ω(ψ). We consider the situation s such that for p ∈ P, s(p) = {t, f} if
both p and ¬p ∈ D, s(p) = {t} if p ∈ D but ¬p ∉ D, s(p) = {f} if ¬p ∈ D but p ∉ D,
and s(p) = {t} otherwise. Then it is easy to see that s ⊨_T D and Θ(s) = Φ(D). We now prove
s ∈ M^C([ψ]). Assume s ∉ M^C([ψ]); then there is s' ∈ M^C([ψ]) such that Θ(s') ⊂ Θ(s). From
(1) we know there is D' ∈ Ω(ψ) such that s' ⊨_T D' and Φ(D') = Θ(s'). Then since Θ(s) = Φ(D)
and Θ(s') ⊂ Θ(s), we have Φ(D') ⊂ Φ(D), contradicting the fact that D ∈ Ω(ψ).
Hence s ∈ M^C([ψ]). □
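The canonical situation built from a set of literals D, used three times in this proof (for s', s*, and s), can be sketched concretely. The string encoding of literals and the choice of {t} as the default value for unmentioned atoms follow the construction in the proof; everything else is my own naming:

```python
def canonical_situation(D, atoms):
    """The situation from the proof of Lemma A.1: s(p) = {t, f} if both
    p and ¬p are in D, {t} if only p is, {f} if only ¬p is, and {t}
    otherwise.  Literals are strings, with '-' marking negation."""
    s = {}
    for p in atoms:
        pos, neg = p in D, ('-' + p) in D
        if pos and neg:
            s[p] = {'t', 'f'}
        elif pos:
            s[p] = {'t'}
        elif neg:
            s[p] = {'f'}
        else:
            s[p] = {'t'}   # default for atoms not mentioned in D
    return s

def theta(s):
    """Θ(s): the atoms on which the situation s is inconsistent."""
    return {p for p, v in s.items() if v == {'t', 'f'}}

s = canonical_situation({'p', '-p', 'q'}, ['p', 'q', 'r'])
print(theta(s))  # {'p'} — which equals Φ(D) for D = {p, ¬p, q}
```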
Proposition 5.2 For any objective α and β, [α] = [β] iff α_STD = β_STD.
Proof: "⇐": Let s ∈ [α]; then s ⊨_T α_CNF, i.e., s ⊨_T C for all C ∈ α_CNF. Therefore, we have
s ⊨_T α_STD. Then since α_STD = β_STD, we have s ⊨_T β_STD, and so s ⊨_T C for any C ∈ β_STD.
For any clause C ∈ β_CNF with C ∉ β_STD, we know either C is a tautology or there is C' ∈ β_STD
such that C' ⊆ C. If C is a tautology then it is trivial that s ⊨_T C. If there is C' ∈ β_STD
such that C' ⊆ C, then since s ⊨_T C' we also have s ⊨_T C. So s ⊨_T C, for all C ∈ β_CNF.
Hence s ⊨_T β_CNF. It follows that s ⊨_T β, i.e., s ∈ [β]. Similarly we can prove that for all s ∈ [β],
s ∈ [α]. Thus [α] = [β].
"⇒": Let α and β be two objective formulas such that [α] = [β].
We now prove that for any C ∈ α_STD, there is a Q ∈ β_STD such that C = Q.
To do so, we first prove that for any C ∈ α_STD, there is a Q ∈ β_STD such that Q ⊆ C.
Assume to the contrary that there is a C ∈ α_STD such that Q ⊄ C for all Q ∈ β_STD. Then
for each Q ∈ β_STD, there is a literal in Q but not in C. Notice that C ∈ α_STD does not contain both
p and ¬p for any p ∈ P. We let s be the situation such that for any p ∈ P, s(p) = {f} if
p ∈ C, s(p) = {t} if ¬p ∈ C, and s(p) = {t, f} otherwise. Then s ⊭_T C, and consequently
s ⊭_T α_CNF, so s ∉ [α]. But since for any Q ∈ β_STD there is a literal in Q but not in C, we
have s ⊨_T Q, for any Q ∈ β_STD. It follows that s ⊨_T β_STD. Using a proof similar to that of "⇐",
we can show that s ⊨_T β. Hence s ∈ [β] but s ∉ [α], contradicting the fact that [α] = [β].
Hence for each C ∈ α_STD, there is a Q ∈ β_STD such that Q ⊆ C. Then similarly, for this
Q ∈ β_STD there is a C' ∈ α_STD such that C' ⊆ Q. Hence C' ⊆ Q ⊆ C. Since no
element of α_STD is subsumed by another element of α_STD, we have C' = C = Q.
Therefore, for any C ∈ α_STD, there is a Q ∈ β_STD such that C = Q.
We can show in a similar way that for any Q ∈ β_STD there is a C ∈ α_STD such that C = Q.
Hence α_STD = β_STD, as we regard α_STD and β_STD as sets where the order of elements is
irrelevant. □
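The two reductions this proof attributes to the STD form (deleting tautological clauses and deleting clauses subsumed by another clause) can be sketched as a set operation. Whether this matches the paper's exact definition of α_STD is an assumption here, as the definition itself appears earlier in the paper:

```python
def std(cnf):
    """Reduce a CNF (a set of clauses, each a frozenset of literal
    strings with '-' marking negation) by removing tautological clauses
    and clauses subsumed by another clause.  A sketch of the STD
    reduction as described in the proof of Proposition 5.2."""
    def tautological(clause):
        # A clause containing some p together with ¬p is a tautology.
        return any(('-' + l) in clause
                   for l in clause if not l.startswith('-'))
    clauses = {c for c in cnf if not tautological(c)}
    # A clause is subsumed iff some other clause is a strict subset of it.
    return {c for c in clauses if not any(d < c for d in clauses)}

alpha = {frozenset({'p', '-p', 'q'}),   # tautology: dropped
         frozenset({'q', 'r'}),         # subsumed by {q}: dropped
         frozenset({'q'})}
print(std(alpha))  # {frozenset({'q'})}
```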
References
[Bel75] N. D. Belnap. A useful four-valued logic. In J. M. Dunn and G. Epstein, editors, Modern Uses of Multiple-Valued Logic, 1975.
[BKMS92] C. Baral, S. Kraus, J. Minker, and V. S. Subrahmanian. Combining knowledge bases consisting of first-order theories. Computational Intelligence, 8:45-71, 1992.
[Doy79] J. Doyle. A truth maintenance system. Artificial Intelligence, 12:231-272, 1979.
[Dun76] M. Dunn. Intuitive semantics for first-degree entailments and coupled trees. Philosophical Studies, 29:149-168, 1976.
[FH88] R. Fagin and J. Halpern. Belief, awareness, and limited reasoning. Artificial Intelligence, 34:39-76, 1988.
[FHV90] R. Fagin, J. Halpern, and M. Y. Vardi. A nonstandard approach to the logical omniscience problem. In Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, 1990.
[FKUV86] R. Fagin, G. M. Kuper, J. D. Ullman, and M. Y. Vardi. Updating logical databases. Advances in Computing Research, 3:1-18, 1986.
[Gär88] Peter Gärdenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, 1988.
[Gin88] M. L. Ginsberg. Multivalued logics: a uniform approach to reasoning in artificial intelligence. Computational Intelligence, 4:265-316, 1988.
[GM88] Peter Gärdenfors and D. Makinson. Revisions of knowledge systems using epistemic entrenchment. In M. Vardi, editor, Proceedings of the 2nd Conference on Theoretical Aspects of Reasoning about Knowledge, 1988.
[GMR92] G. Grahne, A. O. Mendelzon, and R. Reiter. On the semantics of belief revision systems. In Proceedings of the 4th Conference on Theoretical Aspects of Reasoning about Knowledge, 1992.
[Hal86] J. Halpern. Reasoning about knowledge: an overview. In Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge, 1986.
[HM85] J. Halpern and Y. O. Moses. A guide to the modal logics of knowledge and belief. In Proceedings of IJCAI-85, pages 480-490, 1985. A complete version appears in Artificial Intelligence, 54(3):319-379, 1992, under the title "A guide to completeness and complexity for modal logics of knowledge and belief".
[IS93] Y. E. Ioannidis and T. K. Sellis. Supporting inconsistent rules in database systems. To appear in the International Journal of Intelligent Information Systems, 1993. An earlier version appeared under the title "Conflict resolution of rules assigning values to virtual attributes" in Proceedings of the 1989 ACM-SIGMOD Conference, pages 205-214, 1989.
[KL89] Michael Kifer and E. L. Lozinskii. RI: a logic for reasoning with inconsistency. In Proceedings of the 4th Symposium on Logic in Computer Science, pages 253-262, 1989.
[KM91] H. Katsuno and A. O. Mendelzon. Propositional knowledge base revision and minimal change. Artificial Intelligence, 52:263-294, 1991.
[Lak90] Gerhard Lakemeyer. Models of belief for decidable reasoning in incomplete knowledge bases. Ph.D. thesis, Dept. of Computer Science, Univ. of Toronto, 1990.
[Lev84] Hector J. Levesque. A logic of implicit and explicit belief. FLAIR Tech. Rept. 32, Fairchild Lab. for AI Research, Palo Alto, 1984. A preliminary version appears in Proceedings of the 4th National Conference of the American Association for Artificial Intelligence, pages 198-202, 1984.
[Lev90] Hector J. Levesque. All I know: a study in autoepistemic logic. Artificial Intelligence, 42:263-309, 1990.
[Lin87] Fangzhen Lin. Reasoning in the presence of inconsistency. In Proceedings of the 6th National Conference of the American Association for Artificial Intelligence, 1987.
[Lin93] J. Lin. Consistent belief reasoning in the presence of inconsistency. KRR-TR-93-1, Dept. of Computer Science, Univ. of Toronto, Toronto M5S 1A4, 1993.
[LM93a] J. Lin and A. O. Mendelzon. Knowledge base merging by majority. KRR Technical Report, Dept. of Computer Science, Univ. of Toronto, 1993.
[LM93b] J. Lin and A. O. Mendelzon. On inconsistency elimination in knowledge bases. Forthcoming, 1993.
[Pri91] Graham Priest. Minimally inconsistent LP. Studia Logica, 50:321-331, 1991.
[Var86] M. Vardi. On epistemic logic and logical omniscience. In Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge, pages 293-305, 1986.