Distributed Knowledge
Jelle Gerbrandy
ILLC/Department of Philosophy
University of Amsterdam
[email protected]
Abstract
In this paper, two definitions of the notion of distributed knowledge in possible worlds semantics are discussed and compared. The first definition characterizes distributed knowledge by intersection of information states; the second defines the concept as 'those sentences that are logical consequences of the beliefs of the agents.' I will argue that the effect of the first kind of definition depends on the ontological view one has about possible worlds, and that the second kind of definition depends on the expressive power of the language. I will also show that the logic of the two operators is the same, in the sense that the logics have the same weakly sound and complete axiom system.
1 Distributed Knowledge
Consider a situation with two agents, a and b, and suppose that a has the information that p is
the case, b knows that p implies q, but neither a nor b knows that q is the case. Even though
neither one of the agents knows that q, there is a sense in which the information that q is the
case is already present in their information states taken together: q is a logical consequence of the
information that the two agents have. One way of formulating this is to say that the information
that q is present in the `system' consisting of both agents in a `distributed' form: the information
that q is distributed over the information states of a and b. The standard term for this kind of knowledge is distributed knowledge.
The distributed knowledge between a and b is the information of a and b `together.' To give
a formal analysis of this notion, we need a notion of adding two information states. This is
the central question of this paper: what does it mean to add the information contained in one
information state to the information contained in another one?
We can also put the question in a more concrete form. Suppose a and b each have a
certain amount of information, and communicate everything they know to a third agent, who
initially has no information at all. What is the information state of this third agent c after a and
b have communicated everything they know to c?
Kripke Semantics
I will study the question of how to define the notion of distributed knowledge using standard
Kripke semantics. In particular, we will be concerned with giving a semantics for the following
language:
Definition 1.1 Given a set of agents A and a set of propositional variables P, the language L_D of epistemic logic with distributed knowledge is given by the following definition:

   φ ::= p | φ ∧ φ | ¬φ | K_a φ | D_B φ    where p ∈ P, a ∈ A, and B ⊆ A.

The set of sentences in which the operator D does not occur will be denoted by the symbol L, the language of classical modal logic. We use the standard abbreviations: φ → ψ for ¬(φ ∧ ¬ψ), and φ ∨ ψ for ¬φ → ψ. ∎

* This paper has benefited from comments of Joseph Halpern and Yde Venema. Responsibility for mistakes and misunderstandings is all mine, of course.
The language contains propositional variables P, conjunction ∧, negation ¬, and operators K_a and D_B for each a ∈ A and B ⊆ A. The intended interpretation of a sentence of the form K_a φ is that agent a 'knows' or 'believes' that φ, and a sentence of the form D_B φ should be read as "φ is distributed knowledge between the agents of B" or "φ is implicitly believed in the group B."
Here, the terms 'agent', 'knowledge' and 'belief' are to be read in a very loose sense. An 'agent' can be any kind of object for which it makes sense to say that it has certain information: humans, but also robots and database systems, and, in a more abstract way, computer processes and computer parts. Saying that human agents know and believe certain things is not a very controversial thing to do; ascribing knowledge and belief to more abstract agents is more controversial. The literature on computer science and artificial intelligence contains a fairly sophisticated and well-motivated theory of agenthood, with Fagin et al. (1995) providing a systematic method for ascribing knowledge to such abstract agents.
One way of providing a semantics for the language L_D is in terms of Kripke models (Kripke 1963).
Definition 1.2

A pointed Kripke model (K, w) is a quadruple (W, {→_a}_{a∈A}, V, w), where W is a set of possible worlds, w is a distinguished element of W (the point of evaluation), →_a is a relation on W for each a ∈ A, and V is a valuation function that assigns a truth-value (either 0 or 1) to each pair of a world v ∈ W and a propositional variable p ∈ P. ∎
The definition of satisfaction of sentences of L in a model is as follows.

Definition 1.3

   (K, w) ⊨ p      iff V(w, p) = 1
   (K, w) ⊨ φ ∧ ψ  iff (K, w) ⊨ φ and (K, w) ⊨ ψ
   (K, w) ⊨ ¬φ     iff (K, w) ⊭ φ
   (K, w) ⊨ K_a φ  iff for all v such that w →_a v: (K, v) ⊨ φ
The intuition behind this semantics as a semantics for epistemic logic derives from Hintikka (1962): we can model the belief of an agent as 'the set of possibilities that are compatible with the belief of that agent', or, alternatively but not incompatibly, as 'the set of possibilities that, with respect to the beliefs of the agent, could be (models of) the real world.' In a Kripke model, this intuition is reflected by the relations →_a: if w →_a v in a Kripke model K, this means that v is epistemically possible for a in world w. In the following, I will often refer to the set of worlds v such that w →_a v as the information state of a in w, and sometimes write w(a) for this set. The clause in the definition simply says that a believes that φ in world w just in case φ is true in all worlds in the information state of a in w.
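These clauses translate directly into a recursive truth-check over finite models. The following is my own illustrative sketch (formulas are encoded as nested tuples; none of the names come from the paper):

```python
def sat(W, R, V, w, phi):
    """Satisfaction at world w, following the clauses of Definition 1.3.

    W: set of worlds; R: dict mapping each agent to a set of world pairs
    (its accessibility relation); V: dict mapping (world, variable) pairs
    to truth-values; phi: a formula encoded as a nested tuple."""
    op = phi[0]
    if op == "var":                 # (K,w) ⊨ p    iff V(w,p) = 1
        return V[(w, phi[1])]
    if op == "and":                 # (K,w) ⊨ φ∧ψ  iff both conjuncts hold
        return sat(W, R, V, w, phi[1]) and sat(W, R, V, w, phi[2])
    if op == "not":                 # (K,w) ⊨ ¬φ   iff (K,w) ⊭ φ
        return not sat(W, R, V, w, phi[1])
    if op == "K":                   # (K,w) ⊨ K_a φ iff φ holds at every
        a, sub = phi[1], phi[2]     # world that is a-accessible from w
        return all(sat(W, R, V, v, sub) for (u, v) in R[a] if u == w)
    raise ValueError(f"unknown operator {op!r}")

# The situation of the introduction: a knows p, b knows p → q,
# but neither knows q.
W = {"w", "v1", "v2"}
R = {"a": {("w", "v1")}, "b": {("w", "v2")}}
V = {("w", "p"): True,  ("w", "q"): False,
     ("v1", "p"): True, ("v1", "q"): False,
     ("v2", "p"): False, ("v2", "q"): False}

p, q = ("var", "p"), ("var", "q")
imp = ("not", ("and", p, ("not", q)))       # p → q as ¬(p ∧ ¬q)
print(sat(W, R, V, "w", ("K", "a", p)))     # True:  a knows p
print(sat(W, R, V, "w", ("K", "b", imp)))   # True:  b knows p → q
print(sat(W, R, V, "w", ("K", "a", q)))     # False: a does not know q
```

Note how each `if` branch mirrors one clause of the definition, and how the information state of a in w appears as the set of pairs in R["a"] whose first component is w.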
It is often assumed that beliefs are introspective: each agent knows exactly what he believes. This assumption corresponds to taking the following two axiom schemes as valid: K_a φ → K_a K_a φ (if a believes that φ, she knows that she believes that φ) and ¬K_a φ → K_a ¬K_a φ (if a does not believe that φ, she knows that she doesn't believe that φ). These two axioms are valid in all Kripke models in which the accessibility relations are transitive (if w →_a v and v →_a u, then w →_a u, for all a, w, v and u) and euclidean (if w →_a v and w →_a u, then v →_a u). Another way to formulate this property is to say that each agent considers only worlds possible in which she is in exactly the same information state that she is actually in, i.e. that for each v ∈ w(a), it holds that v(a) = w(a).
Knowledge, as opposed to belief, also has the property of being factive: if a knows φ, then φ must actually be the case. This property is captured by the axiom scheme K_a φ → φ, and corresponds to the property that w →_a w for each w, i.e. that →_a is reflexive for each a. Reflexivity says that each agent considers the actual world possible, i.e. that w ∈ w(a) for each w.
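On finite models, these frame conditions are mechanical to check; here is a small sketch of my own that also tests the reformulation in terms of information states:

```python
def transitive(R):
    """If w →a v and v →a u, then w →a u."""
    return all((w, u) in R for (w, v) in R for (x, u) in R if v == x)

def euclidean(R):
    """If w →a v and w →a u, then v →a u."""
    return all((v, u) in R for (w, v) in R for (x, u) in R if w == x)

def reflexive(R, W):
    """Each world considers itself possible: w →a w for all w in W."""
    return all((w, w) in R for w in W)

def state(R, w):
    """The information state w(a): the worlds accessible from w."""
    return frozenset(v for (u, v) in R if u == w)

# A transitive and euclidean relation: every world the agent considers
# possible carries the same information state, v(a) = w(a) for v ∈ w(a).
R = {("w", "v"), ("v", "v")}
assert transitive(R) and euclidean(R)
assert all(state(R, v) == state(R, "w") for v in state(R, "w"))

# Factivity would additionally require reflexivity, which fails here:
assert not reflexive(R, {"w", "v"})
```

The last two assertions illustrate the correspondence stated in the text: transitivity plus euclideanness fixes the information state across accessible worlds, while reflexivity is the separate condition needed for knowledge.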
Possible world semantics gives rise to a very idealized notion of belief and knowledge. In particular, it holds that if ψ is a logical consequence of φ, then in all models in which K_a φ is true (i.e. where agent a knows that φ is true), K_a ψ is true as well. In other words, the belief (and the knowledge) of an agent is closed under logical consequence. This does not conform at all to the way these concepts are used in daily life: people do not always see the consequences of their beliefs, and know that other people are limited in the same way. This fact is particularly apparent when we look at mathematical truths: once we know the basic axioms of arithmetic, we do not thereby know that Fermat's last theorem is true, although it is a logical consequence of the axioms. This mismatch between the formal semantics and the pretheoretic concepts of knowledge and belief has been called the problem of logical omniscience, and different semantics have been proposed to solve it. The proposals to deal with the problem of logical omniscience are of a wide variety; some examples of semantics that can with some plausibility be called variations on the possible worlds view are those that introduce 'impossible possible worlds' and the so-called 'awareness logics,' and perhaps situation theory can be classified under this heading as well.
However, the operators K_a, even if they are not a perfect reflection of the notions of knowledge and belief, can still be of interest for a theory of the more abstract concept of information. This is what Barwise (1989) proposes: to use the term 'information' instead of 'knowledge' as the informal notion that best corresponds to the interpretation of K_a. In contrast with knowledge and belief, it does make sense to say that information is closed under logical consequence: if a sentence logically follows from the information you already have, then in some sense, you also have the information that this sentence is true, even though you may not be aware of it. In the following, I will use the terms 'belief,' 'knowledge' and 'information' more or less interchangeably; more precisely, I will use the term 'belief' for what perhaps is better characterized as 'information that may not be correct,' and the term 'knowledge' for information that is also true.
Distributed information as intersection.
Let us consider first what the 'inventors' of the notion of distributed knowledge have to say. Halpern and Moses (1990) define the notion of distributed knowledge between two agents a and b in a Kripke model (K, w) like this:

Definition 1.4 For Kripke models (K, w):

   (K, w) ⊨ D_{a,b} φ iff (K, v) ⊨ φ for all v such that w →_a v and w →_b v
This notion is called 'implicit knowledge' in Halpern (1987). The intuition behind the definition is straightforward: in w, the worlds accessible by →_a are the worlds compatible with the information of a, and the worlds accessible by →_b are the worlds compatible with the information of b. The information distributed between them can be characterized by the set of worlds that are compatible with both the information of a and that of b: those worlds accessible from w both by →_a and by →_b. Reformulating this: the information that is distributed between two information states w(a) and w(b) is characterized by the intersection of w(a) and w(b). Or, as Fagin et al. (1995) put it: "we combine the knowledge of the agents in group B by eliminating all worlds that some agent in B considers impossible."
What is not immediately obvious, perhaps, is that the plausibility of this definition depends very much on the ontological view one has on possible worlds. To illustrate this point, consider a very simple model K with three worlds w, v and u, two agents a and b, accessibility relations →_a = {(w, v)} and →_b = {(w, u)}, and a valuation function that assigns the same truth-values to all propositional variables in v and u.
        w
      a/ \b
      /   \
     v     u
In this model, the set of worlds that a considers possible is a singleton set containing one world, v, from which no further worlds are accessible. The set of worlds that b considers possible is also a singleton set containing one world, u, in which the same propositional variables are true as in v, and from which no further worlds are accessible. Of course, this example is not an example of a Kripke model that models the beliefs or the knowledge of an agent. One can easily adapt the example to get a transitive and euclidean model: simply add a- and b-edges v →_a v, v →_b v, u →_b u and u →_a u. Changing the example into a proper model of knowledge, in which the accessibility relations are equivalence relations, takes a little more work. Such an example can be found in van der Hoek et al. (to appear). For the sake of simplicity, I will stick to the original example in the following, and just disregard these issues. The argument I will give does not depend on them.
The information that a has in w is given by the singleton set {(K, v)}, and the information of b is given by the singleton set {(K, u)}. Since the two worlds are different, the intersection of the state of a and that of b is empty, and hence, (K, w) ⊨ D_{a,b}⊥: the distributed knowledge of the two agents is inconsistent. On the other hand, {(K, u)} and {(K, v)} have exactly the same structure, in the sense that their generated submodels are isomorphic (in particular, this means that the same sentences of L_D are true in the two models). Definition 1.4 of distributed knowledge, then, only makes sense if we take a view on Kripke models in which the difference between isomorphic worlds is somehow essential for the information that is represented by them: there must be something that distinguishes the world v from the world u that is relevant to the information of a and b in w, and this distinction is not visible by looking at the values of the propositional variables and the information states of the agents alone (since these are the same in u and v).
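The point can be made concrete with a small model checker for the intersection clause; the encoding below is my own sketch. In the three-world model, the intersection of the two information states is empty, so D_{a,b}⊥ comes out true at w, even though v and u are indistinguishable in the language:

```python
def acc(R, a, w):
    """The worlds a-accessible from w, i.e. the information state w(a)."""
    return {v for (u, v) in R[a] if u == w}

def sat(R, V, w, phi):
    """Truth at w, with the intersection clause of Definition 1.4 for D."""
    op = phi[0]
    if op == "var":
        return V[(w, phi[1])]
    if op == "not":
        return not sat(R, V, w, phi[1])
    if op == "and":
        return sat(R, V, w, phi[1]) and sat(R, V, w, phi[2])
    if op == "K":
        return all(sat(R, V, v, phi[2]) for v in acc(R, phi[1], w))
    if op == "D":                       # intersect the information states
        group, sub = phi[1], phi[2]     # of all agents in the group
        worlds = set.intersection(*(acc(R, a, w) for a in group))
        return all(sat(R, V, v, sub) for v in worlds)
    raise ValueError(phi)

# The example model: w sees v via a and u via b; v and u agree on all
# propositional variables and see no further worlds.
R = {"a": {("w", "v")}, "b": {("w", "u")}}
V = {("w", "p"): True, ("v", "p"): True, ("u", "p"): True}

falsum = ("and", ("var", "p"), ("not", ("var", "p")))   # ⊥ as p ∧ ¬p
print(sat(R, V, "w", ("D", ("a", "b"), falsum)))        # True: D_{a,b}⊥

# ...yet v and u satisfy exactly the same sentences:
for f in [("var", "p"), ("K", "a", ("var", "p")), ("K", "b", ("var", "p"))]:
    assert sat(R, V, "v", f) == sat(R, V, "u", f)
```

Since {v} ∩ {u} = ∅, the D clause quantifies over an empty set of worlds and so makes even ⊥ distributed knowledge, which is exactly the behaviour the text is questioning.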
This is indeed the view that is implied by the framework developed by Fagin et al. (1992) and in the fourth chapter of Fagin et al. (1995). In this model, the possible worlds in a Kripke model have an internal structure themselves: they are descriptions of 'states', basically 'ways the world could be.' It is this internal structure of the possible world that makes the world 'what it is': the values of the propositional variables and the accessibility relations in the Kripke model are an extra logical layer that makes it possible to speak about possible worlds in the language of epistemic logic, but this extra layer may represent the internal structure of the possible worlds only partially. If the structure of the Kripke model (that is, the model modulo the worlds in that model) gives no clue as to how to distinguish two possible worlds, this is still no reason to consider them 'the same.' In the example above, the distributed knowledge of a and b should indeed be inconsistent under this conception: after all, their information states contain different worlds.
This view on Kripke models is very different from that of Hintikka, for example. Hintikka (1962) identifies possible worlds with model sets: descriptions of possible states of affairs in some logical language. For our purposes, we can identify a possible world with a set of sentences of L that is maximal consistent in some logic, say K45, the modal logic of belief. We can then define a Hintikka model as a Kripke model in which each world is a maximal consistent set of sentences, and for each Γ and Δ ∈ W, it holds that if Γ →_a Δ and K_a φ ∈ Γ, then φ ∈ Δ.¹

¹ I am just giving a rough definition to illustrate the point.

Returning to our example, we can represent the Hintikka model that corresponds with it as follows:
        w
      a/ \b
      /   \
     v     u
In this picture, w is the set of sentences that is true at w, and v and u are the sets of sentences true at v and u respectively. Note that each sentence that is true at v is true at u as well, so drawing the Hintikka model as above is a bit misleading: the worlds v and u are in fact the same world. That means that the intersection of the worlds accessible for a from w with those accessible for b is not empty. In other words, in the Hintikka model that corresponds to our example, the distributed knowledge between a and b is not inconsistent.

The difference between the two viewpoints leads to a difference at the logical level as well. If our set of propositional variables is finite, we can describe the set of sentences corresponding to the world w up to uniqueness by a single sentence; w is finitely axiomatizable. Let φ_w be that sentence. Our example shows that ¬D_{a,b}⊥ is a semantical consequence of φ_w with respect to all Hintikka models, but not with respect to Kripke models.
Hintikka's view is not the only alternative to Kripke semantics. Elsewhere (Gerbrandy and Groeneveld 1997; Gerbrandy 1997; see also Barwise and Moss 1996), I have developed a semantics for epistemic logic that takes a slightly different view of the notion of a possible world in a Kripke model. The idea behind it is that a world in a Kripke model is a description of a possible way the world could be, and this description is completely exhausted by the values of atomic sentences and the information states assigned to each of the agents. If one takes this idea seriously, then one can identify a possible world in a Kripke model with a possibility: a function that assigns to each propositional variable a truth-value and to each agent a set of possibilities. Making this work formally needs some rather sophisticated mathematical tools, with which I will not be concerned here.
For example, the possible world v in the model of our example would correspond to the function f_v that assigns to each p the same value that it gets at v, and assigns to both a and b the empty information state, i.e. f_v(a) = f_v(b) = ∅. The function f_w corresponding to w assigns to a the set f_w(a) = {f_v}, and to b the set f_w(b) = {f_u}. In a picture:

        f_w
      a/  \b
      /    \
    f_v    f_u

It holds that f_v and f_u are the same function. So, again, the intersection of f_w(a) and f_w(b) is not empty: the distributed knowledge of a and b is not inconsistent.
As an intermediate conclusion we can say that the analysis of distributed knowledge in Kripke models as 'intersection of information states' is ontology dependent, in the sense that different views on the role of possible worlds lead to different notions of distributed knowledge. I do not think this is a very big problem, but the considerations above do show that the truth values of sentences of the form D_B φ depend on the way one models the information states of the agents involved. In particular, the truth values of such sentences depend on the view of 'what a possible world is.' In this respect, the semantical interpretation of D_B is not 'ontology-independent' in the way that other epistemic operators, such as 'common knowledge,' are.
Adding information as logical consequence.
I will now approach the question of distributed information in a more syntactic way. If we are given two information states, we can combine them by taking the logical consequences of the sentences that are accepted in either state. With respect to distributed knowledge, this means that a sentence is distributed knowledge if and only if it is a logical consequence of the sentences that are known by the agents.

More formally, we can model this by taking all sentences of L that are accepted in either w(a) or in w(b), and say that φ is distributed knowledge between a and b just in case it is a logical consequence of this set of sentences. That is, we can extend the semantical definition with the following clause:²

Definition 1.5

   (K, w) ⊨ D_{a,b} φ iff {ψ ∈ L | (K, w) ⊨ K_a ψ or (K, w) ⊨ K_b ψ} ⊨ φ

The reason for looking at consequences of sentences of L, as opposed to L_D, is that in the latter case the definition would be circular: the right hand side would quantify over all sentences believed by a or b in w, which includes the sentence D_B φ itself. Since L does not contain the operator D, we can see the clause above as a definition.
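Definition 1.5 quantifies over all sentences of L and, via ⊨, over all models, so it cannot be checked mechanically as it stands. Its shape can still be illustrated with a doubly bounded sketch of my own: the premises range over a finite candidate list of L-sentences, and logical consequence is approximated against a finite universe of pointed models (all names below are mine, not the paper's):

```python
def holds(M, w, phi):
    """Truth in a pointed model M = (R, V) for the D-free language L."""
    R, V = M
    op = phi[0]
    if op == "var":
        return V[(w, phi[1])]
    if op == "not":
        return not holds(M, w, phi[1])
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "K":
        return all(holds(M, v, phi[2]) for (u, v) in R[phi[1]] if u == w)
    raise ValueError(phi)

def distributed2(M, w, agents, phi, candidates, universe):
    """Bounded sketch of Definition 1.5: collect the candidate sentences
    some agent knows, then check that phi holds in every model of the
    finite universe that satisfies all of them."""
    premises = [psi for psi in candidates
                if any(holds(M, w, ("K", a, psi)) for a in agents)]
    return all(holds(N, v, phi)
               for (N, v) in universe
               if all(holds(N, v, psi) for psi in premises))

# The example from the introduction: a knows p, b knows p → q.
p, q = ("var", "p"), ("var", "q")
imp = ("not", ("and", p, ("not", q)))        # p → q
M = ({"a": {("w", "v1")}, "b": {("w", "v2")}},
     {("w", "p"): True,   ("w", "q"): False,
      ("v1", "p"): True,  ("v1", "q"): False,
      ("v2", "p"): False, ("v2", "q"): False})

# One single-world model per valuation of p and q, as a finite
# stand-in for the class of all models.
universe = [(({"a": set(), "b": set()},
              {("x", "p"): bp, ("x", "q"): bq}), "x")
            for bp in (True, False) for bq in (True, False)]

print(distributed2(M, "w", ("a", "b"), q, [p, imp], universe))  # True
```

With both agents pooled, the premises p and p → q rule out every candidate model where q fails, so q comes out as distributed knowledge; with a alone the premise set shrinks to p and the check fails, matching the example.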
If we define distributed knowledge in this way, then which sentences are distributed knowledge depends on the expressive power of the language: the stronger the language, the more sentences will be distributed knowledge. For example, suppose w(a) and w(b) are both singleton sets, consisting of w_a and w_b respectively. If w_a and w_b are indistinguishable in the language L but distinguishable in a stronger language L⁺, then relative to L, the distributed knowledge of a and b is consistent (a and b consider the same sentences of L true), but relative to L⁺, their distributed knowledge is not consistent (there is a sentence of L⁺ which a believes to be true, but b believes to be false).

If the notion of distributed knowledge between a and b is meant to capture the amount of information a and b can communicate, using the language L, to a third agent, then the syntactic approach seems to be the right one: this third agent will get exactly the information that can be expressed by the language in which the agents are communicating. The fact that the meaning of D_B depends on the expressive power of the language is very natural in this case: if the agents communicate using the language L, then the amount of information that they can communicate depends on the expressive power of L. But this argument is only valid if we assume that the agents do communicate within L; this is, in general, not the case in the model of Fagin et al. (1995).
2 A comparison of the definitions

In this section I will compare the operator D of Definition 1.4 and the operator D of Definition 1.5. To make the discussion a bit easier, I will use ⊨₁ for the interpretation of L_D as in Definition 1.4, and ⊨₂ for the satisfaction relation defined in Definition 1.5. Also, I will extend the definitions to apply to all groups B in the obvious way:

   (K, w) ⊨₁ D_B φ iff for all v such that w →_a v for each a ∈ B, it holds that (K, v) ⊨₁ φ
   (K, w) ⊨₂ D_B φ iff {ψ ∈ L | (K, w) ⊨₂ K_a ψ for some a ∈ B} ⊨₂ φ

The comparison is divided in two subsections. Under the heading 'truth,' I will show that the two definitions do not assign the same truth-values to sentences, and identify two different classes of Kripke models in which the differences between the two definitions collapse. Under the heading 'logic,' there is a proof that if the set of propositional variables is infinite, then ⊨₁ and ⊨₂ give rise to the same logic, in the sense that the same sentences are valid under both conceptions.
Truth

If we consider the relations ⊨₁ and ⊨₂, it is not hard to see that they are not equivalent.

² Humberstone (1985) characterizes what he calls 'collective knowledge' like this.
The following proposition shows that the operator D of ⊨₁ is, in a sense, weaker than the operator D of ⊨₂: any sentence of L that is distributed knowledge under the second conception is also distributed knowledge under the first:

Proposition 2.1 For all φ of L:
If (K, w) ⊨₂ D_B φ then (K, w) ⊨₁ D_B φ, but not vice versa.

Proof: In this proof and the ones that follow, I will write w instead of (K, w) when this is not likely to lead to confusion, and write w(a) for the set {(K, v) | w →_a v}.

Assume w ⊨₂ D_B φ. Then {ψ ∈ L | w ⊨ K_a ψ for some a ∈ B} ⊨ φ. Take any v ∈ ⋂_{a∈B} w(a). Clearly, for any ψ such that w ⊨ K_a ψ for some a ∈ B, it holds that v ⊨ ψ. So, by assumption, v ⊨ φ, and since v was arbitrary, it follows that w ⊨₁ D_B φ.

For the negative result, the three-world model of our example above is a model in which (K, w) ⊨₁ D_{a,b}⊥, but (K, w) ⊭₂ D_{a,b}⊥. ∎
In certain models, however, the operators are equivalent: in models that are full and in models that are distinguishing, the differences between the two operators collapse.

Proposition 2.2 (equivalence results)

1. A Kripke model K is full just in case for each w in K and each set of sentences Γ, it holds that if {ψ ∈ L | w ⊨ K_a ψ for some a ∈ B} ⊆ Γ and Γ is ⊨₂-satisfiable (i.e. there is a model in which all sentences of Γ are ⊨₂-true), then there is a v ∈ ⋂_{a∈B} w(a) such that v ⊨₂ γ for all γ ∈ Γ. It holds that if K is full, then:

   (K, w) ⊨₁ φ iff (K, w) ⊨₂ φ.

2. A Kripke model K is distinguishing just in case for each w in K, each v ∈ ⋃_{a∈A} w(a) and each a ∈ A, there is a sentence φ_a of L such that v ⊨ φ_a iff v ∈ w(a). If K is distinguishing, then³

   (K, w) ⊨₁ φ iff (K, w) ⊨₂ φ.
Proof: For the first item, suppose that K is full. We prove the result by induction on φ, where the only interesting case is when φ is of the form D_B ψ. We show that for w in K it holds that:

   (K, w) ⊨₁ D_B ψ iff (K, w) ⊨₂ D_B ψ.

Assume that w ⊭₂ D_B ψ. Then {χ ∈ L | w ⊨ K_a χ for some a ∈ B} ⊭₂ ψ. But then, the set Γ := {χ ∈ L | w ⊨ K_a χ for some a ∈ B} ∪ {¬ψ} is ⊨₂-satisfiable, and since K is full this means that there must be some w_Γ ∈ ⋂_{a∈B} w(a) in which all sentences of Γ are ⊨₂-true. But then in particular, w_Γ ⊭₂ ψ, so by induction hypothesis w_Γ ⊭₁ ψ, and therefore w ⊭₁ D_B ψ.

For the other direction, the reasoning is the same as in Proposition 2.1: suppose w ⊨₂ D_B ψ. Then by definition, {χ ∈ L | w ⊨₂ K_a χ for some a ∈ B} ⊨₂ ψ. Then, a fortiori, for each v ∈ ⋂_{a∈B} w(a), v ⊨₂ ψ, and therefore, by induction hypothesis, v ⊨₁ ψ.
For the second item, the proof is by induction on the number of occurrences of D_B-operators in φ, with a subinduction on the structure of φ. The only interesting case in the induction is when φ is of the form D_B ψ.

So, suppose K is distinguishing, and that we have proven the result for all sentences that contain at most as many occurrences of D as ψ does.

Suppose first that w ⊨₁ D_B ψ. Then, for each v ∈ ⋂_{a∈B} w(a) it holds that v ⊨₁ ψ. Now take any a ∈ B. Since K is distinguishing, there is a sentence φ_a of L such that for each u with w →_b u for some b, it holds that u ⊨ φ_a iff w →_a u. So, clearly, w ⊨₁ K_a φ_a, and, since φ_a ∈ L, also w ⊨₂ K_a φ_a. Also, w ⊨₁ K_a(⋀_{b∈B} φ_b → ψ). Since this sentence contains fewer occurrences of D than D_B ψ does, it follows that w ⊨₂ K_a(⋀_{b∈B} φ_b → ψ).

Note that it holds for all χ and a ∈ B that if w ⊨₂ K_a χ, then w ⊨₂ D_B χ. So, in particular, w ⊨₂ D_B φ_a for each a ∈ B, and w ⊨₂ D_B(⋀_{a∈B} φ_a → ψ). It also holds that if w ⊨₂ D_B χ and w ⊨₂ D_B(χ → χ′), then w ⊨₂ D_B χ′. Combining all this, it follows that w ⊨₂ D_B ψ, as we wanted to prove.

The other direction goes as before. ∎

³ van der Hoek et al. (to appear) prove a weaker result that gave me the idea for this one.
The condition of being 'full' is subsumed under the property of Kripke models that information states can be characterized by a set of sentences, i.e. that information states consist of all models of a particular set of sentences. If we assume that the beliefs of the agent can be expressed in the object language, then this is a natural consequence of the slogan that 'the beliefs of an agent are modeled by the set of possibilities compatible with his beliefs.' The condition of being distinguishing is subsumed under the property that information states consist of all models of some finite set of sentences.

In any case, the differences between ⊨₁ and ⊨₂ are relevant only in models in which information states cannot be characterized by a set of sentences: otherwise, the notions collapse.
Logic

We have seen above that when our set of propositional variables is finite, the logic of ⊨₁ is not the same as the logic of ⊨₂. When we have infinitely many propositional variables, the situation is different, however. In this case the logics of the two relations are the same, in the sense that any sentence that is valid under ⊨₂ is also valid under ⊨₁, and vice versa. We can show this by using the properties of fullness and distinguishing of the previous section.

Proposition 2.3 For each sentence φ of L_D:
For all (K, w): (K, w) ⊨₁ φ iff for all (K, w): (K, w) ⊨₂ φ.
Moreover, this equivalence holds also if we restrict the quantification to all transitive, euclidean and/or reflexive models.
Proof:
[⇒] Suppose (K, w) ⊭₂ φ. Then, with Lemma 2.4, we can find a full model (K′, w′) such that (K′, w′) ⊭₂ φ. But then, (K′, w′) ⊭₁ φ.
[⇐] Suppose there is a K such that (K, w) ⊭₁ φ. Then we can use Lemma 2.5 and find a distinguishing model (K′, w′) such that (K′, w′) ⊭₁ φ. But if (K′, w′) is distinguishing, this implies that (K′, w′) ⊭₂ φ. ∎
Lemma 2.4 For each (K, w) there is a full model (K′, w′) such that (K, w) ⊨₂ φ iff (K′, w′) ⊨₂ φ. Moreover, we can find a (K′, w′) that is euclidean, transitive and/or reflexive just in case (K, w) is.

Proof: The proof is just a simple variation on the canonical model construction of the standard completeness proof (cf. Proposition 3.1). Define K′ as follows. For its domain, K′ has all maximal ⊨₂-satisfiable sets, i.e. all sets Γ of sentences for which there is a model in which all sentences of Γ are true, and which are maximal in the sense that adding any sentence to Γ results in a set that is not satisfiable. The accessibility relations of K′ are given by: Γ →_a Δ iff for all φ, if K_a φ ∈ Γ, then φ ∈ Δ. The valuation function assigns to each Γ and p the value 1 just in case p ∈ Γ. For a euclidean, transitive and/or reflexive model, we construct K′ with sets that are satisfiable in the appropriate models.

It holds that:

   (K′, Γ) ⊨₂ φ iff φ ∈ Γ

The proof works by first showing the result for all sentences of L by induction on φ, and then showing that the result holds for all sentences of L_D, again by induction on φ. The details of the proof are similar to those in the standard Henkin proof of the completeness of classical modal logic. The case where φ is of the form D_B ψ runs as follows:

   (K′, Γ) ⊨₂ D_B ψ iff
   {χ ∈ L | (K′, Γ) ⊨ K_a χ for some a ∈ B} ⊨₂ ψ iff (induction hypothesis)
   {χ ∈ L | K_a χ ∈ Γ for some a ∈ B} ⊨₂ ψ iff (since Γ is maximal)
   D_B ψ ∈ Γ

Now, clearly, the set of sentences that are true in (K, w) is satisfiable. Let Γ be this set. Then (K′, Γ) is a full model such that (K, w) ⊨₂ φ iff (K′, Γ) ⊨₂ φ for each φ ∈ L_D. ∎
Lemma 2.5 For each (K, w) such that (K, w) ⊨₁ φ, there is a distinguishing model (K′, w′) such that (K′, w′) ⊨₁ φ. Moreover, if K is transitive and euclidean (and reflexive), then we can choose K′ to be transitive and euclidean (and reflexive) as well.

Proof: Suppose (K, w) ⊨₁ φ. We know from the completeness proof that we can find a countable model (K′, w′) such that (K′, w′) ⊨₁ φ. Let, for each world u in the domain of K′ and each agent a, p_{a,u} be a propositional variable that does not occur in φ. Here we use the fact that the set of propositional variables is infinite.

Now let K″ be exactly as K′, except that V(v)(p_{a,w}) = 1 iff w →_a v.

Clearly, K″ is distinguishing: it holds for each v and w in the domain of K″ that v ∈ w(a) iff v ⊨ p_{a,w}. Of course, if K′ is transitive, euclidean and/or reflexive, then so is K″.

Since φ does not contain any of the propositional variables p_{a,w}, it follows by a standard argument that (K′, w′) ⊨ φ iff (K″, w′) ⊨ φ. So, (K″, w′) is the model we are looking for. ∎
So, we have proven that ⊨₁ and ⊨₂ give rise to the same notion of validity. I state here, without proof, that this result can be extended: the set of sentences of L_D true in all Hintikka models, or in all possibilities, is the same as the set of ⊨₁- (or ⊨₂-) valid sentences.
3 Completeness

We have seen that, if our language has infinitely many propositional variables, the two notions of validity ⊨₁ and ⊨₂ are the same. In the following, we will write ⊨ φ iff (K, w) ⊨₁ φ for each (K, w), and ⊨_DK45 and ⊨_DS5 for the transitive and euclidean, and the transitive, euclidean and reflexive counterparts.

The minimal modal logic K has the following axioms and rules:

K1 ⊢ φ whenever φ is a truth-functional tautology.
K2 ⊢ (K_a φ ∧ K_a(φ → ψ)) → K_a ψ.
MP From ⊢ φ and ⊢ φ → ψ, conclude that ⊢ ψ.
Nec From ⊢ φ, conclude that ⊢ K_a φ.

Adding the following two axioms to the logic K provides a sound and complete axiomatization of all valid sentences of L_D:

D1 D_{a} φ ↔ K_a φ.
D2 (D_B(φ → ψ) ∧ D_C φ) → D_D ψ, if B ⊆ D and C ⊆ D.
We let DK consist of the axioms of K together with D1 and D2.
Axiom D1 says that the knowledge that is distributed in the `group' consisting of a only is
just the knowledge of a. Axiom D2 says that if a certain group has distributed knowledge of φ,
and another group has distributed knowledge that φ implies ψ, then both groups together have
distributed knowledge of ψ as well.
To consider one implication of these axioms: if B ⊆ C, then ⊢ D_B φ → D_C φ for all φ.⁴ This
validity corresponds to the intuition that if a sentence is distributed knowledge in a group B, it
will also be distributed knowledge in any group larger than B. One can see this as a generalization
of the maxim that `two know more than one.'
Also, these axioms imply that D_B is a normal modal operator, in the sense that a necessitation
rule for D_B is a derived rule,⁵ and that D_B distributes over implication.⁶
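To illustrate these principles concretely: under the intersection semantics for D_B, both D1 and the monotonicity fact above can be checked mechanically on small finite models. The following Python sketch is my own (the representation, and the example model, which recasts the two-agent scenario of Section 1, are not from the text); it evaluates formulas at a world of a finite Kripke model:

```python
# Formulas are nested tuples:
#   ('atom', p), ('not', f), ('and', f, g), ('imp', f, g),
#   ('K', agent, f)  -- agent knows f
#   ('D', group, f)  -- f is distributed knowledge in group (a tuple of agents)

def sat(acc, val, w, f):
    """Truth of formula f at world w. acc maps each agent to a set of
    (u, v) pairs; val maps each world to the set of atoms true there."""
    op = f[0]
    if op == 'atom':
        return f[1] in val[w]
    if op == 'not':
        return not sat(acc, val, w, f[1])
    if op == 'and':
        return sat(acc, val, w, f[1]) and sat(acc, val, w, f[2])
    if op == 'imp':
        return not sat(acc, val, w, f[1]) or sat(acc, val, w, f[2])
    if op == 'K':
        return all(sat(acc, val, v, f[2]) for (u, v) in acc[f[1]] if u == w)
    if op == 'D':
        # D_B quantifies over the intersection of the agents' relations
        rel = set.intersection(*(acc[a] for a in f[1]))
        return all(sat(acc, val, v, f[2]) for (u, v) in rel if u == w)
    raise ValueError(f)

# a considers w1, w2 possible (so a knows p, but not q);
# b considers w1, w3 possible (so b knows p -> q, but not q).
acc = {'a': {('w1', 'w1'), ('w1', 'w2')},
       'b': {('w1', 'w1'), ('w1', 'w3')}}
val = {'w1': {'p', 'q'}, 'w2': {'p'}, 'w3': set()}

q = ('atom', 'q')
assert not sat(acc, val, 'w1', ('K', 'a', q))          # a does not know q
assert not sat(acc, val, 'w1', ('K', 'b', q))          # b does not know q
assert sat(acc, val, 'w1', ('D', ('a', 'b'), q))       # yet q is distributed knowledge
```

Axiom D1 corresponds here to the fact that `('D', ('a',), f)` and `('K', 'a', f)` evaluate identically, since the intersection over a singleton group is just that agent's relation.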
The logic DK45 is given by adding the following two axioms to DK:
D4 D_B φ → D_B D_B φ.
D5 ¬D_B φ → D_B ¬D_B φ.
The logic DS5 is given by adding the following axiom to DK45:
DT D_B φ → φ.
Proposition 3.1 (Completeness)
DK is a sound and complete axiomatization of ⊨.
DK45 is a sound and complete axiomatization of ⊨_DK45.
DS5 is a sound and complete axiomatization of ⊨_DS5.
proof: I'll present a sketch of a completeness proof that is based on the completeness proof Fagin
et al. (1992) give for DS5. Let a `pseudomodel' be a model of the form (K, (→_B)_{B⊆A}, V), and
define `pseudo-satisfiability' as a relation between sentences of L^D and pseudo-models by treating
the operators D_B as quantifying over →_B-accessible worlds. With respect to pseudo-satisfiability,
the operators D_B are just classical modal operators. We can just use the standard techniques to
show that any DK- (or DK45- or DS5-) consistent theory can be pseudo-satisfied in a pseudo-model
(which is transitive, euclidean and reflexive, if necessary). It is also not very hard to check that
the rules are sound for pseudo-models that have the property that if B ⊆ C, then →_C ⊆ →_B. The
canonical model has this property. That means that the logic is sound and complete with respect
to pseudo-models with the property that if B ⊆ C, then →_C ⊆ →_B. Since D_B is just another classical
modal operator in these models, it follows that we can construct finite canonical models, and
therefore that the logic is decidable.
We can also use standard techniques to show that any pseudo-model can be unraveled into a
model that looks like a tree, and in which the same sentences are true.
Now turn the unraveled pseudo-model into a model for L^D by taking the same worlds and
valuation function, and setting w →_a v in the new model just in case there is a B such that a ∈ B
and w →_B v in the pseudo-model. We can now show that any sentence that is pseudo-satisfied at
some world w in the unraveled pseudo-model is satisfied at w in the new model.
We have now proven that DK is complete with respect to all models. Since we need only an
unraveled pseudo-model of finite `depth' to satisfy a given sentence (the length of the longest path
in the tree need not be greater than the maximal nesting of modal operators in the sentence), DK
has the finite model property.
The new model does not have the properties associated with belief or knowledge. To show that
every DK45-consistent set is satisfiable in a belief model, we can simply take the transitive and
euclidean closure of the accessibility relations in the tree model, and for DS5, we take the reflexive,
transitive and euclidean closure. Lemma 3.2 guarantees that this can be done: the new model is
safe.
□
⁴ If B ⊆ C and a ∈ B, then ⊢ (D_{{a}}(φ → φ) ∧ D_B φ) → D_C φ is a special case of D2. With axiom D1, it follows
that ⊢ D_{{a}}(φ → φ), so, by propositional logic, we can conclude that ⊢ D_B φ → D_C φ.
⁵ Assume that a ∈ B. Then ⊢ φ implies that ⊢ K_a φ, which implies by D1 that ⊢ D_{{a}} φ, which implies by D2
that ⊢ D_B φ.
⁶ This is a special case of D2, where the sets B, C and D are all the same.
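The unraveling step in the proof above can be sketched in code. The following is a minimal illustration of my own (the representation is an assumption, not taken from the paper): starting from a pointed model, it builds a tree whose worlds are the paths from the root, cut off at a fixed depth, which suffices since only paths as long as the maximal modal nesting matter.

```python
def unravel(acc, root, depth):
    """Unravel a pointed model into a tree of the given depth.
    acc maps each label (an agent, or a group in the pseudo-model case)
    to a set of (u, v) pairs; tree worlds are tuples recording the path
    taken from the root."""
    tree_worlds = {(root,)}
    tree_acc = {lab: set() for lab in acc}
    frontier = [(root,)]
    for _ in range(depth):
        next_frontier = []
        for path in frontier:
            w = path[-1]  # the original world this path ends in
            for lab, rel in acc.items():
                for (u, v) in rel:
                    if u == w:
                        child = path + (v,)
                        tree_worlds.add(child)
                        tree_acc[lab].add((path, child))
                        next_frontier.append(child)
        frontier = next_frontier
    return tree_worlds, tree_acc

# A two-world model with a loop unravels into a finite path-shaped tree.
tree_w, tree_a = unravel({'a': {(1, 2), (2, 1)}}, root=1, depth=2)
```

Each tree world has exactly one incoming edge, so the result is indeed a tree; a world of the tree satisfies the same sentences (up to the depth bound) as the original world it ends in.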
Lemma 3.2 Let K = (K, (→_a)_{a∈A}, V) be a model such that for each w ∈ K, the set of sentences
that are true at w is DK45-consistent. We define the relation ⇝_B, for each B ⊆ A, relative to K, as
follows:
s ⇝_B t iff there are s_0, …, s_n and s′_1, …, s′_m such that for each a ∈ B: s_0 →_a s_1 →_a ⋯ →_a s_n
and s_0 →_a s′_1 →_a ⋯ →_a s′_m (with n ≥ 0 and m ≥ 1), and s_n = s and s′_m = t. Note that ⇝_B is the
transitive and euclidean closure of ⋂_{a∈B} →_a.
We say that K is safe when s ⇝_B t iff for each a ∈ B, s ⇝_{{a}} t. Models that look like a tree are
safe, for example, and also models that are transitive and euclidean.
It now holds that:
((K, (⇝_a)_{a∈A}, V), w) ⊨ φ iff ((K, (→_a)_{a∈A}, V), w) ⊨ φ.
proof: By induction on φ. The interesting case is when φ is of the form K_a ψ, where we use the
fact that the theory of w is DK45-consistent. Once we have this, the case where φ is of the form
D_B ψ is straightforward with the assumption that K is safe.
□
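On a finite relation, the closure used in Lemma 3.2 can be computed by iterating the transitivity and euclideanness rules to a fixed point. The following sketch is my own and not part of the proof:

```python
def trans_eucl_closure(rel):
    """Smallest superset of rel that is transitive
    (x->y and y->z imply x->z) and euclidean
    (x->y and x->z imply y->z)."""
    closure = set(rel)
    while True:
        new = set()
        for (x, y) in closure:
            for (u, v) in closure:
                if y == u:
                    new.add((x, v))   # transitivity
                if x == u:
                    new.add((y, v))   # euclideanness
        if new <= closure:            # fixed point reached
            return closure
        closure |= new

# From the single arrow 1 -> 2, euclideanness forces the loop 2 -> 2,
# matching the path characterization in the lemma (s_0 = 1; n = 0, m = 1
# gives 1 -> 2, and n = 1, m = 1 gives 2 -> 2).
assert trans_eucl_closure({(1, 2)}) == {(1, 2), (2, 2)}
```

In the completeness proof, applying this closure to each accessibility relation of the tree model yields the DK45 case; adding the reflexive pairs (w, w) as well yields the DS5 case.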
I only have a partial answer to the question whether each consistent sentence is satisfiable in a
finite model. Note that the question loses some of its urgency by the fact that we have already
shown that the logics are decidable.
Proposition 3.3 (finite model property)
Every DK-consistent sentence is true in some finite model.
Suppose φ is such that if an occurrence of D_B is in the scope of an occurrence of D_C, then
either B ⊆ C, C ⊆ B, or B ∩ C = ∅. Then, if φ is DK45-consistent, there is a finite introspective
model in which φ is true, and if φ is DS5-consistent, there is a finite reflexive and introspective
model in which φ is true.
proof: I suggested in the proof of Proposition 3.1 how each DK-consistent set is satisfied in a finite
model; this is also proven in Gargov and Passy (1990). One can also prove this more directly. My
proof of the second statement is rather complicated: it would take me several pages to go through
all the details. I have chosen to simply omit it.
□
The literature contains a whole range of completeness proofs for modal logics `with intersection.' The following is
an attempt at an overview. There is a very elegant completeness proof in Gargov and Passy (1990)
for a logic they call `Boolean modal logic,' of which the language L^D is only a small fragment.
They also show that DK has the finite model property. Their proof works just as well for our case.
Passy and Tinchev (1991) also study a richer language than the one considered here. They give
a completeness proof that may also work for DK. Since the two last-mentioned articles take their
inspiration from propositional dynamic logic as opposed to epistemic logic, these proofs apply
only to the logic DK; the authors are not concerned with proving completeness with respect to
the transitive or euclidean models that are typical of epistemic semantics. And it is exactly this that
makes the completeness proofs for DK45 and DS5 so difficult. Fagin et al. (1992) give a proof of
the completeness of DS5; Fagin et al. (1995) claim that the logic is complete also for the language
with all D_B-operators for each B ⊆ A, but do not give a proof. van der Hoek and Meyer (1992)
and van der Hoek and Meyer (1997) contain completeness proofs for DK45 and DK as well, but
only for a language with a single operator D_A, with A the set of all agents. Since their proofs
are rather long and opaque, it is not immediately obvious how their techniques can be made to
work for the full language L^D. Finally, Yde Venema (personal communication) has a proof of the
completeness of DK using the `step-by-step' method, which can be extended to cover the cases of
DK45 and DS5 as well.
4 Conclusions
In this article, I have compared two different schemes for defining the semantics of an operator that
expresses distributed knowledge in a Kripke model. The first definition, which defines the operator
as quantifying over the intersection of the accessibility relations of the agents involved, has been
shown to be ontology dependent. The second definition, where distributed knowledge is defined in
terms of logical consequence, has been shown to be dependent on the expressive power of the language.
Moreover, the different definitions are not interchangeable salva veritate. However, in certain
respects the differences between the definitions can be disregarded: in particular, when information
states always consist of all models of a certain set of sentences. It has been shown that the different
definitions have the same weakly sound and complete axiomatization. This shows that the issue
of what a good definition of distributed knowledge is cannot be decided on the basis of logic
alone (i.e., we cannot choose one definition over another on the basis of our
intuitions about which sentences logically follow from others, because in this respect the definitions
are equivalent), but must be decided at the semantic level (which sentences are true in which
models).
References
Barwise, J. (1989). On the model theory of common knowledge. In The Situation in Logic,
number 17 in CSLI Lecture Notes, pages 201–220. CSLI Publications, Stanford.
Barwise, J. and Moss, L. S. (1996). Vicious Circles. CSLI Publications, Stanford.
Fagin, R., Halpern, J. Y., Moses, Y., and Vardi, M. (1995). Reasoning about Knowledge. The
MIT Press, Cambridge (Mass.).
Fagin, R., Halpern, J. Y., and Vardi, M. Y. (1992). What can machines know? On the properties
of knowledge in distributed systems. Journal of the Association for Computing Machinery,
39(2):328–376.
Gargov, G. and Passy, S. (1990). A note on Boolean modal logic. In Petkov, P. P., editor,
Mathematical Logic, Proceedings of the Summer School and Conference on Mathematical Logic,
honourably dedicated to the ninetieth anniversary of Arend Heyting (1898–1980), held September
13–23, 1988, in Chaika (near Varna), Bulgaria, pages 299–309. Plenum Press.
Gerbrandy, J. (1997). Dynamic epistemic logic. ILLC preprint LP-1997-04. To appear in the
proceedings of the Second Conference on Information-Theoretic Approaches to Logic, Language,
and Computation.
Gerbrandy, J. and Groeneveld, W. (1997). Reasoning about information change. Journal of Logic,
Language, and Information, 6:147–169. Also available as ILLC Report LP-1996-10.
Halpern, J. Y. (1987). Using reasoning about knowledge to analyze distributed systems. In Traub,
J., Grosz, B., Lampson, B., and Nilsson, N., editors, Annual Review of Computer Science, Vol.
2, pages 37–68. Annual Reviews Inc., Palo Alto, California.
Halpern, J. Y. and Moses, Y. (1990). Knowledge and common knowledge in a distributed environment. Journal of the Association for Computing Machinery, 37(3):549–587.
Hintikka, J. (1962). Knowledge and Belief. Cornell University Press.
Humberstone, I. L. (1985). The formalities of collective omniscience. Philosophical Studies, 48:401–423.
Kripke, S. A. (1963). A semantical analysis of modal logic I, normal propositional calculi.
Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 9:63–96.
Passy, S. and Tinchev, T. (1991). An essay in combinatory modal logic. Information and Computation, 93:263–332.
van der Hoek, W. and Meyer, J.-J. C. (1992). Making some issues of implicit knowledge explicit.
International Journal of Foundations of Computer Science, 3(2):193–223.
van der Hoek, W. and Meyer, J.-J. C. (1997). A complete epistemic logic for multiple agents:
combining distributed and common knowledge. In Bacharach, M. O. L., Gérard-Varet, L.-A.,
Mongin, P., and Shin, H. S., editors, Epistemic Logic and the Theory of Games and Decisions,
pages 35–68.
van der Hoek, W., van Linder, B., and Meyer, J.-J. (to appear). Group knowledge isn't always
distributed. To appear in: Mathematics for the Social Sciences.