GAMES AND ECONOMIC BEHAVIOR 1, 170-190 (1989)
Approximating Common Knowledge
with Common Beliefs
Dov Monderer and Dov Samet

Department of Managerial Economics and Decision Sciences, J. L. Kellogg Graduate School of Management, Northwestern University, Evanston, Illinois 60208
Strict common knowledge seems almost impossible; we can never be sure what
others know. It is shown that common knowledge can be approximated by the
weaker and more easily obtained condition of common belief. This approximation
justifies the standard assumption in game theory that the description of the game is
common knowledge. Aumann’s result on the impossibility of agreeing to disagree
can also be approximated when common knowledge is replaced by common belief. © 1989 Academic Press, Inc.
INTRODUCTION
In most models in game theory it is assumed, either explicitly or implicitly, that common knowledge regarding certain facts is shared by the
agents. How crucial is this assumption? Do these theories collapse when
common knowledge is absent, or is there any concept weaker than common knowledge that is sufficient to sustain the conclusions of the theories
or at least some closely related conclusions? We show that the weaker
concept of “common belief” can function successfully as a substitute for
common knowledge in the theory of equilibrium of Bayesian games. We
also show that Aumann’s (1976) no-agreement theorem can be generalized to common belief.
To illustrate the need for a notion weaker than common knowledge we
look at the following cases. The most frequent cases of common knowledge are public announcements. Consider, for example, an auction. Once
the auctioneer has publicly announced a price, it is assumed by most
suppliers of auction models to be common knowledge to the participants
of the auction. But is it really? One should always allow for some small
probability that a participant was absentminded or deaf at the time of the
announcement. No matter how small that probability is, the price is not common knowledge. Nobody knows for sure that the others know the price. We cannot even build the first story of this formidable tower of hierarchies of knowledge, "I believe that you believe that I believe . . . ," that is required for common knowledge. One might say, if
the probability is very small, why not simply assume it is zero? Such a
procedure would be valid only if, by making such an assumption, the
conclusions regarding the outcomes of the process do not change dramatically. A stimulating example by Rubinstein (1987) indicates that this might not be the case. In this example, which is a game theoretic formulation of the "coordinated attack" problem (see Halpern, 1986), Nature chooses one of the two two-person games G^1 and G^2 with equal probabilities. Player 1 is informed which game is to be played. He then sends a message to player 2 telling him this information. They then send messages back and forth acknowledging, each in his turn, the previously received message. Each message has a small probability τ of being lost, in which case
the whole process stops. When the process stops, neither of the players
knows whether it was his message that got lost or his partner’s acknowledgment. In this example the players climb, with high probability, the
tower of knowledge hierarchy and seem to be closer to common knowledge than the participants of our auction example. Unfortunately,
as optimizing agents the players cannot behave as if the games G^1 and G^2 were common knowledge; i.e., they cannot play the "natural" Nash equilibria in G^1 or G^2. Indeed, one can show that even if the players were only ε-optimizers (they do not care about pennies) they still cannot pretend that the games G^1 and G^2 are common knowledge.
How different then are auctions and public announcements in general
from the game theoretic version of the coordinated attack problem? Are
public announcements stylized but useless fiction? To answer these questions we develop a precise measure of their proximity to common knowledge.
When one examines a truncated hierarchy of knowledge one observes
that all is not lost. After several iterations of “I know that you know that I
know. . . ,” one stops and “he does not know that . . .” from then on.
But certainly in this case he may still “believe that . . .” with some
certainty. And then he may believe that the other does, and so on. So
hierarchies never die; they always extend ad infinitum where beliefs replace knowledge. This opens the door to an iterative definition of a common belief similar to that of common knowledge. The shortcomings of the
iterative definitions are that they lack some explanatory and descriptive
power. Clearly not even homo rationalis checks the validity of infinitely
many statements one by one. Still, everybody understands public announcements to be common knowledge. We seek to capture this intuitive
understanding with our definition of an “evidently known” event, as
reflected in Aumann’s original definition of common knowledge.
We use exactly the same type of events, evidently believed events, to
define common beliefs. We find this definition more compelling and eventually most useful in deriving theorems. The difference between these
definitions may be of some importance from the viewpoint of the interpretation of the theory of common knowledge and common beliefs. From the
formal point of view we show that the two are equivalent.
The model we use is the same as Aumann’s, in which knowledge is
given by a partition and beliefs are simply posterior probabilities. The
definitions of common knowledge and common belief in such a model are
strikingly similar. The similarity is that they are both "common." The property of being common, which is the same in both models, is expressed by the hierarchies being infinite and by the similarity of evidently known and evidently believed events. The difference between the notions is that belief replaces knowledge.
In the same way that the intensity of beliefs can be quantified, using probabilities, common belief can be quantified. A common p-belief will be defined for each p in [0, 1]. For p = 1, common 1-belief is almost the same as common knowledge. For our purposes the differences are irrelevant (common 1-beliefs were studied by Brandenburger and Dekel (1987)). So
the concept of common p-belief generalizes the concept of common
knowledge. Notions similar to belief and common belief were studied in
the framework of logical deductive systems by Gaifman (1986) and Fagin
and Halpern (1988).
Does common p-belief approximate common knowledge when p approaches 1? We answer this in the affirmative in Sections 4 and 5.
In Section 4 we consider Aumann’s (1976) agreeing to disagree theorem. If agents have the same prior distribution and their posterior probabilities for a certain event are common knowledge then these posteriors
must coincide. The theorem can be generalized. If the posteriors are
common p-belief then they may differ by at most 2(1 - p).
In Section 5 we examine the standard assumption in game theory that
for the players to play a Nash equilibrium, the description of the game
must be common knowledge. We do this by studying Bayesian games.
Nature selects a state. In each state of the world one of finitely many
games in normal form is played. Information structures determining what
the players know about Nature’s choice are given by partitions. If it is
common knowledge among players which game is to be played then players can play any Nash equilibrium in each of them. We examine the case
where there is only a common p-belief of what game is to be played and
where such common beliefs do not necessarily exist in each state but only
in a "big" set of measure 1 − δ. We show that in this case if players are ε-optimizers at each state ω, they can almost mimic the behavior of players to whom the game played is common knowledge. That is, both their strategies and their payoffs are very close to the strategies and payoffs that would be played if the games played were common knowledge.
1. PRELIMINARIES: COMMON KNOWLEDGE
The results of this section are well known in the literature of common
knowledge (see, e.g., Binmore and Brandenburger, 1987; Brandenburger
and Dekel, 1987; Geanakoplos, 1988; Tan and Werlang, 1988).
Let I be a finite set of agents and let (Ω, Σ, μ) be a probability space, where Ω is the space of states, Σ is a σ-field of events, and μ is a probability measure on Σ. For each i ∈ I, Π_i is a partition of Ω into measurable sets with positive measure, and therefore a countable partition. For ω ∈ Ω we denote by Π_i(ω) the element of Π_i containing ω. Π_i is interpreted as the information available to agent i; Π_i(ω) is the set of all states which are indistinguishable to i when ω occurs. We denote by ℱ_i the σ-field generated by Π_i. That is, ℱ_i consists of all unions of elements of Π_i. We say that i knows event E at ω, if Π_i(ω) ⊆ E. Let K_i(E) be the event "i knows E." That is,

K_i(E) = {ω : Π_i(ω) ⊆ E}.
Recall that an event C is common knowledge at ω, if there exists an event E such that ω ∈ E ⊆ C, and E ∈ ℱ_i for each i ∈ I (Aumann, 1976). Alternatively, one can show that C is common knowledge at ω iff for each n ≥ 1 and agents i_1, i_2, . . . , i_n,

ω ∈ K_{i_1}(K_{i_2}(. . . K_{i_n}(C) . . .)).    (1)

That is, when ω occurs then for all n ≥ 1 and for all agents i_1, i_2, . . . , i_n it is true that

(i_1 knows that [i_2 knows that [. . . i_{n−1} knows that [i_n knows C]] . . .]).
A different way of expressing the iterative nature of common knowledge is by considering the intersections of the following events:

C^1: Every agent knows C.
C^2: Every agent knows C^1.
C^3: Every agent knows C^2, etc.
PROPOSITION 1. Let E(C) = ∩_{n≥1} C^n, where C^n = ∩_{i∈I} K_i(C^{n−1}) for all n ≥ 1, and C^0 = C. Then,

C is common knowledge at ω iff ω ∈ E(C).
Proof. Note the following properties of the knowledge operators K_i: For all events A ⊆ B, K_i(A) ⊆ A, K_i(A) = A iff A ∈ ℱ_i, and K_i(A) ⊆ K_i(B). Also, for all sequences of events (A^n), K_i(∩_n A^n) = ∩_n K_i(A^n).

Suppose C is common knowledge at ω. Then there exists E with ω ∈ E ⊆ C such that E ∈ ℱ_i for all i ∈ I. Let i ∈ I. As E ⊆ C, E = K_i(E) ⊆ K_i(C). Therefore,

E ⊆ ∩_{i∈I} K_i(C) = C^1.

Continue to prove by induction on n that E ⊆ C^n for all n ≥ 1. That is, E ⊆ E(C). Therefore ω ∈ E(C).

Conversely, it suffices to prove that E(C) ⊆ C and that E(C) ∈ ℱ_i for all i ∈ I. Obviously, E(C) ⊆ C^1 ⊆ K_i(C) ⊆ C. To show that E(C) ∈ ℱ_i observe that for all n ≥ 1

E(C) ⊆ C^{n+1} ⊆ K_i(C^n).

Hence,

E(C) ⊆ ∩_{n≥1} K_i(C^n) = K_i(∩_{n≥1} C^n) = K_i(E(C)) ⊆ E(C). ∎
We say that the event E is evident knowledge if for each i ∈ I

E ⊆ K_i(E).    (2)

An event is evident knowledge if whenever it occurs all agents know it. The typical evident knowledge events are public announcements. When the event E is "the teacher says in class that at least one student has a red spot on his head," then this event, just by its occurrence, implies that every agent (student) knows it. Note that (2) is equivalent to K_i(E) = E, but it is the one-sided implication of (2) that captures the nature of evident knowledge. The other implication K_i(E) ⊆ E is a property of knowledge not related to evident knowledge. It is easy to see now that the following proposition holds.
PROPOSITION. C is common knowledge at ω iff there exists an evident knowledge event E such that ω ∈ E and for all i ∈ I

E ⊆ K_i(C).    (3)
The interpretation of common knowledge as stated in (3) is the following: The event C is common knowledge in a certain state of affairs ω if a certain event (E) occurs in this state, which is evidently known, and which implies that everyone knows C. As an illustration, let C be the event "the price of the picture is $1000," and let E be the event "the auctioneer announces that the price of the picture is $1000." ω is one of the states of the world in which E occurs. Obviously, when E occurs everyone knows that it has occurred (E ⊆ K_i(E)). Moreover, once E occurs everyone knows C (E ⊆ K_i(C)). Thus, C is common knowledge at each state of the world ω in which E occurs (ω ∈ E). Note that C itself is not evident knowledge; the fact that the price is $1000 is not necessarily known to all agents, whenever this is indeed the price. The two alternative definitions of common knowledge, the iterative definition given in Proposition 1 (or in (1)) and that given in (3), represent two different aspects of this notion. The iterative definition is considered by many as the natural definition of common knowledge. On the other hand, it is hard to believe that people identify common knowledge by checking the infinitely many conditions imposed by (1) one by one, or even any finite number of them beyond n ≥ 3 (see, e.g., Clark and Marshall, 1981). The definition given in (3) (which is basically Aumann's definition), with its economic means, is the one that is used to recognize and understand common knowledge.
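These finite-partition definitions are mechanical enough to check by machine. The sketch below (an illustration, not from the paper; the four-state space and the two agents' partitions are invented) implements the operator K_i and computes E(C) of Proposition 1 by iterating C^n until it stabilizes:

```python
# Hypothetical finite model: a state space and one information partition per agent.
OMEGA = frozenset({"a", "b", "c", "d"})
PARTITIONS = {
    1: [frozenset({"a", "b"}), frozenset({"c", "d"})],
    2: [frozenset({"a"}), frozenset({"b", "c"}), frozenset({"d"})],
}

def cell(i, w):
    """Pi_i(w): the partition element of agent i containing state w."""
    return next(S for S in PARTITIONS[i] if w in S)

def K(i, E):
    """K_i(E) = {w : Pi_i(w) is a subset of E}."""
    return frozenset(w for w in OMEGA if cell(i, w) <= E)

def common_knowledge_event(C):
    """E(C) = intersection of the C^n, where C^n = intersection over i of K_i(C^{n-1}).

    Since K_i(E) is contained in E, the C^n decrease, so on a finite state
    space the iteration reaches a fixed point after finitely many steps."""
    Cn = frozenset(C)
    while True:
        Cnext = frozenset.intersection(*(K(i, Cn) for i in PARTITIONS))
        if Cnext == Cn:
            return Cnext
        Cn = Cnext
```

On this toy model the meet of the two partitions is trivial, so E(C) is empty for every event except Ω itself: no nontrivial event is ever common knowledge, much as in the auction discussion above.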
2. BELIEFS AND COMMON BELIEFS
As before, (Ω, Σ, μ) is a probability space, I is a finite set of agents, and the information available to agent i is given by a partition Π_i of Ω into measurable sets with positive probabilities. We start by replacing the phrase "i knows E at ω" by the phrase "i believes E with probability at least p at ω," where 0 ≤ p ≤ 1, abbreviated to "i p-believes E at ω." Formally, i p-believes E at ω if μ(E|Π_i(ω)) ≥ p, i.e., if the posterior of E given that ω has occurred is at least p (for agent i). We denote by B_i^p(E) the event "i p-believes E." That is,

B_i^p(E) = {ω : μ(E|Π_i(ω)) ≥ p}.
We now record the following properties of B_i^p for later use. The proof of these properties is straightforward, and therefore will be omitted.
PROPOSITION 2. For each 0 ≤ p ≤ 1, i ∈ I, and E, F ∈ Σ,

B_i^p(E) ∈ ℱ_i.    (4)

If E ∈ ℱ_i then B_i^p(E) = E.    (5)

B_i^p(B_i^p(E)) = B_i^p(E).    (6)

If E ⊆ F then B_i^p(E) ⊆ B_i^p(F).    (7)

If (E^n) is a decreasing sequence of events then

B_i^p(∩_n E^n) = ∩_n B_i^p(E^n).    (8)

μ(E|B_i^p(E)) ≥ p.    (9)
Note that (4)-(8) of Proposition 2 hold if one substitutes K_i for B_i^p. K_i also has the property K_i(E) ⊆ E for each E, which is the hallmark of knowledge; whenever one knows E then E (is true). Property (9) generalizes this axiom of knowledge. It states that given that one believes E with a probability of at least p (this is the event B_i^p(E)), the probability of E is indeed at least p.

The case p = 1 is very similar to knowledge. We say that E = F a.s. (respectively E ⊆ F a.s.) if μ(EΔF) = 0 (respectively μ(E\F) = 0). With this notation, (9) implies that

B_i^1(E) ⊆ E  a.s.    (10)

While in finite models there is no point in distinguishing between knowledge and 1-belief, one may wish to preserve the distinction in continuous models. For example, we 1-believe that a number picked at random from the interval [0, 1] will be irrational, but we do not know it.
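On a finite state space the operator B_i^p of Proposition 2 is a one-line posterior computation, and properties (4)-(9) become finite checks. A minimal sketch for a single agent (the three-state prior and the partition are invented for illustration):

```python
# Hypothetical finite model: a prior mu and one agent's information partition.
MU = {"x": 0.5, "y": 0.3, "z": 0.2}
OMEGA = frozenset(MU)
PI = [frozenset({"x", "y"}), frozenset({"z"})]

def cell(w):
    """Pi(w): the partition element containing w."""
    return next(S for S in PI if w in S)

def mu(E):
    """Probability of the event E under the prior."""
    return sum(MU[w] for w in E)

def B(p, E):
    """B^p(E) = {w : mu(E | Pi(w)) >= p}."""
    E = frozenset(E)
    return frozenset(w for w in OMEGA if mu(E & cell(w)) / mu(cell(w)) >= p)
```

B(p, E) is always a union of partition cells, which is property (4); for p > 0, applying B^p to a union of cells returns it unchanged (property (5)), which in turn forces the idempotence (6).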
E is an evident p-belief if for each i ∈ I

E ⊆ B_i^p(E).    (11)

That is, whenever E occurs everyone assigns a probability of at least p to its occurrence (cf. (2)).

We now define common p-belief, generalizing the definition of common knowledge in (3).
DEFINITION 1. An event C is common p-belief at ω if there exists an evident p-belief event E such that ω ∈ E, and for all i ∈ I,

E ⊆ B_i^p(C).    (12)
To illustrate the notions of evident p-belief and common p-belief let us reconsider auctions. Let C be the event "the price of the picture is $1000." Let E be the event "the auctioneer announces that the price of the picture is $1000." If everyone must hear the announcements of the auctioneer, then E is evident knowledge. That is, whenever E occurs everyone knows that it has occurred. However, if there is some positive (possibly small) probability ε that not all the audience are hearing, then E is not evident knowledge. Moreover, E may not be evident p-belief for high p. This is the case, for example, if E occurs and one of the agents who assigns a low probability to C does not hear the announcement. However, if F is the event "the audience are all hearing, and the auctioneer announces that the price of the picture is $1000," then F is evident (1 − ε)-belief. Therefore, C is common (1 − ε)-belief at each state of the world in which F occurs.
Common p-beliefs also have the following iterative interpretation:

DEFINITION 2. For every event C and every 0 ≤ p ≤ 1 let

E^p(C) = ∩_{n≥1} C^n,    (13)

where C^0 = C, and, for n ≥ 1, C^n = ∩_{i∈I} B_i^p(C^{n−1}).
The following proposition is a counterpart of Proposition 1.

PROPOSITION 3. For every event C and for every 0 ≤ p ≤ 1,

(I) E^p(C) is evident p-belief, and E^p(C) ⊆ B_i^p(C) for all i ∈ I.
(II) C is common p-belief at ω iff ω ∈ E^p(C).
Proof. (I) First we show that (C^n)_{n≥1} is a decreasing sequence. Indeed, for each i and for each n ≥ 1, C^n ⊆ B_i^p(C^{n−1}). Therefore by (7) and (6)

B_i^p(C^n) ⊆ B_i^p(B_i^p(C^{n−1})) = B_i^p(C^{n−1}).

Hence for all n ≥ 1,

C^{n+1} = ∩_{i∈I} B_i^p(C^n) ⊆ ∩_{i∈I} B_i^p(C^{n−1}) = C^n.

Let i ∈ I. For all n ≥ 1,

E^p(C) ⊆ C^{n+1} ⊆ B_i^p(C^n),

which implies (by (8)) that

E^p(C) ⊆ ∩_n B_i^p(C^n) = B_i^p(∩_n C^n) = B_i^p(E^p(C)).

Thus, we have proved (11). Clearly, E^p(C) ⊆ C^1 ⊆ B_i^p(C).

(II) If ω ∈ E^p(C), then by (I) C is common p-belief at ω. As for the converse, suppose C is common p-belief at ω. Let ω ∈ E where E satisfies (11) and (12). We show by induction on n that E ⊆ C^n for all n ≥ 1. This will imply that ω ∈ E^p(C). By (12), E ⊆ C^1. If E ⊆ C^n, then by (11) and (7), E ⊆ B_i^p(E) ⊆ B_i^p(C^n). As this is true for each i ∈ I,

E ⊆ ∩_{i∈I} B_i^p(C^n) = C^{n+1}. ∎

By (II), E^p(C) is the event "C is common p-belief," and by (I) we find that this event itself is common p-belief at each of its states.
At this point it is useful to illustrate the new definitions with an example.

EXAMPLE 1. There are two agents 1 and 2 (I = {1, 2}). The agents either hear (H) or do not hear (D) a given announcement. The probability that an agent hears is 1 − ε. Thus (assuming independence between the players),

Ω = {HH, HD, DH, DD},

μ(HH) = (1 − ε)^2, μ(HD) = μ(DH) = ε(1 − ε), and μ(DD) = ε^2. Each agent knows his type only. That is,

Π_1 = {{HH, HD}, {DH, DD}}

and

Π_2 = {{HH, DH}, {HD, DD}}.

Let A = {HH}. That is, A is the event "everyone hears." A is not common knowledge at any ω ∈ Ω (as there does not exist a ω in which even one of the agents knows that A occurs). However, A is common (1 − ε)-belief at HH. To verify this, note that A ⊆ B_i^p(A) for i = 1, 2 and p = 1 − ε (each agent's posterior for A on his hearing cell is exactly 1 − ε), that B_1^p(A) ∩ B_2^p(A) = A, and therefore E^p(A) = A.
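Example 1 is small enough to verify by direct computation. The sketch below (ε = 1/10 is an arbitrary choice; exact rational arithmetic is used because the relevant posterior equals 1 − ε exactly, the boundary of the (1 − ε)-belief test) builds the four-state model and iterates C^n = ∩_i B_i^p(C^{n−1}):

```python
from fractions import Fraction

EPS = Fraction(1, 10)  # arbitrary probability of not hearing, for illustration
MU = {
    "HH": (1 - EPS) ** 2,
    "HD": EPS * (1 - EPS),
    "DH": EPS * (1 - EPS),
    "DD": EPS ** 2,
}
OMEGA = frozenset(MU)
# Each agent observes only his own coordinate (hear H / not hear D).
PI = {
    1: [frozenset({"HH", "HD"}), frozenset({"DH", "DD"})],
    2: [frozenset({"HH", "DH"}), frozenset({"HD", "DD"})],
}

def mu(E):
    return sum(MU[w] for w in E)

def B(i, p, E):
    """B_i^p(E): union of the cells of Pi_i on which the posterior of E is >= p."""
    E = frozenset(E)
    out = set()
    for S in PI[i]:
        if mu(E & S) / mu(S) >= p:
            out |= S
    return frozenset(out)

def Ep(p, C):
    """E^p(C): iterate C^n = B_1^p(C^{n-1}) & B_2^p(C^{n-1}) to its fixed point."""
    Cn = frozenset(C)
    while True:
        Cnext = B(1, p, Cn) & B(2, p, Cn)
        if Cnext == Cn:
            return Cnext
        Cn = Cnext
```

Here B_1^p(A) = {HH, HD} strictly contains A = {HH}, but the intersection over the two agents is A itself, so the iteration stabilizes immediately and E^p(A) = A: A is common (1 − ε)-belief exactly at HH.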
Although the notion of common p-belief is defined for every 0 ≤ p ≤ 1, it is not of much interest unless p > 0.5. When we know that an event occurs with probability 0.1 we are not likely to say that we believe in this event. Indeed, if p ≤ 0.5 we may p-believe (and even common p-believe) in two contradictory events. For example, in the current example let ε = 0.5 = p, and let E = {HD, DH} and F = {HH, DD}. Then, E^p(E) = E^p(F) = Ω. That is, at every ω ∈ Ω it is common p-belief that both agents have the same type, and it is also common p-belief that both agents do not have the same type.
We end this section with simple observations about the relations between common knowledge and common belief. If C is common knowledge at ω, then C is common p-belief at ω for every 0 ≤ p ≤ 1. If C is common 1-belief at ω, then there exists an event D which is common knowledge at ω such that C = D a.s. However, note that for a given event C, the event "C is common knowledge" does not necessarily coincide a.s. with the event "C is common 1-belief."
3. A DIGRESSION: THE AXIOMATIC APPROACH
In this section we view the previous results from a different perspective. In Sections 1 and 2 the operators K_i and B_i^p were derived from a given partition. Now we assume that these operators are given as primitives and that they are referred to as knowledge and belief because of some axioms they satisfy. Analysis of abstract knowledge operators on events is used by Bacharach (1985) and by Brown and Geanakoplos (1987). A different axiomatic approach is suggested by Samet (1987) and by Shin (1987).

Let (Ω, Σ) be a measurable space. An operator B: Σ → Σ is called a belief operator if the following conditions are satisfied for all E, F, E^1, E^2, . . . , E^n, . . . in Σ:
B(B(E)) ⊆ B(E).    (14)

B(E) ⊆ B(F) whenever E ⊆ F.    (15)

B(∩_n E^n) = ∩_n B(E^n) whenever (E^n) is a decreasing sequence.    (16)
B(E) is interpreted as the event that one believes E.

Several remarks are in order: (14) states that if one believes that he believes E, then he believes E. That is, one's beliefs concerning his beliefs are always correct. Equation (15) states that if one believes in the event E and E implies F, then he believes F. For finite models (16) is equivalent to (15). Therefore (16) is just a continuity axiom.
Let I be a finite set of agents. For each i ∈ I let B_i be a belief operator interpreted as the belief operator of i. Let E ∈ Σ. E is evident belief if E ⊆ B_i(E) for all i ∈ I. Let C ∈ Σ and let ω ∈ Ω. C is common belief at ω if there exists an evident belief event E such that ω ∈ E and E ⊆ B_i(C) for all i ∈ I.
PROPOSITION 4. Let E(C) = ∩_{n≥1} C^n, where C^0 = C and for all n ≥ 1, C^n = ∩_{i∈I} B_i(C^{n−1}). Then,

C is common belief at ω iff ω ∈ E(C).

Proof. The proof is very similar to the analogous proof of Proposition 3 and therefore will be omitted. ∎
The p-beliefs are less general than abstract belief operators, but they
allow us to quantify beliefs. In the following sections we show that in
many applications common p-belief is a good approximation to common
knowledge when p is close to 1.
4. AGREEING TO DISAGREE: THE CASE OF BELIEFS
Aumann (1976) has shown that two agents with common priors cannot agree to disagree. That is, if the posteriors of a certain event are common knowledge, then these posteriors must coincide. We will show that if the posteriors are common p-belief for large enough p, then these posteriors cannot differ significantly.

Fix an event X and define functions f_i for all agents i by

f_i(ω) = μ(X|Π_i(ω)).

Let r_i, i ∈ I, be numbers in the interval [0, 1], and consider the event C = ∩_{i∈I} {ω ∈ Ω : f_i(ω) = r_i}. We say that the posteriors of X are common p-belief at ω if C is common p-belief at ω.

THEOREM A. If the posteriors of the event X are common p-belief at some ω ∈ Ω, then any two posteriors can differ by at most 2(1 − p).
Proof. As C is common p-belief at ω, there exists an event E such that ω ∈ E, E ⊆ B_i^p(E), and E ⊆ B_i^p(C) for all i ∈ I. Since f_i is Π_i-measurable, f_i has a constant value on each S ∈ Π_i. But f_i is constantly r_i on C and therefore f_i(u) = r_i for every u ∈ S whenever S ∈ Π_i and S ∩ C ≠ ∅. Since for every u ∈ B_i^p(C), Π_i(u) ∩ C ≠ ∅ (since μ(C|Π_i(u)) ≥ p), it follows that f_i = r_i on B_i^p(C). Therefore f_i = r_i on E. Applying the previous arguments once more yields f_i = r_i on B_i^p(E).

As B_i^p(E) is a union of sets S from Π_i for which μ(X|S) = r_i, we have

μ(X|B_i^p(E)) = r_i.

Let

x = μ(X|E) = μ(X ∩ E)/μ(E).

Then, since E ⊆ B_i^p(E),

r_i = μ(X ∩ E)/μ(B_i^p(E)) + μ(X ∩ [B_i^p(E)\E])/μ(B_i^p(E)).

Hence,

r_i = x·μ(E)/μ(B_i^p(E)) + μ(X ∩ [B_i^p(E)\E])/μ(B_i^p(E)).

Since μ(B_i^p(E)) ≥ μ(E) ≥ p·μ(B_i^p(E)), the first term lies between px and x, and the second term lies between 0 and 1 − μ(E)/μ(B_i^p(E)) ≤ 1 − p. We get

−x(1 − p) ≤ r_i − x ≤ 1 − p.

Hence,

|r_i − x| ≤ 1 − p.

Since the last inequality holds for all i ∈ I,

|r_i − r_j| ≤ 2(1 − p)  for all i, j ∈ I. ∎
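Theorem A can be probed numerically. The sketch below (a four-state model invented for illustration) computes each agent's posteriors for an event X, forms the event C of matching posterior profiles, tests common P-belief via the iteration of Definition 2, and checks the bound |r_1 − r_2| ≤ 2(1 − P); in this model the differing posteriors 1/2 and 2/3 are common 2/3-belief at states 1 and 2:

```python
from fractions import Fraction

# Hypothetical model: uniform prior on four states, two agents.
MU = {k: Fraction(1, 4) for k in (1, 2, 3, 4)}
OMEGA = frozenset(MU)
PI = {1: [frozenset({1, 2}), frozenset({3, 4})],
      2: [frozenset({1, 2, 3}), frozenset({4})]}
X = frozenset({1, 3})   # the event whose posteriors the agents hold
P = Fraction(2, 3)

def mu(E):
    return sum(MU[w] for w in E)

def cell(i, w):
    return next(S for S in PI[i] if w in S)

def posterior(i, w):
    """f_i(w) = mu(X | Pi_i(w))."""
    S = cell(i, w)
    return mu(X & S) / mu(S)

def B(i, p, E):
    E = frozenset(E)
    return frozenset(w for w in OMEGA
                     if mu(E & cell(i, w)) / mu(cell(i, w)) >= p)

def Ep(p, C):
    Cn = frozenset(C)
    while True:
        Cnext = B(1, p, Cn) & B(2, p, Cn)
        if Cnext == Cn:
            return Cnext
        Cn = Cnext

# Wherever the posterior profile is common P-belief, Theorem A's bound holds.
for w in OMEGA:
    r1, r2 = posterior(1, w), posterior(2, w)
    C = frozenset(u for u in OMEGA
                  if posterior(1, u) == r1 and posterior(2, u) == r2)
    if w in Ep(P, C):
        assert abs(r1 - r2) <= 2 * (1 - P)
```

At state 1 the posteriors are r_1 = 1/2 and r_2 = 2/3, the matching-profile event is C = {1, 2, 3}, and E^P(C) = {1, 2}: the agents "agree to disagree" by 1/6, within the bound 2(1 − 2/3).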
5. COMMON p-BELIEFS IN BAYESIAN GAMES
Let 𝒥 = {1, 2, . . . , m}. Let 𝒢^1, 𝒢^2, . . . , 𝒢^m be finite games in strategic (normal) form. For every j ∈ 𝒥 the players set is I = {1, 2, . . . , n}, the set of mixed strategies of player i is A_i (a simplex spanned by his finite set of pure strategies), and the payoff function of player i is H_i^j: A → ℝ, where A = ⊗_{i∈I} A_i and ℝ is the set of real numbers. Let (Ω, Σ, μ) be a probability space. For a random variable f we will denote the integral ∫_Ω f(u)dμ(u) either by E(f) or by E_μ(f). Similarly, for S ∈ Σ with μ(S) > 0 we will denote the expression (1/μ(S)) ∫_S f(u)dμ(u) by E(f|S) or by E_μ(f|S). Let J: Ω → {1, 2, . . . , m} be a random variable that determines the game to be played. That is, if J(ω) = j, then 𝒢^j is played when ω occurs. A partition Π of Ω into measurable subsets with positive probabilities is called an information structure.
Given information structures Π_i, i ∈ I, we define a game Γ as follows: The players set is I. For each i ∈ I the strategies set of player i is Â_i, where Â_i is the set of all Π_i-measurable functions σ_i: Ω → A_i. For each σ_i ∈ Â_i and each S ∈ Π_i we denote σ_i(S) = σ_i(ω), where ω is an arbitrary state in S. Let Â = ⊗_{i∈I} Â_i be the set of strategy profiles. The payoff of player i in the game Γ is the function F_i: Â → ℝ defined by

F_i(σ) = ∫_Ω H_i^{J(ω)}(σ_1(ω), σ_2(ω), . . . , σ_n(ω)) dμ(ω),    (17)

where for ω ∈ Ω, σ(ω) = (σ_1(ω), σ_2(ω), . . . , σ_n(ω)). That is, F_i(σ) = E_μ(H_i^J(σ)).
Let σ_i ∈ Â_i, let σ_{−i} ∈ Â_{−i} = ⊗_{k≠i} Â_k, and let ω ∈ Ω. We say that σ_i is a best response strategy (b.r.s.) against σ_{−i} at ω if σ_i(ω) maximizes

E_μ(H_i^J(s_i, σ_{−i})|Π_i(ω))

over all s_i ∈ A_i. Obviously, σ_i is a best response against σ_{−i} (in the ex ante sense of maximizing F_i) iff it is a b.r.s. at each ω ∈ Ω. That is, σ is an ex ante equilibrium point iff σ is an ex post equilibrium point.

We now consider ε-equilibrium points. We say that σ is an ex ante ε-equilibrium point if for all i ∈ I

E_μ(H_i^J(σ_i, σ_{−i})) ≥ E_μ(H_i^J(s_i, σ_{−i})) − ε  for all s_i ∈ Â_i.

We say that σ is an ex post ε-equilibrium point if for all i ∈ I and for all ω ∈ Ω

E_μ(H_i^J(σ_i, σ_{−i})|Π_i(ω)) ≥ E_μ(H_i^J(s_i, σ_{−i})|Π_i(ω)) − ε  for all s_i ∈ A_i.

Clearly every ex post ε-equilibrium point is an ex ante ε-equilibrium point, but not vice versa.
Let G^j be the set of all states ω in which the game played is 𝒢^j. That is,

G^j = {ω ∈ Ω : J(ω) = j}.

Let G = {G^1, G^2, . . . , G^m} and let p > 0.5. We say that the game 𝒢^j is common knowledge (common p-belief) at ω, if the event G^j is common knowledge (common p-belief) at ω. Note that if the game 𝒢^j is common knowledge at ω, then 𝒢^j is played at ω (i.e., J(ω) = j), while if 𝒢^j is only common p-belief at ω, then the game played at ω may be different from 𝒢^j.

Set p_j = μ(G^j). For each j ∈ 𝒥 let s^j ∈ A be an equilibrium point in the game 𝒢^j with the associated payoff

H^j = (H_1^j(s^j), H_2^j(s^j), . . . , H_n^j(s^j)).

Define a function (not necessarily a strategy profile!) σ*: Ω → A by

σ*(ω) = (σ_1*(ω), σ_2*(ω), . . . , σ_n*(ω)) = (s_1^{J(ω)}, s_2^{J(ω)}, . . . , s_n^{J(ω)}).    (19)

Obviously,

E(H^J(σ*)) = Σ_j p_j H^j.

If the game played is common knowledge at each ω ∈ Ω, then each of the partitions Π_i, i ∈ I, refines G, and therefore σ_i* is a strategy for player i (i.e., σ_i* is Π_i-measurable). Obviously σ* turns out to be an equilibrium point with the associated payoffs F(σ*) = Σ_j p_j H^j.
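When every Π_i refines G, the payoff of σ* is just the p_j-weighted average of the selected equilibrium payoffs. A minimal numerical sketch (the two 2×2 coordination games, the weights p_j, and the pure equilibrium selection are all invented for illustration):

```python
# H[j][i][a1][a2]: payoff of player i in game j at the pure action pair (a1, a2).
# Game 1 rewards coordinating on action 0, game 2 rewards coordinating on action 1.
H = {
    1: {1: [[1, 0], [0, 0]], 2: [[1, 0], [0, 0]]},
    2: {1: [[0, 0], [0, 1]], 2: [[0, 0], [0, 1]]},
}
PJ = {1: 0.5, 2: 0.5}        # p_j = mu(G^j), the prior weight of each game
S = {1: (0, 0), 2: (1, 1)}   # selected pure equilibrium s^j of each game

def F_star(i):
    """Ex ante payoff of sigma*: F_i(sigma*) = sum over j of p_j * H_i^j(s^j)."""
    return sum(PJ[j] * H[j][i][S[j][0]][S[j][1]] for j in PJ)
```

Here F_star(1) = F_star(2) = 1.0. If the partitions did not refine G, σ* would require a player to condition on J(ω), which he cannot observe, and so would fail to be a strategy; this is exactly the gap Theorem B below addresses.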
We now examine the common p-belief case.
THEOREM B. Let p > 0.5. Set

1 − δ = μ{ω ∈ Ω : For some j, G^j is common p-belief at ω},

and let

M = max_{i,j,s} |H_i^j(s)|.    (20)

Then there exists Ω′ ⊆ Ω with μ(Ω′) ≥ (2p − 1)(1 − δ) such that the following holds: For any selection (s^j)_{j∈𝒥} of equilibrium points in the games (𝒢^j)_{j∈𝒥} there exists a strategy profile σ ∈ Â satisfying:

B-1: σ = s^J on Ω′.
B-2: |F_i(σ) − Σ_j p_j H_i^j(s^j)| ≤ 2M[1 − (2p − 1)(1 − δ)] for all i ∈ I.
B-3: σ is an ex post ε-equilibrium point whenever ε > 2M(1 − (2p − 1)).
Proof. For each j ∈ 𝒥 denote E^j = E^p(G^j). That is, by Proposition 3,

E^j = {ω ∈ Ω : G^j is common p-belief at ω}.

Let i ∈ I. If ω ∈ B_i^p(E^j), then μ(E^j|Π_i(ω)) ≥ p > 0.5. Therefore, for l ≠ j, μ(E^l|Π_i(ω)) < 0.5, which implies ω ∉ B_i^p(E^l). Thus the sets B_i^p(E^j), j ∈ 𝒥, are mutually disjoint. Set Ω_i = ∪_j B_i^p(E^j). Define σ_i(ω) = s_i^j whenever ω ∈ B_i^p(E^j). Later we will define σ_i(ω) for ω in the complementary set Ω_i^c of Ω_i, but whatever our definition may be, we are now able to prove B-1 and B-2 with Ω′ = ∪_j (E^j ∩ G^j). Indeed, since for all i ∈ I, E^j ⊆ B_i^p(E^j), then for each ω ∈ E^j ∩ G^j, σ(ω) = s^{J(ω)}. As for all i ∈ I, E^j ⊆ B_i^p(G^j) and E^j ⊆ B_i^p(E^j), it is easily verified that E^j ⊆ B_i^{2p−1}(E^j ∩ G^j) and therefore E^j ∩ G^j ⊆ B_i^{2p−1}(E^j ∩ G^j). Hence by (9),

μ(E^j ∩ G^j | B_i^{2p−1}(E^j ∩ G^j)) ≥ 2p − 1.

Therefore,

μ(E^j ∩ G^j) ≥ (2p − 1)·μ(B_i^{2p−1}(E^j ∩ G^j)) ≥ (2p − 1)·μ(E^j).

Thus,

μ(Ω′) = Σ_j μ(E^j ∩ G^j) ≥ (2p − 1) Σ_j μ(E^j) = (2p − 1)(1 − δ).    (21)

This proves B-1. To prove B-2 note that

|F_i(σ) − Σ_j p_j H_i^j(s^j)| = |E(H_i^J(σ)) − E(H_i^J(s^J))| ≤ 2M[1 − (2p − 1)(1 − δ)]

since σ = s^J on Ω′.

We now define σ_i on Ω_i^c. Define a new game Γ_0 as follows. The players' set is I′ = {i ∈ I : Ω_i^c ≠ ∅}. A strategy for player i is a Π_i-measurable function τ_i: Ω_i^c → A_i. Given a strategy profile τ = (τ_i)_{i∈I′} the payoff for player i is F_i(τ̄), where for k ∈ I, τ̄_k(ω) = σ_k(ω) for ω ∈ Ω_k, and τ̄_k(ω) = τ_k(ω) for ω ∈ Ω_k^c. Let τ be an equilibrium point in the game Γ_0 (such an equilibrium necessarily exists because the strategy sets are convex compact metric spaces, and the payoff functions are multilinear). Extend σ_i to Ω_i^c by letting σ_i(ω) = τ_i(ω) for all ω ∈ Ω_i^c. We now proceed to prove B-3:

If ω ∈ Ω_i^c, then σ_i is a b.r.s. of player i at ω (not merely an ε-b.r.s.). Suppose ω ∈ Ω_i and let s_i ∈ A_i. For u ∈ E^j ∩ G^j we have

H_i^{J(u)}(σ_i(u), σ_{−i}(u)) − H_i^{J(u)}(s_i, σ_{−i}(u)) = H_i^j(s^j) − H_i^j(s_i, s_{−i}^j) ≥ 0

because s^j is an equilibrium point in the game 𝒢^j. As μ(Ω′|Π_i(ω)) ≥ 2p − 1 (because ω ∈ B_i^p(E^j) and therefore ω ∈ B_i^p(G^j), which implies that ω ∈ B_i^{2p−1}(E^j ∩ G^j)), we have

E_μ(H_i^J(σ)|Π_i(ω)) ≥ E_μ(H_i^J(s_i, σ_{−i})|Π_i(ω)) − 2M(1 − (2p − 1)). ∎

Alternatively,
we can state Theorem B as follows:

THEOREM B*. Let ε > 0 and let M > 0. Then there exist p_0 < 1 and δ_0 > 0 such that for all p > p_0 and for all δ < δ_0 the following holds: For every n ≥ 1 and a set of n players I, for every m ≥ 1 and m n-person games 𝒢^1, 𝒢^2, . . . , 𝒢^m with max_{i,j,s} |H_i^j(s)| ≤ M, for every selection of equilibrium points (s^j) in the games (𝒢^j), for every probability space (Ω, Σ, μ) and a random variable J: Ω → {1, 2, . . . , m}, and for every n information structures (Π_i), if

μ{ω ∈ Ω : For some j, G^j is common p-belief at ω} > 1 − δ,

then there exists a strategy profile σ ∈ Â such that:

B*-1: μ(σ(ω) = s^{J(ω)}) > 1 − ε.
B*-2: |F_i(σ) − Σ_j p_j H_i^j(s^j)| < ε for all i ∈ I.
B*-3: σ is an ex post ε-equilibrium point.
We say that the game 𝒢^j (or the event G^j) is p-believed by the players at ω, if G^j is p-believed at ω by each i ∈ I.

THEOREM C*. Let ε > 0 and let M > 0. Then there exist p_0 < 1 and δ_0 > 0 such that for all p > p_0 and for all δ < δ_0 the following holds: For every n ≥ 1 and a set of n players I, for every m ≥ 1 and m n-person games 𝒢^1, 𝒢^2, . . . , 𝒢^m with max_{i,j,s} |H_i^j(s)| ≤ M, for every selection of equilibrium points (s^j) in the games (𝒢^j), for every probability space (Ω, Σ, μ) and a random variable J: Ω → {1, 2, . . . , m}, and for every n information structures (Π_i), if

μ{ω ∈ Ω : For some j, G^j is p-believed by the players at ω} > 1 − δ,

then there exists a strategy profile σ ∈ Â such that:

C*-1: μ(σ(ω) = s^{J(ω)}) > 1 − ε.
C*-2: |F_i(σ) − Σ_j p_j H_i^j(s^j)| < ε for all i ∈ I.
C*-3: σ is an ex ante ε-equilibrium point.
Proof. For all i ∈ I let Ω_i = ∪_j B_i^p(G^j). Define σ_i(ω) = s_i^j whenever ω ∈ B_i^p(G^j) and define σ_i arbitrarily for ω ∈ Ω_i^c. It is obvious that σ(ω) = s^{J(ω)} for ω ∈ Ω*, where

Ω* = ∪_j [G^j ∩ (∩_i B_i^p(G^j))].

If 1 − p and δ are sufficiently small, then μ(Ω*) is close to 1, which simultaneously proves C*-1, C*-2, and C*-3. ∎
As there are only finitely many players, Theorem C* ensures that if the partition to games is p-believed by each of the players on a large set, where p is sufficiently close to 1, then the players (using ex ante ε-equilibrium strategies) can almost correlate their strategies according to a given selection of equilibrium points in the games 𝒢^j, j ∈ 𝒥. Theorem B* ensures that if the partition to games is common p-belief on a large set, then the players can almost correlate their strategies using ex post ε-equilibrium strategies.
We conclude this section by pointing out the obvious but interesting fact that we can dispense with the assumption of common priors in Theorems B, B*, and C*. First note that one can define the belief operators B_i^p by using a different prior for each agent i. Then one can use the results of Section 3 to define common p-belief. Finally, instead of requiring that with high probability the games be common p-belief, we can require that each of the players, using his own prior, assigns a high probability to the event that the games are common p-belief.
187 COMMON BELIEFS
To illustrate the results of this section we adopt (with minor changes) an example of Rubinstein (1987).
EXAMPLE 2. Nature chooses with equal probabilities one of two two-person games 𝒢¹ and 𝒢² with the same strategy sets. Player 1 is informed which game is to be played. If the game chosen is 𝒢², he sends a message to player 2 telling him that 𝒢² is to be played. They then send messages back and forth acknowledging, each in his turn, the previously received message. Each message has a probability τ < 0.5 of being lost, in which case the whole process stops, and the players, knowing the number of messages they sent, choose their strategies. Note that the process stops with probability 1 after a finite number of steps and that the whole process is not part of the strategic decisions of the players, but is given to them.
We now analyze the game Γ_τ just described with

𝒢¹ = [ (1, 1)  (1, 0)
       (0, 1)  (0, 0) ]

and

𝒢² = [ (0, 0)   (0, −2)
       (−2, 0)  (1, 1)  ].
The set of states is Ω = {0, 1, 2, . . .}, where each k in Ω represents the event "k messages had been sent until the process stopped." Therefore, μ(0) = 0.5 and, for k ≥ 1, μ(k) = 0.5τ(1 − τ)^{k−1}. Also, 𝒢¹ is played iff k = 0. That is, the partition to games is G = {G¹, G²}, where G¹ = {0} and G² = {1, 2, 3, . . .} (i.e., J(0) = 1 and, for k ≥ 1, J(k) = 2).
When the process stops, neither of the players knows whether it was his own message that got lost or his partner's acknowledgment. Therefore the information structures are

Π₁ = {{0}, {1, 2}, {3, 4}, . . .}

and

Π₂ = {{0, 1}, {2, 3}, {4, 5}, . . .}.
Let p_τ = 1/(1 + τ). It is easily verified that

B₁^{p_τ}(G¹) ∩ B₂^{p_τ}(G¹) = {0}

and that

B₁^{p_τ}(G²) ∩ B₂^{p_τ}(G²) = {2, 3, . . .}.

Therefore the game played is p_τ-believed by the players at each k ≠ 1. Let
s¹ = (R1, C1) and s² = (R2, C2), where Ri and Cj stand for row i and column j, respectively. Obviously s¹ and s² are equilibrium points in the games 𝒢¹ and 𝒢², respectively. As both p_τ → 1 and μ({1}) = 0.5τ → 0 when τ → 0, Theorem C* ensures (for any ε > 0 and sufficiently small τ) the existence of an ex ante ε-equilibrium point σ with

μ({k ∈ Ω: σ(k) = s^{J(k)}}) > 1 − ε.    (22)

Since μ({0}) = 0.5, (22) implies that σ(0) = s¹ and that σ(k) = s² for k ∈ Ω′ ⊂ {1, 2, 3, . . .}, with μ(Ω′) > 0.5 − ε.
Indeed, such an equilibrium is obtained for τ < ε if both players choose s^j whenever they p_τ-believe G^j. That is, σ₁(0) = s₁¹, σ₂(0) = σ₂(1) = s₂¹, σ₁(1) = s₁², and σ(k) = (s₁², s₂²) for all k ≥ 2.
However, for ε < ½ the above selection of strategies σ is not an ex post ε-equilibrium point. To verify this, consider the information set {1, 2} of player 1. If σ were an ex post ε-equilibrium point, then

H₁²(R2, C1)·(1/(2 − τ)) + H₁²(R2, C2)·((1 − τ)/(2 − τ)) ≥ H₁²(R1, C1)·(1/(2 − τ)) + H₁²(R1, C2)·((1 − τ)/(2 − τ)) − ε.

Therefore (1 + τ)/(2 − τ) ≤ ε, and since (1 + τ)/(2 − τ) > ½ for 0 < τ < 0.5, we get ε > ½.
Moreover, it can be shown (using similar arguments) that if ε is sufficiently small, then in every ex post ε-equilibrium σ the players will (at any state k) choose row 2 and column 2, respectively, with probability bounded away from 1. That is, if for all k ≥ 0 player 1 chooses row 2 with probability p_k and player 2 chooses column 2 with probability q_k, then for all k ≥ 0 both p_k and q_k stay small. So, by Theorem B*, there exist p₀ < 1 and α₀ < 1 such that, no matter how small τ is, for p > p₀ the amount of common p-belief in the game Γ_τ is at most α₀. Indeed, for p > ½ it is easily verified that E^p(G²) = ∅ and E^p(G¹) ⊂ {0}, and therefore

μ{k ∈ Ω: For some j, G^j is common p-belief at k} ≤ 0.5.
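The collapse of common p-belief can also be checked numerically. The sketch below is illustrative and only paraphrases the paper's definitions: on a finite space it computes the set of states where an event C is common p-belief as the largest set E contained in every B_i^p(C) and in every B_i^p(E), by iteratively discarding states until a fixpoint is reached. The truncation at K only removes states, so emptiness on the truncated space is indicative of the claim, not a proof.

```python
from fractions import Fraction

def p_belief(prior, partition, event, p):
    # States whose partition cell assigns conditional probability >= p to event.
    out = set()
    for cell in partition:
        mass = sum(prior[w] for w in cell)
        hit = sum(prior[w] for w in cell if w in event)
        if mass > 0 and hit >= p * mass:
            out |= set(cell)
    return out

def common_p_belief(prior, partitions, event, p):
    # Largest E with E subset of B_i^p(event) and of B_i^p(E) for all i:
    # start from the intersection of first-order beliefs and shrink to a fixpoint.
    E = set.intersection(*[p_belief(prior, Pi, event, p) for Pi in partitions])
    while True:
        E2 = E & set.intersection(*[p_belief(prior, Pi, E, p) for Pi in partitions])
        if E2 == E:
            return E
        E = E2

tau, K = Fraction(1, 10), 60      # illustrative loss probability and truncation
prior = {0: Fraction(1, 2)}
for k in range(1, K + 1):
    prior[k] = Fraction(1, 2) * tau * (1 - tau) ** (k - 1)
G1, G2 = {0}, set(range(1, K + 1))
P1 = [{0}] + [{k, k + 1} for k in range(1, K, 2)]
P2 = [{0, 1}] + [{k, k + 1} for k in range(2, K, 2)] + [{K}]
p = Fraction(9, 10)

print(common_p_belief(prior, [P1, P2], G2, p))   # set(): G2 is never common 0.9-belief
print(common_p_belief(prior, [P1, P2], G1, p))   # {0}
```

The iteration peels off one state of G² per round, exactly the Rubinstein-style unraveling, until nothing remains; G¹ survives only at the state 0, of measure 0.5.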
However, if player 2 is not supposed to acknowledge, then the situation is totally different. In this case Ω = {0, 1NR, 1R}, where 1R stands for "player 1 sends a message and it is received by player 2," and 1NR stands for the case that the message has not been received. Hence, μ(0) = 0.5, μ(1NR) = 0.5τ, μ(1R) = 0.5(1 − τ), and the information structures are Π₁ = {{0}, {1NR, 1R}} and Π₂ = {{0, 1NR}, {1R}}. It is easily checked that E^p(G¹) = {0} and that E^p(G²) = {1R}, and hence

μ{k ∈ Ω: For some j, G^j is common p-belief at k} = 1 − 0.5τ → 1 when τ → 0.
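The same fixpoint computation on this three-state space confirms the contrast with the acknowledgment case. The sketch is illustrative: τ = 1/10 and p = 9/10 are arbitrary, chosen so that p ≤ min(p_τ, 1 − τ) holds (with equality at 1 − τ, another reason for exact rationals).

```python
from fractions import Fraction

def p_belief(prior, partition, event, p):
    out = set()
    for cell in partition:
        mass = sum(prior[w] for w in cell)
        hit = sum(prior[w] for w in cell if w in event)
        if mass > 0 and hit >= p * mass:
            out |= set(cell)
    return out

def common_p_belief(prior, partitions, event, p):
    # Shrink to the largest E contained in every B_i^p(event) and B_i^p(E).
    E = set.intersection(*[p_belief(prior, Pi, event, p) for Pi in partitions])
    while True:
        E2 = E & set.intersection(*[p_belief(prior, Pi, E, p) for Pi in partitions])
        if E2 == E:
            return E
        E = E2

tau = Fraction(1, 10)
prior = {'0': Fraction(1, 2), '1NR': tau / 2, '1R': (1 - tau) / 2}
P1 = [{'0'}, {'1NR', '1R'}]
P2 = [{'0', '1NR'}, {'1R'}]
G1, G2 = {'0'}, {'1NR', '1R'}
p = Fraction(9, 10)

E1 = common_p_belief(prior, [P1, P2], G1, p)
E2 = common_p_belief(prior, [P1, P2], G2, p)
print(E1, E2)                         # {'0'} {'1R'}
mass = sum(prior[w] for w in E1 | E2)
print(mass == 1 - tau / 2)            # True: measure 1 - 0.5*tau -> 1 as tau -> 0
```

Without the acknowledgment round there is no unraveling: some game is common p-belief everywhere except the single state 1NR, whose measure 0.5τ vanishes as τ → 0.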
Thus, by Theorem B*, for any ε > 0 and sufficiently small τ we can find an ex post ε-equilibrium point σ in which the players play according to s¹ and s² with probability of at least 1 − ε. Indeed, such a strategy profile σ is given by σ(0) = s¹ and σ(1R) = s², which uniquely defines a strategy profile.
6. CONCLUSION
We proposed the notion of common p-belief as an approximation to common knowledge and showed that theories that use common knowledge can be approximated when common p-belief replaces common knowledge. The concept of common p-belief can be considered a formal definition of almost common knowledge. A priori there are two possible ways to define it. We have chosen to weaken the "knowledge" part of common knowledge, requiring that it be only a belief that is shared by the agents. Thus we defined almost common knowledge to be common almost-knowledge, leaving intact the "common," that is, the way agents share what they share. We could instead weaken the "common" part of common knowledge, defining almost common knowledge to be almost-common knowledge. Viewing common knowledge from the iterative viewpoint suggests that this can be done by truncating the hierarchy of knowledge. The coordinated attack game shows that as far as equilibrium theory is concerned this approach fails. The reason seems to be that a complete analysis cannot be carried out on the basis of truncated hierarchies. Two situations in which truncation takes place after the same number of iterations may differ dramatically because different beliefs are held beyond the truncation point. One is necessarily led to consider an infinite hierarchy of beliefs, which results in the approach we adopted.
REFERENCES

AUMANN, R. J. (1976). "Agreeing to Disagree," Ann. Statist. 4, 1236–1239.
BINMORE, K., AND BRANDENBURGER, A. (1987). "Common Knowledge and Game Theory," LSE, mimeo.
BRANDENBURGER, A., AND DEKEL, E. (1987). "Common Knowledge with Probability 1," J. Math. Econ. 16.
BROWN, D., AND GEANAKOPLOS, J. (1987). "Common Knowledge without Partitions," Yale mimeo.
CLARK, H. H., AND MARSHALL, C. R. (1981). "Definite Reference and Mutual Knowledge," in Elements of Discourse Understanding (A. K. Joshi, B. L. Webber, and I. A. Sag, Eds.). London/New York: Cambridge Univ. Press.
FAGIN, R., AND HALPERN, J. Y. (1988). "Reasoning about Knowledge and Probability," in Reasoning about Knowledge (M. Y. Vardi, Ed.), pp. 277–293. IBM publication.
GAIFMAN, H. (1986). "A Theory of Higher Order Probabilities," in Theoretical Aspects of Reasoning about Knowledge: Proceedings of the 1986 Conference (J. Y. Halpern, Ed.), pp. 275–292. Kaufmann.
GEANAKOPLOS, J. (1988). "Common Knowledge, Bayesian Learning, and Market Speculations with Bounded Rationality," Yale mimeo.
HALPERN, J. Y. (1986). "Reasoning about Knowledge: An Overview," in Theoretical Aspects of Reasoning about Knowledge: Proceedings of the 1986 Conference (J. Y. Halpern, Ed.), pp. 1–18.
RUBINSTEIN, A. (1987). "A Game with 'Almost Common Knowledge': An Example," Theoretical Economics D.P. 87/165. London School of Economics.
SAMET, D. (1987). "Ignoring Ignorance and Agreeing to Disagree," preprint, Dept. of Managerial Economics and Decision Sciences, J. L. Kellogg Graduate School of Management, Northwestern University.
SHIN, H. S. (1987). "Logical Structure of Common Knowledge. I. A Framework for the Analysis of Differential Information," mimeo. Oxford: Nuffield College.
TAN, T. C., AND WERLANG, S. (1988). "On Aumann's Notion of Common Knowledge: An Alternative Approach," CARESS Working Paper No. 88-09.