Dynamic Logics for Interactive Belief Revision
Sonja Smets, University of Groningen
Alexandru Baltag, Oxford University
website:
http://alexandru.tiddlyspot.com
Combining two paradigms: DEL and BR
GENERAL PROBLEM: Develop logics for reasoning
about multi-agent belief revision, knowledge updates
and belief upgrades induced by various forms of
learning, communication and interaction.
Methodology: Extend the DEL (Dynamic Epistemic
Logic) setting and methods to integrate ideas from
classical BR (Belief Revision theory).
PLAN OF THIS COURSE
1. Standard Epistemic, Doxastic, Dynamic and
Dynamic-Epistemic Logics: Logics of knowledge
and belief: S5, S4, K and KD45. Epistemic models.
“Standard DEL”: public and private announcements,
event models, the Product Update, reduction laws.
Cheating and the BR problem.
2. Non-monotonic Logics and Classical BR: From
default reasoning to BR. The AGM axioms. Models for
single-agent BR: plausibility models, Grove/Spohn
models, probabilistic models. Conditional belief. The
failure of AGM for revision with doxastic information.
3. Doxastic Dynamics and its Epistemological
value: The Gettier problem. Doxastic attitudes:
strong belief, irrevocable knowledge, defeasible
knowledge (“safe belief”) etc. The logics of doxastic
attitudes. Dynamics: updates and upgrades; questions
and answers; “BR policies”; reduction laws. Dynamic
understanding of doxastic attitudes. The fixed-point
conception of “knowledge”.
4. Multi-agent Belief Revision. Multi-agent
plausibility models. Joint upgrades and private
upgrades. Interactive BR and its logic: doxastic event
models and the Action-Priority Update; reduction laws.
5. Applications and Open Problems: Information
flow in Game Theory and epistemic analysis of solution
concepts. Communication strategies in dialogues,
cryptographic protocols, AI networks of interacting
robots, the Internet, social, economic and political
networks. Belief merge in Social Choice Theory.
Interactive Learning Theory: fixed points and cycles of
learning. Philosophy of Information and Social
Epistemology. Open questions.
Lecture 1:
Standard Epistemic, Doxastic, Dynamic
and Dynamic-Epistemic Logics
Plan of Lecture 1
1.1 Introduction: Examples, Stories and Puzzles.
1.2 Kripke Models, Epistemic Models, Doxastic Models.
Logics: S5, S4, KD45
1.3 Public and Private Announcements.
1.4 “Standard DEL”: Event Models, Product Update,
Reduction Laws.
1.5 The “BR Problem”: cheating and the failure of
“Standard DEL”.
1.1. Examples, Stories and Puzzles
Examples of Multi-agent Systems:
1. Computation: a network of communicating
computers; the Internet
2. Games: players in a game, e.g. chess or poker
3. AI: a team of robots exploring their environment and
interacting with each other
4. Cryptographic Communication: some
communicating agents (“principals”) following a
cryptographic protocol to communicate in a private
and secret way
5. Economy: economic agents engaged in transactions in
a market
6. Society: people engaged in social activities
7. Politics: “political games”, diplomacy, war.
“Dynamic” and “informational” systems
Such multi-agent systems are dynamic: agents “do” some
actions, changing the system by interacting with each other.
Examples of actions: moves in a game, communicating (sending,
receiving or intercepting) messages, buying/selling etc.
On the other hand, these systems are also informational
systems: agents acquire, store, process and exchange
information about each other and the environment. This
information may be truthful, and then it’s called knowledge.
Or the information may be only plausible (or probable),
well-justified, but still possibly false; then it’s called
(justified) belief.
Nested Knowledge
Chuangtze and Hueitse had strolled onto a bridge
over the Hao, when the former observed, “See how
the small fish are darting about! That is the
happiness of the fish”.
“You are not fish yourself ”, said Hueitse, “so how
can you know the happiness of the fish?”
“You are not me”, retorted Chuangtse, “so how can
you know that I do not know?”
Chuangtse, c. 300 B. C.
Self-Nesting: (Lack of) Introspection
As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don’t know
We don’t know.
Feb. 12, 2002, Department of Defense news briefing
... And Belief ?
“Man is made by his belief. As he believes, so he
is.”
(Bhagavad Gita, part of the epic poem Mahabharata)
“Myths which are believed in tend to become true.”
(George Orwell)
“To succeed, we must first believe that we can.”
(Michael Korda)
“By believing passionately in some thing that does
not yet exist, we create it.”
(Nikos Kazantzakis)
“The thing always happens that you really believe
in; and the belief in a thing makes it happen.”
(Frank Lloyd Wright)
Oh, really?! But this is a Lie!
So what?
Everyone lies online. In fact, readers expect you to
lie. If you don’t, they’ll think you make less than
you actually do. So the only way to tell the truth is
to lie.
(Brad Pitt’s thoughts on lying about how much money you
make on your online dating profile; Aug 2009 interview to
“Wired” magazine)
Well, all these believing, lying and cheating interactions
can only end in extreme skepticism:
“I don’t even believe the truth anymore.”
(J. Edgar Hoover, the founder of the FBI)
Though even this was already anticipated centuries ago by
the most famous pirate of the Caribbean:
Mullroy: What’s your purpose in Port Royal, Mr.
Smith?
Murtogg: Yeah, and no lies.
Jack Sparrow: Well, then, I confess, it is my
intention to commandeer one of these ships, pick
up a crew in Tortuga, raid, pillage, plunder and
otherwise pilfer my weasely black guts out.
Murtogg: I said no lies.
Mullroy: I think he's telling the truth.
Murtogg: Don’t be stupid: if he were telling the
truth, he wouldn’t have told it to us.
Jack Sparrow: Unless, of course, he knew you
wouldn’t believe the truth even if he told it to you.
Back to the real world!
The only escape from these infinite loops seems to be the
solid ground of the real world.
“Reality is that which, when you stop believing in
it, doesn’t go away.”
(Philip K. Dick)
But how to get back to reality, from the midst of
our mistaken beliefs??
Answer: Belief Revision!
Dare to confront your mistakes! Learn to give up!
“It is not bigotry to be certain we are right; but it is
bigotry to be unable to imagine how we might be
wrong.”
(G. K. Chesterton)
Belief revision is action!
True belief revision is “dynamic”: a sustained,
self-correcting, truth-tracking action. True
knowledge can only be recovered by effort.
So, finally, we get to what we could call the “Motto” of
Dynamic-Epistemic Logic:
“The wise sees action and knowledge as one. They
see truly.”
(Bhagavad Gita, once again)
Uncertainty
Uncertainty is a corollary of imperfect knowledge (or
“imperfect information”). A game of imperfect information
is one in which some moves are hidden, so that the players
don’t know all that was going on: they only have a partial
view of the situation.
Example: poker (in contrast to chess).
A player may be uncertain about the real situation of the
game at a given time: e.g. they simply cannot distinguish
between a situation in which another player has a winning
hand and a situation in which this is not the case. For all
our player knows, these situations are both “possible”.
Evolving Knowledge
The knowledge a player has may change in time, due to his
or other players’ actions. For instance, he can do some
move that allows him to learn some of the cards of the
other player. As a general rule, players try to minimize
their uncertainty and increase their knowledge.
Wrong Beliefs: Cheating
In their drive for more knowledge and less uncertainty,
players may be induced to acquire a false “certainty”: they
will “know” things that are not true.
Example: bluffing (in poker) may induce your opponent to
believe you have a winning hand, when in fact you don’t.
Notice that such a wrong belief, once it becomes
“certainty”, might look just like knowledge (to the
believer): your opponent may really think he “knows” you
have a winning hand.
“Everybody knows...”
Suppose that, in fact, everybody knows the road rules in
France. For instance, everybody knows that a red light
means “stop” and a green light means “go”. And suppose
everybody respects the rules that (s)he knows.
Question: Is this enough for you to feel safe, as a driver?
Answer: NO.
Why? Think about it!
Common Knowledge
Suppose the road rules (and the fact they are respected)
are common knowledge: everybody knows (and respects)
the rules, and everybody knows that everybody knows (and
respects) the rules, and... etc.
Now, you can drive safely!
Epistemic Puzzle no. 1: To learn is to falsify
Our starting example concerns a “love triangle”: suppose
that Alice and Bob are a couple, but Alice has just started
an affair with Charles.
At some point, Alice sends to Charles an email, saying:
“Don’t worry, Bob doesn’t know about us”.
But suppose now that Bob accidentally reads the message
(by, say, secretely breaking into Alice’s email account).
Then, paradoxically enough, after seeing (and believing)
the message which says he doesn’t know..., he will know !
So, in this case, learning the message is a way to
falsify it.
As we’ll see, this example shows that standard
belief-revision postulates may fail to hold in such
complex learning actions, in which the message to be
learned refers to the knowledge of the hearer.
Epistemic Puzzle no. 2: Self-fulfilling falsehoods
Suppose Alice becomes somehow convinced that Bob knows
everything (about the affair).
This is false (Bob doesn’t have a clue), but nevertheless
she’s so convinced that she makes an attempt to warn
Charles by sending him a message:
“Bob knows everything about the affair!”
As before, Bob secretly reads (and believes) the message.
While false at the moment of its sending, the message
becomes true: now he knows.
So, communicating a false belief (i.e. Alice’s action) might
be a self-fulfilling prophecy: Alice’s false belief, once
communicated, becomes true.
At the same time, the action of (reading and) believing a
falsehood (i.e. Bob’s action) can be self-fulfilling: the false
message, once believed, becomes true.
Epistemic Puzzle no. 3: Self-enabling falsehoods
Suppose that in fact Alice was faithful, despite all the
attempts made by Charles to seduce her. Out of despair,
Charles comes up with a “cool” plan of how to break up the
marriage: he sends an email which is identical to the one in
the second puzzle (bearing Alice’s signature and warning
Charles that Bob knows about their affair). Moreover, he
makes sure somehow that Bob will have the opportunity to
read the message. Knowing Bob’s quick temper, Charles
expects him to sue for a divorce; knowing Alice’s fragile,
volatile sensitivity, he also expects that, while on the
rebound, she’d be open for a possible relationship with
The plan works: as a result, Bob is misled into
“knowing” that he has been cheated.
He promptly sends Alice a message saying: ”I’ll see you in
court”.
After the divorce, Charles makes his seductive move, playing
the friend-in-need. Again, the original message
becomes true: now, Alice does have an affair with
Charles, and Bob knows it.
Sending a false message has enabled its validation.
Epistemic Puzzle no. 4: Muddy Children
Suppose there are 4 children, all of them being good
logicians, exactly 3 of them having dirty faces. Each can
see the faces of the others, but doesn’t see his/her own face.
The father publicly announces:
“At least one of you is dirty”.
Then the father does another paradoxical thing: he starts
repeating over and over the same question: “Do you know
if you are dirty or not, and if so, which of the two?”
After each question, the children have to answer publicly,
sincerely and simultaneously, based only on their knowledge,
without taking any guesses. No other communication is
allowed and nobody can lie.
One can show that, after 2 rounds of questions and
answers, all the dirty children will come to know
they are dirty! So they give this answer in the 3rd round,
after which the clean child also comes to know she's
clean, giving the correct answer in the 4th round.
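
To make the two-round claim concrete, here is a small Python
simulation (a sketch under our own encoding, not part of the original
slides: a world is a tuple of 0/1 dirtiness bits, and each round of
public “I don't know” answers deletes the worlds in which some child
would already have known):

from itertools import product

N, actual = 4, (1, 1, 1, 0)   # 1 = dirty; the actual world has 3 dirty children
# The father's announcement removes the all-clean world:
worlds = {w for w in product((0, 1), repeat=N) if any(w)}

def knows(ws, w, i):
    # Child i sees every face but her own, so in world w she considers the
    # surviving worlds agreeing with w everywhere except possibly at i.
    # She knows her own state iff exactly one such world survives.
    return sum(1 for v in ws
               if all(v[j] == w[j] for j in range(N) if j != i)) == 1

rounds = 0
while not all(knows(worlds, actual, i) for i in range(N) if actual[i]):
    rounds += 1
    old = worlds   # a public round of "I don't know" answers:
    worlds = {w for w in old if not any(knows(old, w, i) for i in range(N))}

print(rounds)   # 2: after two rounds, all three dirty children know they are dirty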
Muddy Children Puzzle continued
First Question: What’s the point of the father’s first
announcement (“At least one of you is dirty”)?
Apparently, this message is not informative to any of the
children: the statement was already known to everybody!
But the puzzle wouldn’t work without it: in fact this
announcement adds information to the system! The
children implicitly learn some new fact, namely the fact
that what each of them used to know in private is now
public knowledge.
Second Question: What’s the point of the father’s
repeated questions?
If the father knows that his children are good logicians,
then at each step the father knows already the answer to
his question, before even asking it! However, the puzzle
wouldn’t work without these questions. In a way, it seems
the father’s questions are “abnormal”, in that they don’t
actually aim at filling a gap in father’s knowledge; but
instead they are part of a Socratic strategy of
teaching-through-questions.
Third Question: How can the children’s statements of
ignorance lead them to knowledge?
Puzzle no 5: Sneaky Children
Let us modify the last example a bit.
Suppose the children are somehow rewarded for answering
as quickly as possible, but they are punished for incorrect
answers; thus they are interested in getting to the correct
conclusion as fast as possible.
Suppose also that, after the second round of
questions, two of the dirty children “cheat” on the
others by secretly announcing each other that they’re
dirty, while none of the others suspects this can happen.
Honest Children Always Suffer
One can easily see that the third dirty child will be
totally deceived, coming to the “logical” conclusion
that... she is clean!
So, after giving the wrong answer, she ends up being
punished for her credulity, despite her impeccable logic.
Clean Children Always Go Crazy
What happens to the clean child?
Well, assuming she doesn’t suspect any cheating, she
is facing a contradiction: two of the dirty children
answered too quickly, coming to know they’re dirty before
they were supposed to know!
If the third child simply updates her knowledge
monotonically with this new information (and uses classical
logic), then she ends up believing everything: she goes
crazy!
1.2. Epistemic-Doxastic Models and Logics
Epistemic Logic was first formalized by Hintikka (1962),
who also sketched the first steps in formalizing doxastic
logic.
They were further developed and studied by both
philosophers (Parikh, Stalnaker etc.) and
computer-scientists (Halpern, Vardi, Fagin etc.)
Models for Single-Agent Information
We are given a set of “possible worlds”, meant to represent
all the relevant epistemic/doxastic possibilities in a
certain situation.
EXAMPLE: a coin is on the table, but the (implicit) agent
doesn’t know (or believe he knows) which face is up.
   (H)     (T)
Knowledge or Belief
The universal quantifier over the domain of possibilities is
interpreted as knowledge, or belief, by the implicit agent.
So we say the agent knows, or believes, a sentence ϕ if ϕ
is true in all the possible worlds of the model.
The specific interpretation (knowledge or belief) depends
on the context.
In the previous example, the agent doesn’t know (nor
believe) that the coin lies Heads up, and neither that it lies
Tails up.
Learning: Update
Suppose now the agent looks at the upper face of the coin
and he sees it’s Heads up.
The model of the new situation is now:
   (H)
Only one epistemic possibility has survived: the agent now
knows/believes that the coin lies Heads up.
Update as World Elimination
In general, updating corresponds to world
elimination:
an update with a sentence ϕ is simply the operation of
deleting all the non-ϕ possibilities.
After the update, the worlds not satisfying ϕ are no longer
possible: the actual world is known not to be among them.
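
As a quick illustration, here is a minimal Python sketch of update as
world elimination (the encoding of worlds as labels and sentences as
predicates on worlds is our own, chosen for illustration):

def update(worlds, phi):
    # Deleting all the non-phi possibilities.
    return {w for w in worlds if phi(w)}

coin = {"H", "T"}                        # the two-world coin model above
print(update(coin, lambda w: w == "H"))  # {'H'}: one possibility survives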
Truth and Reality
But is ϕ “really” true (in the “real” world), apart from
the agent’s knowledge or beliefs?
For this, we need to specify which of the possible worlds
is the actual world.
Real World
Suppose that, in the original situation (before learning), the
coin lied Heads up indeed (though the agent didn’t know,
or believe, this).
We represent this situation by
   (H)*     (T)

(the asterisk * marks the actual world)
Mistaken Updates
But what if the real world is not among the “possible”
ones? What if the agent’s sight was so bad that she only
thought she saw the coin lying Heads up, when in fact it
lay Tails up?
After the “update”, her epistemically-possible worlds are
just
   (H)
but we cannot mark the actual world here, since it
doesn’t belong to the agent’s model!
False Beliefs
Clearly, in this case, the model only represents the agent’s
beliefs, but NOT her “knowledge” (in any meaningful
sense): the agent believes that the coin lies Heads up, but
this is wrong!
Knowledge is usually assumed to be truthful, but in this
case the agent’s belief is false.
But still, how can we talk about “truth” in a model
in which the actual world is not represented?!
Third-person Models
The solution is to go beyond the agent’s own model, by
taking an “objective” (third-person) perspective:
the real possibility is always in the model, even if the
agent believes it to be impossible.
To point out which worlds are considered possible by the
agent we now use an arrow that points to them:
   (H)* ---> (T)
Belief now quantifies only over the worlds pointed to by the arrows:
so (in the real world H of this model) the agent believes T.
Beliefs about Beliefs
How can we represent beliefs about beliefs?
E.g. does the agent believe that she believes that
the coin lies Tails up?
Intuitively, the answer is yes, at least if our agent is
introspective enough. But, formally, this is unaccounted for
by the arrows: in the world (T) in which the agent believes
herself to be, there are no arrows pointing to any other
world! So the agent doesn’t believe that she believes
anything! Or rather (according to the semantics of
universal modalities over empty domains) she believes that
she believes everything!
More Arrows
That can’t be right: it just means we didn’t finish our
modeling. To represent : the agent’s beliefs about her own
beliefs etc, we must continue drawing arrows coming, not
only out of the actual world, but also out of the other
possible worlds, and pointing to yet other possibilities.
In the model
   (H)* ---> (T)   (plus a loop at T, i.e. an arrow from T to itself)
the agent is fully introspective: in the real world H she
believes T and she doesn’t believe H, but she also believes
that she believes T and that she doesn’t believe H.
KD45
So doxastic models will be “finished” models, i.e. models in
which every possible world has outgoing arrows, pointing to
other (or the same) world(s). They are also
introspective models (to be defined).
These requirements will give us the standard KD45
semantic conditions on a doxastic model.
Knowledge
We can now represent the original knowledge situation, in
which the agent simply didn’t know which face is up (and
she was introspectively aware of this lack of knowledge) in
the “arrow” style:
   (H)* <---> (T)   (plus loops at both H and T)
The arrows now describe an equivalence relation: these
are the standard S5 conditions on epistemic models.
Multi-agent Models
Suppose we have two agents, each with his/her own
knowledge/beliefs. We can then represent the situation
using labeled arrows (labeled with the name of each
agent).
In this way, we arrive at multi-agent Kripke models.
Scenario 1: the concealed coin
Two players a, b and a referee c play a game. In front of
everybody, the referee throws a fair coin, catching it in his
palm and fully covering it, before anybody (including
himself) can see on which side the coin has landed.
   (H) <--a,b,c--> (T)   (loops: a,b,c at both worlds)
Kripke Models
For a set Φ of facts and a finite set A of agents, a
Φ-Kripke model is a triple
   S = (S, (→a)a∈A , ‖·‖)

consisting of
1. a set S of “worlds”;
2. a family of binary accessibility relations →a ⊆ S × S,
   one for each agent a ∈ A;
3. a valuation ‖·‖ : Φ → P(S), assigning to each p ∈ Φ a
   set ‖p‖S of states.
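
For concreteness, here is one possible Python encoding of this
definition (a sketch; the class shape and field names are our own
choices, reused in the later snippets):

from dataclasses import dataclass, field

@dataclass
class KripkeModel:
    worlds: set                              # the set S of worlds
    arrows: dict                             # agent a -> {(s, t) : s ->a t}
    val: dict = field(default_factory=dict)  # atom p -> its set of worlds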
The valuation is also called a truth map. It is meant to
express the factual content of a given world, while the
arrows →a express the agents' uncertainty between various
worlds.

A Kripke model is called a state model whenever we think
of its “worlds” as possible states. In this case, the elements
p ∈ Φ are called atomic sentences, meant to represent
basic “ontic” (non-epistemic) facts, which may hold or
not at a given state.

Write s |=S ϕ for the satisfaction relation: ϕ is true at
world s in model S.
Modalities
For every sentence ϕ, we can define a sentence 2a ϕ by
(universally) quantifying over accessible worlds:

   s |=S 2a ϕ   iff   t |=S ϕ for all t such that s →a t.

2a ϕ may be interpreted as knowledge (in which case we
use the notation Ka ϕ instead) or belief (in which case we
use Ba ϕ instead), depending on the context.

Its existential dual

   3a ϕ := ¬2a ¬ϕ

denotes a sense of “epistemic/doxastic possibility”.
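
Against the KripkeModel sketch above, evaluating 2a ϕ is a one-liner
(again with ϕ assumed to be a predicate on worlds):

def box(model, a, phi, s):
    # s |= [a]phi iff phi holds at every t with s ->a t
    return all(phi(t) for (u, t) in model.arrows[a] if u == s)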
“Common” Modalities
The sentence C2ϕ is obtained by quantifying over all
worlds that are accessible by any concatenation of arrows:

   s |=S C2ϕ   iff   t |=S ϕ for every t and every finite chain
   (of length n ≥ 0) of the form s = s0 →a1 s1 →a2 s2 · · · →an sn = t.

C2ϕ may be interpreted as common knowledge (in
which case we use the notation Ckϕ instead) or common
belief (in which case we use Cbϕ instead), depending on
the context.
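
Since C2ϕ quantifies over all worlds reachable by finite chains of
arrows (of any agents, length 0 included), it can be checked by a
plain graph search; a sketch over the same encoding:

from collections import deque

def common_box(model, phi, s):
    # phi must hold at s and at every world reachable from s
    # by finitely many arrows belonging to any agents.
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if not phi(u):
            return False
        for rel in model.arrows.values():
            for (v, t) in rel:
                if v == u and t not in seen:
                    seen.add(t)
                    queue.append(t)
    return True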
Doxastic Models
A doxastic model (or KD45-model) is a Φ-Kripke model
satisfying the following properties:

• (D) Seriality: for every s there exists some t such
  that s →a t;
• (4) Transitivity: if s →a t and t →a w, then s →a w;
• (5) Euclideanness: if s →a t and s →a w, then t →a w.

In a doxastic model, 2a is interpreted as belief, and
denoted by Ba.
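
The three conditions are easy to test mechanically; a sketch, for one
agent's relation R given as a set of pairs over the set S of worlds:

def is_serial(S, R):
    return all(any((s, t) in R for t in S) for s in S)

def is_transitive(S, R):
    return all((s, w) in R for (s, t) in R for (u, w) in R if u == t)

def is_euclidean(S, R):
    return all((t, w) in R for (s, t) in R for (u, w) in R if u == s)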
EXERCISE
The following are valid in every doxastic model:
1. Consistency of Beliefs:
¬Ba (ϕ ∧ ¬ϕ)
2. Positive Introspection of Beliefs:
Ba ϕ ⇒ Ba Ba ϕ
3. Negative Introspection of Beliefs:
¬Ba ϕ ⇒ Ba ¬Ba ϕ
Epistemic Models
An epistemic model (or S5-model) is a Kripke model in
which all the accessibility relations are equivalence
relations, i.e. reflexive, transitive and symmetric (or
equivalently: reflexive, transitive and Euclidean).
In an epistemic model, 2a is interpreted as knowledge,
and denoted by Ka .
EXERCISE
The following are valid in every epistemic model:
1. Veracity of Knowledge:
Ka ϕ ⇒ ϕ
2. Positive Introspection of Knowledge:
Ka ϕ ⇒ Ka Ka ϕ
3. Negative Introspection of Knowledge:
¬Ka ϕ ⇒ Ka ¬Ka ϕ
S4 Models for weak types of knowledge
Many philosophers deny that knowledge is introspective,
and in particular deny that it is negatively
introspective. Both common usage and Platonic dialogues
suggest that people may believe they know things that
they don’t actually know.
An S4-model for knowledge is a Kripke model satisfying
only reflexivity and transitivity (but not necessarily
symmetry or Euclideanness). This models a
weaker notion of “knowledge”, one that is truthful and
positively introspective, but not necessarily negatively
introspective.
1.3. Logics of public and private announcements
PAL (the logic of public announcements) was first
formalized (including Reduction Laws) by Plaza (1989) and
independently by Gerbrandy and Groeneveld (1997).
The problem of completely axiomatizing PAL in the
presence of the common knowledge operator was
first solved by Baltag, Moss and Solecki (1998).
A logic for “secret (fully private) announcements”
was first proposed by Gerbrandy (1999).
A logic for “private, but legal, announcements”
(what we will call “fair-game announcements”) was
developed by H. van Ditmarsch (2000).
Scenario 2: The coin revealed
The referee C opens his palm and shows the face of the
coin to everybody (to the public, composed of A and B,
but also to himself): they all see it’s Heads up, and they
all see that the others see it etc.
So this is a “public announcement” that the coin lies
Heads up.
We denote this event by !H. Intuitively, after the
announcement, we have common knowledge of H, so the
model of the new situation is:
   (H)
Public Announcements are (Joint) Updates!
But this is just the result of updating with H, as defined
above: deleting all the non-H-worlds.
So, in the multi-agent case, updating captures public
announcements.
From now on, we denote by !ϕ the operation of deleting the
non-ϕ worlds, and call it public announcement with ϕ,
or joint update with ϕ.
Scenario 3: ’Legal’ Private Viewing
Instead of Scenario 2: in front of everybody, the referee (c)
uncovers the coin, so that (they all see that) he, and only
he, can see the upper face. This changes the initial
model to
   (H) <--a,b--> (T)   (loops: a,b,c at both worlds)
Now, c knows the real state. E.g. if it’s Heads, he knows
it, and disregards the possibility of Tails. A and B don’t
know the real state, but they know that C knows it. C’s
viewing of coin is a ”legal”, non-deceitful action, although a
Fair-Game Announcements
Equivalently: in front of everybody, an announcement of
the upper face of the coin is made, but in such a way that
(it is common knowledge that) only c hears it.
Such announcements (first modeled by H. van Ditmarsch)
are called fair-game announcements; they can be
thought of as “legal moves” in a fair game: nobody is
cheating, all players are aware of the possibility of this
move, but only some of the players (usually the one who
makes the move) can see the actual move. The others know
the range of possible moves at that moment, and they know
that the “insider” knows his move, but they don’t
Scenario 4: Cheating
Suppose that, after Scenario 1, the referee c has taken a
peek at the coin, before covering it. Nobody has
noticed this. Indeed, let’s assume that c knows that a
and b did not suspect anything.
This is an instance of cheating: a private viewing which is
”illegal”, in the sense that it is deceitful for a and b. Now, a
and b think that nobody knows on which side the coin is
lying. But they are wrong!
The Model after Cheating
              (H)*   (loop: c)
          a,b /    \ a,b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
We indicated the real world here. In the actual world
(above), a and b think that the only possibilities are the
worlds below. That is, they do not even consider the “real”
world as a possibility.
Scenario 5: Secret Communication
After cheating (Scenario 4), c engages in another ”illegal”
action: he secretly sends an email to his friend a,
informing her that the coin is Heads up. Suppose the
delivery and the secrecy of the message are guaranteed: so
a and c have common knowledge that H, and that b doesn’t
know they know this.
Indeed, b is completely fooled: he doesn’t suspect that c
could have taken a peek, nor that he could have been
engaged in secret communication.
The model is
              (H)*   (loops: a,c)
            b /    \ b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
Private Announcements
Both of the above actions were examples of completely
private announcements
!G ϕ
of a sentence ϕ to a group G of agents: in the first case
G = {c}, in the second case G = {a, c}.
The “insiders” (in G) know what’s going on, the
“outsiders” don’t suspect anything.
Scenario 5’: Wiretapping?
In Scenario 5′, everything goes on as in Scenario 5, except
that in the meantime b is secretly breaking into c's
email account (or wiretapping his phone) and reading c's
secret message. Nobody suspects this illegal attack on c's
privacy. So both c and a think their secret communication
is really secret and unsuspected by b: the deceivers are
deceived.
What is the model of the situation after this action?!
Things are getting rather complicated!
Scenario 6
This starts right after Scenario 3, when it was common
knowledge that c knew the face. c attempts to send a secret
message to a announcing that H is the case. c is convinced
the communication channel is fully secure and reliable;
moreover, he thinks that b doesn't even suspect this secret
communication is going on. But, in fact, unknown and
unsuspected by c, the message is intercepted, stopped and
read by b. As a result, it never makes it to a, and in fact a
never knows or suspects any of this. As for b, he knows all
of the above: not only does he now know the message, but he
knows that he has “fooled” everybody, in the way described
above.
The Update Problem
We need to find a general method to solve all the above
problems, i.e. to compute all these different kinds of
updates.
1.4. “Standard DEL”
• studies the multi-agent information flow of “hard
information”: unrevisable, irrevocable, absolutely
certain, fully introspective “knowledge”;
• gives an answer to the Update Problem, based on the
BMS (Baltag, Moss and Solecki) setting: logics of
epistemic actions;
• it arose from generalizing previous work on logics for
public/private announcements.
• this dynamics is essentially monotonic (no belief
revision!), though it can model very complex forms of
communication.
Models for ‘Events’
Until now, our Kripke models capture only epistemic
situations, i.e. they only contain static information: they are
all state models. We can thus represent the result of each of
our scenarios, but not what is actually going on. Our
scenarios involve various types of changes that may affect
agents’ beliefs or state of knowledge: a public
announcement, a ’legal’ (non-deceitful) act of private
learning, ’illegal’ (unsuspected) private learning etc.
We now want to use Kripke models to represent such types
of epistemic events, in a way that is similar to the
representations we have for epistemic states.
Event Models
An event model (or “action model”)

   Σ = (Σ, (→a)a∈A , pre)

is just like a Kripke model, except that its elements are
now called actions (or “simple events”), and instead of the
valuation we have a precondition map pre, associating a
sentence preσ to each action σ.
Epistemic/Doxastic Event Models
An event model is epistemic, or respectively a doxastic,
event model if it satisfies the S5, or respectively the KD45,
conditions.
Interpretation
We think of the simple events σ ∈ Σ as deterministic actions
of a particularly simple kind: they do not change the
“facts” of the world, but only the agents' beliefs. In other
words, they are “purely epistemic” actions.
For σ ∈ Σ, we interpret preσ as giving the precondition of
the action σ: this is a sentence that is true in a world iff σ
can be performed. In a sense, preσ gives the implicit
information carried by σ.
Finally, the accessibility relations express the agents’
knowledge/beliefs about the current action taking
place.
The update product
Given a state model S = (S, (→a)a∈A , ‖·‖) and an event
model Σ = (Σ, (→a)a∈A , pre), we define their update product

   S ⊗ Σ = (S ⊗ Σ, (→a)a∈A , ‖·‖)

to be a new state model, given by:
1. S ⊗ Σ = {(s, σ) ∈ S × Σ : s |=S preσ };
2. (s, σ) →a (s′, σ′) iff s →a s′ and σ →a σ′;
3. ‖p‖S⊗Σ = {(s, σ) ∈ S ⊗ Σ : s ∈ ‖p‖S }.
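
The three clauses transcribe directly into Python, reusing the
KripkeModel sketch from earlier; sat(M, s, sentence) is an assumed
satisfaction checker, since we have not fixed a concrete sentence
syntax:

def product_update(S, E, sat):
    # E is an event model: E.worlds are the actions and E.pre maps each
    # action to its precondition sentence.
    # 1. Surviving pairs: (s, sigma) with s satisfying sigma's precondition.
    worlds = {(s, e) for s in S.worlds for e in E.worlds if sat(S, s, E.pre[e])}
    # 2. (s, e) ->a (s', e')  iff  s ->a s' and e ->a e'.
    arrows = {a: {(p, q) for p in worlds for q in worlds
                  if (p[0], q[0]) in S.arrows[a] and (p[1], q[1]) in E.arrows[a]}
              for a in S.arrows}
    # 3. Atomic facts are inherited from the input state.
    val = {p: {(s, e) for (s, e) in worlds if s in S.val[p]} for p in S.val}
    return KripkeModel(worlds, arrows, val)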
Product of Pointed Models
As before, we can consider pointed event models, if we
want to specify the actual event taking place.
Naturally, if initially the actual state was s and then the
actual event is σ, then the actual output-state is (s, σ).
Interpretation
The product arrows encode the idea that: two
output-states are indistinguishable iff they are the
result of indistinguishable actions performed on
indistinguishable input-states.
This comprises two intuitions:
1. “No Miracles”: knowledge can only be gained from (the
epistemic appearance of) actions;
2. “Perfect Recall”: once gained, knowledge is never lost.
The fact that the valuation is the same as on the input-state
tells us that these actions are purely epistemic.
Examples: Public Announcement
The event model Σ!ϕ for public announcement !ϕ consists
of a single action, with precondition ϕ and reflexive arrows:
   (ϕ)*   (loops: all agents a, b, c, ...)
EXERCISE: Check that, for every state model S, S ⊗ Σ!ϕ
is indeed the result of deleting all non-ϕ worlds from S.
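
As a hedged sanity check of this exercise with the sketches above
(preconditions are encoded as atom labels, with "true" as the
always-true sentence — both our own ad-hoc conventions):

# Scenario 1's state model: worlds H, T; total relations for a, b, c.
S = KripkeModel(worlds={"H", "T"},
                arrows={a: {(s, t) for s in "HT" for t in "HT"} for a in "abc"},
                val={"H": {"H"}})
sat = lambda M, s, p: p == "true" or s in M.val.get(p, set())

# Public announcement !H: one action, reflexive for every agent, precondition H.
Pub = KripkeModel(worlds={"!H"}, arrows={a: {("!H", "!H")} for a in "abc"})
Pub.pre = {"!H": "H"}

out = product_update(S, Pub, sat)
print(out.worlds)   # {('H', '!H')}: exactly the non-H world was deleted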
More Examples: Taking a Peek
The action in Scenario 4: c takes a peek at the coin and
sees that Heads is up, without anybody noticing.
   (H)* --a,b--> (true)   (loop: c at (H)*; loops: a,b,c at (true))
There are two actions in this model: the real event (on the
left) is the cheating action of c “taking a peek”. The
action on the right is the apparent action skip, having the
tautological sentence true as its precondition: this is the
action in which nothing happens. This is what the
outsiders (a and b) think is going on: nothing, really.
The Product Update
We can now check that the product of
   (H)* <--a,b,c--> (T)   (loops: a,b,c at both worlds)
and
   (H)* --a,b--> (true)   (loop: c at (H)*; loops: a,b,c at (true))
is indeed what it intuitively should be:
              (H)*   (loop: c)
          a,b /    \ a,b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
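
The same machinery reproduces this computation; a sketch, with the
state model and sat redefined so the snippet stands alone:

S = KripkeModel(worlds={"H", "T"},
                arrows={a: {(s, t) for s in "HT" for t in "HT"} for a in "abc"},
                val={"H": {"H"}})
sat = lambda M, s, p: p == "true" or s in M.val.get(p, set())

# The "peek" event: the real action peek (precondition H, only c can
# tell it apart from skip) and the apparent action skip (precondition true).
Peek = KripkeModel(worlds={"peek", "skip"},
                   arrows={"a": {("peek", "skip"), ("skip", "skip")},
                           "b": {("peek", "skip"), ("skip", "skip")},
                           "c": {("peek", "peek"), ("skip", "skip")}})
Peek.pre = {"peek": "H", "skip": "true"}

out = product_update(S, Peek, sat)
print(sorted(out.worlds))
# [('H', 'peek'), ('H', 'skip'), ('T', 'skip')]: the three worlds of the
# cheating model above; no a- or b-arrow points to the real world ('H', 'peek').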
Private Announcements
More generally, a fully private announcement !G ϕ of ϕ
to a subgroup G is described by the action on the left in
the event model
   (ϕ)* --A\G--> (true)   (loops: G at (ϕ)*; loops: A at (true))
This subsumes both taking a peek (Scenario 4) and the
secret communication in Scenario 5.
Fair-Game Announcements
The following event model represents the situation in which
it is common knowledge that an agent c privately learns
whether ϕ or ¬ϕ is the case:
   (ϕ)* <--A\{c}--> (¬ϕ)   (loops: A at both actions)
This is a “fair-game announcement” Fairc ϕ.
The case ϕ := H represents the action in Scenario 3 (“legal
viewing” of the coin by c).
Solving Scenario 5’: Wiretapping
Recall Scenario 5′: the supposedly secret message from c to
a is secretly intercepted by b. This is an instance of a
private announcement with (secret) interception by a group
of outsiders.
   (H)* --a,c--> (H)   (loop: b at (H)*; loops: a,c at the middle (H))
                  |
                  b
                  v
               (true)   (loops: a,b,c)
Dynamic Modalities
For any action σ ∈ Σ, we can consider the corresponding
dynamic modality [σ]ϕ. This is a property of the original
model, expressing the fact that, if action σ happens, then ϕ
will come to be true after that.
We can easily define the epistemic proposition [σ]ϕ by:

   s |=S [σ]ϕ   iff   (s, σ) ∈ S ⊗ Σ implies (s, σ) |=S⊗Σ ϕ.
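
This clause transcribes directly over the earlier sketches (with ϕ now
taken to be a predicate on the updated model and a world in it —
again our own encoding):

def dyn_box(S, E, sigma, phi, s, sat):
    # [sigma]phi at s: vacuously true if sigma cannot happen at s;
    # otherwise phi must hold at the output-state (s, sigma).
    if not sat(S, s, E.pre[sigma]):
        return True
    return phi(product_update(S, E, sat), (s, sigma))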
Appearance
For any agent a and any action σ ∈ Σ, we define the
appearance of action σ to a, denoted by σa , as:
   σa = {σ′ ∈ Σ : σ →a σ′}

When σ happens, it appears to a as if one of the actions
σ′ ∈ σa is happening.
Examples
(!ϕ)a = {!ϕ} for all a ∈ A,
(!G ϕ)a = {!G ϕ} for all insiders a ∈ G,
(!G ϕ)a = {skip} = {!(true)} for all outsiders a ∉ G,
(Faira ϕ)a = {Faira ϕ},
(Faira ϕ)b = {Faira ϕ, Faira ¬ϕ} for b ≠ a.
Reduction Laws
If σ ∈ Σ is a simple epistemic action, then we have the
following properties (or “axioms”):
• Preservation of “Facts”: for all atomic p ∈ Φ,

   [σ]p = preσ ⇒ p

• Partial Functionality:

   [σ]¬ϕ = preσ ⇒ ¬[σ]ϕ

• Normality:

   [σ](ϕ ∧ ψ) = [σ]ϕ ∧ [σ]ψ
• “Action-Knowledge Axiom”:

   [σ]Ba ϕ = preσ ⇒ ⋀ { Ba [σ′]ϕ : σ′ ∈ σa }

This Action-Knowledge Axiom helps us to compute the
beliefs (or “knowledge”) of the agents after a program is
run, in terms of the initial beliefs and of the program's
appearance.
The Reduction laws allow us to eliminate dynamic
modalities from all sentences that do not contain common
knowledge (or common belief) operators.
Instances of Action-Knowledge Axiom
If a ∈ G, b ∉ G, c ≠ a, then:

   [!θ]Ba ϕ = θ ⇒ Ba [!θ]ϕ
   [!G θ]Ba ϕ = θ ⇒ Ba [!G θ]ϕ
   [!G θ]Bb ϕ = θ ⇒ Bb ϕ
   [Faira θ]Ba ϕ = θ ⇒ Ba [Faira θ]ϕ
   [Faira θ]Bc ϕ = θ ⇒ Bc ([Faira θ]ϕ ∧ [Faira ¬θ]ϕ)
EXERCISES
• Solve Scenario 5′, by computing the update product of
the state model obtained in Scenario 4 with the event
model on the previous slide.
• Solve Scenario 6 using the update product.
• Solve the Muddy Children puzzle, using repeated
updates. Encode the conclusion of the puzzle in a DEL
sentence. Prove this sentence using the Reduction
Laws, the S5 axioms and propositional logic.
• Do the same for the “Cheating Muddy Children”, using
repeated update products. Notice anything funny?
1.5. Cheating and the Failure of Standard DEL
Our update product works very well when dealing with
knowledge, or even with (possibly false) beliefs, as long as
these false beliefs are never contradicted by new
information. In the latter case, however, the update product
gives unintuitive results: if an agent a is confronted with
a contradiction between her previous beliefs and the new
information, she starts to believe the contradiction, and so
she starts to believe everything!

In terms of epistemic models, this means that in the
updated model, there are no a-arrows originating in the
real world.
Counterexample
Recall the state model immediately after taking a peek,
i.e. the output of Scenario 4:
              (H)*   (loop: c)
          a,b /    \ a,b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
So, now, c privately knows that the coin lies Heads up.
Counterexample Continued
In Scenario 5 (happening after the cheating in Scenario 4),
agent c sends a secret announcement to his friend a (who
has not suspected any cheating till now!), saying:
“I know that H ”.
This is a fully private communication !{a,c} ϕ (from c to a)
of the sentence

   ϕ := Kc H,

i.e. with event model
   (ϕ)* --b--> (true)   (loops: a,c at (ϕ)*; loops: a,b,c at (true))
Recall that, according to our intuition, the updated model
for the situation after this private announcement should be:
              (H)*   (loops: a,c)
            b /    \ b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
However, the update product gives us (something bisimilar
to):
              (H)*   (loop: c)
            b /    \ b
             v      v
   (H) <--a,b,c--> (T)   (loops: a,b,c at both lower worlds)
There are no surviving a-arrows originating in the real
world. According to our semantics, a will believe everything
after this communication: encountering a contradiction,
agent a simply goes crazy!
The Belief Revision Problem
Fixing this problem requires modifying the update product
by incorporating ideas from Belief Revision Theory.