Logic & Knowledge Representation I
Foundations of Artificial Intelligence
Logic & Knowledge Representation
Introduction to Knowledge Representation
Knowledge-Based Agents
Logical Reasoning
Propositional Logic
Syntax and semantics
Proofs and derivations
First-Order Predicate Logic
Syntax and semantics
Proof Theory and the Notion of Derivation
Resolution Mechanism
Forward and Backward Chaining
Knowledge Representation
Intended role of knowledge representation in AI is to reduce problems
of intelligent action to search problems. --Ginsberg, 1993
An Analogy between AI Problems and Programming
Programming:
1. Devise an algorithm to solve the problem
2. Select a programming language in which the algorithm can be encoded
3. Capture the algorithm in a program
4. Run the program
Artificial Intelligence:
1. Identify the knowledge needed to solve the problem
2. Select a language in which the knowledge can be represented
3. Write down the knowledge in the language
4. Use the consequences of the knowledge to solve the problem
It is the final step that usually involves search
Logical Reasoning
The goal is to find a way to
state knowledge explicitly
draw conclusions from the stated knowledge
Logic
A "logic" is a mathematical notation (a language) for stating knowledge
The main alternative to logic is "natural language" i.e. English, Swahili, etc.
As in natural language the fundamental unit is a “sentence” (or a statement)
Syntax and Semantics
Logical inference
Soundness and Completeness
Knowledge-Based Agent Architecture
Recall the simple reflex agent
This agent keeps track of the
state of the external world using
its "update" function.
loop forever
  Input percepts
  state ← Update-State(state, percept)
  rule ← Rule-Match(state, rules)
  action ← Rule-Action[rule]
  Output action
  state ← Update-State(state, action)
end
A knowledge-based agent represents the state of the world using a set
of sentences called a knowledge base.
loop forever
  Input percepts
  KB ← tell(KB, make-sentence(percept))
  action ← ask(KB, action-query)
  Output action
  KB ← tell(KB, make-sentence(action))
end
At each time instant, whatever the agent currently perceives is stated as a
sentence, e.g. "I am hungry".
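A minimal sketch of this loop in Python (not from the slides; the KB class and its tell/ask methods are hypothetical placeholders, not a specific library API):

class KB:
    """A toy knowledge base holding a list of sentences."""
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        # Add a new sentence (a percept or a performed action) to the KB.
        self.sentences.append(sentence)

    def ask(self, query):
        # A real agent would run an inference procedure over self.sentences;
        # this placeholder just returns a default action.
        return "no-op"

def agent_step(kb, percept):
    kb.tell(("percept", percept))         # TELL the KB what was perceived
    action = kb.ask("which action now?")  # ASK the KB which action to take
    kb.tell(("action", action))           # TELL the KB the action that was chosen
    return action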
“Tell” and “Ask” Operations
There are two fundamental operations on a knowledge base:
"tell" it a new sentence
"ask" it a query
These are NOT simple operations. For example:
the "tell" operation may need to deal with the new sentence contradicting a
sentence already in the knowledge base
the "ask" operation must be able to answer "wh" queries like "which action
should I take now?" as well as yes/no queries
there may be uncertainty involved in the result of queries
Fundamental Requirements
the “ask” operation should give an answer that follows from the knowledge
base (i.e., what has been told)
it is the inference mechanism that determines what follows from the
knowledge base
Inference and Entailment
The knowledge representation language provides a declarative
representation of real-world objects and their relationships
[Diagram: within the representation, sentences entail other sentences; semantics (interpretation) maps sentences to facts in the real world, where facts follow from facts.]
Entailment
KB entails a sentence s: KB ⊨ s
KB derives (proves) a sentence s: KB ⊢ s
Soundness and Completeness
Soundness: KB ⊢ s ⇒ KB ⊨ s, for all s
Completeness: KB ⊨ s ⇒ KB ⊢ s, for all s
Validity: true under all interpretations
Satisfiability: true under some interpretation, i.e., there is at least one model
Propositional Logic: Syntax
Sentences
represented by propositional symbols (e.g., P, Q, R, S, etc.)
logical constants: True, False
Connectives
¬, ∧, ∨, ⇒, ⇔
Only really need ¬, ∧, ∨
Examples:
P ∧ (Q ∨ R)
P ∨ (Q ∧ ¬Q) ⇒ (R ∧ ¬P)
(¬P ∨ Q) ⇔ (Q ∨ ¬P)
(¬P ∧ Q) ∨ (P ∧ ¬Q)
Propositional Logic: Semantics
In propositional logic, the semantics of connectives are specified by truth tables:
P  Q  |  ¬P   P∧Q   P∨Q   P⇒Q   P⇔Q
F  F  |   T    F     F     T     T
F  T  |   T    F     T     T     F
T  F  |   F    F     T     F     F
T  T  |   F    T     T     T     T

Truth tables can also be used to determine the validity of sentences:

P  Q  |  ¬P   ¬P∨Q   P⇒Q   (P⇒Q)⇔(¬P∨Q)
F  F  |   T     T     T         T
F  T  |   T     T     T         T
T  F  |   F     F     F         T
T  T  |   F     T     T         T
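The validity check in the second table can be reproduced mechanically; the following small Python sketch (not part of the slides) enumerates all truth assignments and rebuilds that table:

from itertools import product

def implies(a, b):
    return (not a) or b

# Each row: P, Q, ~P, ~P v Q, P => Q, (P => Q) <=> (~P v Q)
for P, Q in product([False, True], repeat=2):
    print(P, Q, not P, (not P) or Q, implies(P, Q),
          implies(P, Q) == ((not P) or Q))
# The last value printed on every row is True, so the sentence is valid.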
Interpretations and Models
A world in which a sentence s is true under a particular
interpretation is called a model for s
Entailment is defined in terms of models:
a sentence s is entailed by KB if every model of KB is also a model of s
i.e., whenever KB is true, so is s
Models as mappings:
we can think of the models for a sentence s as those mappings (from variables to
truth values) which make s true
each such mapping is an interpretation; thus models of s are interpretations that
make s true
in propositional logic, each interpretation corresponds to a row of the truth table
for s, and models are those rows for which s has the value true
s is satisfiable if there is at least one model (i.e., one row that makes s true)
s is valid if all rows of the table make s true (s is a tautology)
s is unsatisfiable if it is false under all interpretations (s is inconsistent); alternatively,
s is inconsistent if there is a sentence t such that s entails both t and ¬t.
Some Useful Tautologies
Conversion between ⇒ and ∨:
  (P ⇒ Q) ⇔ (¬P ∨ Q)
  (¬P ⇒ Q) ⇔ (P ∨ Q)
  and more generally: ((P ∧ Q) ⇒ R) ⇔ (¬P ∨ ¬Q ∨ R)
DeMorgan’s Laws:
  ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
  ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
Distributivity:
  P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R)
  P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R)
Model Theoretic Definition of Semantics
Let F and G be Propositional Formulas, and M be any
interpretation
F ∧ G is true in M iff both F and G are true in M
F ∨ G is true in M iff at least one of F or G is true in M
¬F is true in M iff F is false in M
F ⇒ G is true in M iff either F is false in M or G is true in M
F ⇔ G is true in M iff both F and G are true in M or both are false in M
Venn diagram view of models:
[Diagram: the models of P and of Q drawn as overlapping regions; the models of P ∧ Q form the intersection, and everything except that intersection is where ¬(P ∧ Q) holds.]
Logical Equivalence
How do we show that two sentences are logically equivalent?
Sentences s and t are equivalent if they are true in exactly the same models
In propositional logic, interpretations correspond to truth-value assignments
(i.e., rows of the truth table)
models of s are those rows that make s True
check equivalence by examining all rows for s and t: s logically implies (entails) t,
if whenever s is True, so is t; s and t are equivalent, if they are True in exactly the
same rows (i.e., columns for s and t are identical). (enumeration method)
Alternatively (and in general), we can prove equivalence using model-theoretic arguments
Example: prove p ⇒ q is equivalent to ¬p ∨ q:
proof: let M be an interpretation in which ¬p ∨ q holds (i.e., M is a model for ¬p ∨ q).
Then by definition of the semantics for ∨, either ¬p is true in M or q is true in M. If ¬p
is true in M, then p is false in M (by def. of semantics for ¬). So, p ⇒ q is true in M
(by def. of semantics for ⇒). If q is true in M, then again p ⇒ q is true in M (by def.
of semantics for ⇒). Thus, M is also a model for p ⇒ q.
Next we need to show, in a similar way, that for a model M of p ⇒ q, M is also a
model of ¬p ∨ q.
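The same equivalence can also be checked by the enumeration method described above; a small sketch (not from the slides), where the truth table for ⇒ is written out explicitly and compared against ¬p ∨ q:

from itertools import product

# Truth table for p => q, written out explicitly.
IMPLIES = {(False, False): True, (False, True): True,
           (True, False): False, (True, True): True}

equivalent = all(IMPLIES[(p, q)] == ((not p) or q)
                 for p, q in product([False, True], repeat=2))
print(equivalent)  # True: p => q and ~p v q are true in exactly the same rows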
Propositional Inference: Enumeration Method
Let α ≡ A ∨ B and KB ≡ (A ∨ C) ∧ (B ∨ ¬C)
Does KB entail α?
check all possible models; α must be true whenever KB is true

A  B  C  |  A∨C   B∨¬C   KB   α ≡ A∨B
F  F  F  |   F     T     F      F
F  F  T  |   T     F     F      F
F  T  F  |   F     T     F      T
F  T  T  |   T     T     T      T
T  F  F  |   T     T     T      T
T  F  T  |   T     F     F      T
T  T  F  |   T     T     T      T
T  T  T  |   T     T     T      T

In every row where KB is true, α is also true, so KB ⊨ α.
Again, from a model theoretic point of view, we can also argue that for any
model M of KB, M is also a model of α.
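This model-checking procedure is easy to mechanize; a sketch (not from the slides) that enumerates all assignments to A, B, C and tests whether α holds in every model of KB:

from itertools import product

def kb(a, b, c):
    return (a or c) and (b or not c)   # KB = (A v C) ^ (B v ~C)

def alpha(a, b, c):
    return a or b                      # alpha = A v B

entails = all(alpha(a, b, c)
              for a, b, c in product([False, True], repeat=3)
              if kb(a, b, c))
print(entails)  # True: every model of KB is a model of alpha, so KB |= alpha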
Normal Forms
Other approaches to inference use syntactic operations on
sentences (often expressed in a standardized form)
Conjunctive Normal Form (CNF)
conjunction of disjunctions of literals (the disjunctions are called clauses)
E.g., (¬A ∨ B) ∧ (¬B ∨ C ∨ D)
Disjunctive Normal Form (DNF)
disjunction of conjunctions of literals (the conjunctions are called terms)
E.g., (¬A ∧ B) ∨ (¬B ∧ C) ∨ (C ∧ D ∧ ¬A)
Horn Form
conjunction of Horn clauses (clauses with at most 1 positive literal)
E.g., (¬A ∨ B) ∧ (¬C ∨ ¬B ∨ D)
often written as a set of implications: A ⇒ B and (C ∧ B) ⇒ D
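Putting a sentence into one of these normal forms can be automated; a hedged sketch (assuming the sympy library is available; not part of the slides) that converts a sentence to CNF:

from sympy import symbols
from sympy.logic.boolalg import And, Or, Implies, to_cnf

A, B, C, D = symbols("A B C D")
sentence = Implies(Or(A, B), And(C, D))   # (A v B) => (C ^ D)
print(to_cnf(sentence))
# A conjunction of clauses, e.g. (C | ~A) & (C | ~B) & (D | ~A) & (D | ~B)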
Inference Rules for Propositional Logic
(MP) Modus Ponens (Implication-Elimination): from α ⇒ β and α, infer β
(AI) And-Introduction: from α and β, infer α ∧ β
(OI) Or-Introduction: from α, infer α ∨ β
(AE) And-Elimination: from α ∧ β, infer α
(NE) Negation-Elimination: from ¬¬α, infer α
Inference Rules for Propositional Logic
(UR) Unit Resolution: from α ∨ β and ¬β, infer α
(R) General Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ
Notes:
Resolution is used with knowledge bases in CNF (or clausal form), and is
complete for propositional logic
Modus Ponens (the general form)
is complete for Horn knowledge bases, and can be used in both forward and
backward chaining.
Using Inference Rules
Given KB = {((¬A ∨ B) ⇒ C) ∧ (¬B ∨ ¬D), ¬A ∧ D}
prove (¬B ∨ D) ∧ C

1. (¬A ∨ B) ⇒ C        (using KB and AE rule)
2. ¬A ∧ D              (from KB)
3. ¬A                  (using 2 and AE rule)
4. ¬A ∨ B              (using 3 and OI rule)
5. C                   (using 1, 4, and MP)
6. ¬B ∨ ¬D             (using KB and AE rule)
7. D                   (using 2 and AE rule)
8. ¬B                  (using 6, 7, and UR rule)
9. ¬B ∨ D              (using 8 and OI rule)
10. (¬B ∨ D) ∧ C       (using 5, 9, and AI rule)

Note: in each of the steps in the proof we could have applied other rules to
derive new sentences, thus the inference problem is really a search problem:
initial state = KB
goal state = conclusion to be proved
operators = ?
Exercise: The Island of Knights & Knaves
We are on an island all of whose inhabitants are either knights or knaves
knights always tell the truth
knaves always lie
So, here are some facts we know about this world:
(1) says(A,S) /\ knave(A) => ~S
(2) says(A,S) /\ knight(A) => S
(3) ~knight(A) => knave(A)
(4) ~knave(A) => knight(A)
Problem:
you meet inhabitants A and B, and A tells you “at least one of us is a knave”
can you determine who is a knave and who is a knight?
Exercise: The Island of Knights & Knaves
Suppose A is a knave:
knave(A)
says(A, “knave(A) \/ knave(B)”)
by (1) and MP we can conclude: ~(knave(A) \/ knave(B))
by DeMorgan’s Law: ~knave(A) /\ ~knave(B)
by AE: ~knave(A)
this is a contradiction, so our assumption “knave(A)” must be false
therefore it must be the case that ~knave(A), which by MP and (4) results in
knight(A).
But, what is B?
we know from above that knight(A)
says(A, “knave(A) \/ knave(B)”)
by (2) and MP we conclude: knave(A) \/ knave(B)
but we know from above that ~knave(A)
so, by the resolution rule we conclude: knave(B).
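The same conclusion can be reached by brute-force model checking; a small Python sketch (not from the slides) that enumerates the four knight/knave assignments and keeps those consistent with A's statement:

from itertools import product

consistent = []
for a_is_knight, b_is_knight in product([True, False], repeat=2):
    statement = (not a_is_knight) or (not b_is_knight)  # "at least one of us is a knave"
    # A knight's statement must be true; a knave's statement must be false.
    if statement == a_is_knight:
        consistent.append((a_is_knight, b_is_knight))

print(consistent)  # [(True, False)]: A must be a knight and B a knave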
Exercise: The Island of Knights & Knaves
Problem 1:
you meet inhabitants A and B. A says: “We are both knaves.”
what are A and B?
Problem 2:
you meet inhabitants A, B, and C. You walk up to A and ask: "are you a
knight or a knave?" A gives an answer but you don't hear what she said.
B says: "A said she was a knave." C says: "don't believe B; he is lying.”
what are B and C?
can you tell something about A?
First-Order Predicate Logic
Constants
represent objects in real world
john, 0, 1, book, etc. (notation: a, b, c, …)
Functions
names for objects not individually identified (notation: f, g, h, …)
successor(1), sqrt(successor(3)), child_of(john, mary), f(a, g(b,c))
Predicates
represent relations in the real world (notation: P, Q, R, …)
likes(john, mary), x > y, valuable(gold)
special predicate for equality: =
Variables
placeholders for objects (notation: x, y, z, …)
Connectives and Quantifiers
¬, ∧, ∨, ⇒, ⇔, ∀, ∃
First-Order Predicate Logic
Atomic Sentences (atomic formulas)
predicate (term1, term2, …, termk)
where
term = function(term1, term2, …, termk) or constant, or variable
Compound Formulas
"n[number (n) natural (n) natural ( successor (n))]
"x , y[ grandp( x , y ) $z( parent ( x , z) parent ( z, y ))]
"x , y[ parent (mary , x ) parent (mary , y ) sibling ( x , y )]
Transformation to FOPC
Mary got good grades in courses CS101 and CS102
goodgrade(mary, cs101) ∧ goodgrade(mary, cs102)
John passed CS102
pass(john, cs102)
A student who gets good grades in a course passes that course
∀x,y [student(x) ∧ course(y) ∧ goodgrade(x, y) ⇒ pass(x, y)]
Students who pass a course are happy
∀x [student(x) ∧ ∃y [course(y) ∧ pass(x, y)] ⇒ happy(x)]
A student who is not happy hasn’t passed all his/her courses
∀x [student(x) ∧ ¬happy(x) ⇒ ∃y [course(y) ∧ ¬pass(x, y)]]
Only one student failed all the courses
∃x [student(x) ∧ ∀y [course(y) ⇒ ¬pass(x, y)] ∧
    ∀z [(student(z) ∧ ¬(x = z)) ⇒ ∃y [course(y) ∧ pass(z, y)]]]
Transformation to FOPC:
Dealing with Quantifiers
Usually use ⇒ with ∀:
e.g., ∀x human(x) ⇒ mortal(x)
says, all humans are mortal
but, ∀x human(x) ∧ mortal(x)
says, everything is human and mortal
Usually use ∧ with ∃:
e.g., ∃x bird(x) ∧ ¬flies(x)
says, there is a bird that does not fly
but, ∃x bird(x) ⇒ ¬flies(x)
is also true for anything that is not a bird
∀x∃y is not the same as ∃y∀x:
e.g., ∃x∀y loves(x, y)
says, there is someone who loves everyone
but, ∀y∃x loves(x, y)
says, everyone is loved by at least one person
Quantifiers
"can be thought of as “conjunction” over all objects in domain:
e.g., "x bird ( x )
can be interpreted as
bird (tweety ) bird ( sam) bird ( fred )
$can be thought of as “disjunction” over all objects in domain:
e.g., $x bird ( x )
can be interpreted as bird (tweety ) bird ( sam) bird ( fred )
Quantifier Duality
each can be expressed using the other
this is an application of DeMorgan’s laws
examples:
"x loves( x , tweety ) is equivalent to $xloves( x , tweety )
$x likes( x , broccoli ) is equivalent to "xlikes( x , broccoli )
Example: Axiomatizing the Knights and
Knaves Domain
"x inhabitant ( x) (knight ( x) knave( x))
"x inhabitant ( x) knight ( x) knave( x)
"x inhabitant ( x) knave( x) knight ( x)
"x"s knave( x) says ( x, s ) s
"x"s knight ( x) says ( x, s ) s
inhabitant ( A)
inhabitant ( B)
...
Question: can an inhabitant say “I am a knave”?
$x inhabitant ( x) says ( x," knave( x)")?
Interpretations & Models in FOPC
Definition: An interpretation is a mapping which assigns
objects in domain to constants in the language
functional relationships in domain to function symbols
relations to predicate symbols
usual logical relationships to connectives and quantifiers: ¬, ∧, ∨, ⇒, ⇔, ∀, ∃
Definition: Models
An interpretation M is a model for a set of sentences S, if every sentence in S is
true with respect to M (if S is a singleton {s}, then we say that M is a model for
s).
Notation: M ⊨ S
If there is a model M for S, then S is satisfiable
If S is true in every interpretation M (every interpretation is a model for S), then
S is valid
Interpretations & Models in FOPC
Example: s "x N ( x ) L( x , f ( x ))
where N, L are predicate symbols, and f a function symbol
interpretation 1
domain = positive integers
N(x) = “x is a natural number”
L(x,y) = “x is less than y”
f(x) = “predecessor of x” (i.e., x-1)
then s says: “any natural number is less than its predecessor” (of course this is
false, so this interpretation is not a model for s)
interpretation 2
domain = all people
N(x) = “x is a person”
L(x,y) = “x likes y”
f(x) = “mother of x”
then s says: “everyone likes his/her mother”
Models as Sets of Atomic Formulas
If we assume the language has no quantifiers and variables, then
models can be represented as sets of atomic formulas
note that we can eliminate quantifiers and variables by completely expanding
conjunctions of ground formulas (formulas without variables)
let A be the set of all ground atomic formulas in the language, then a model M
can be expressed as a subset of A (M ⊆ A)
for an atomic formula s, s ∈ M means M is a model of s; otherwise s is false in M
Example: Consider KB consisting of
{∀x bird(x) ⇒ flies(x), bird(tweety), bird(sam)}
if we assume that the named constants are the only objects in the domain, then
A = {bird(sam), bird(tweety), flies(sam), flies(tweety)}
then, M = {bird(tweety), bird(sam), flies(sam)} is a model for flies(sam),
∀x(bird(x)), ∃x(bird(x) ∧ flies(x)), but M is not a model for flies(tweety),
∀x(flies(x)), or ∃x(¬bird(x))
Note that if there is a function symbol in the language, then A is infinite
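This set-of-ground-atoms view is easy to experiment with; a small sketch (not from the slides) representing the example model M as a Python set and checking the sentences above against it:

domain = ["tweety", "sam"]
M = {("bird", "tweety"), ("bird", "sam"), ("flies", "sam")}

def holds(pred, obj):
    # An atomic formula is true in M exactly when it is a member of M.
    return (pred, obj) in M

print(holds("flies", "sam"))                                        # True
print(all(holds("bird", d) for d in domain))                        # forall x bird(x): True
print(any(holds("bird", d) and holds("flies", d) for d in domain))  # exists x (bird(x) ^ flies(x)): True
print(holds("flies", "tweety"))                                     # False
print(all(holds("flies", d) for d in domain))                       # forall x flies(x): False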
Semantics of FOPC Operators
Let F and G be FOPC Formulas, and M be any interpretation
F ∧ G is true in M iff both F and G are true in M
F ∨ G is true in M iff at least one of F or G is true in M
¬F is true in M iff F is false in M
F ⇒ G is true in M iff either F is false in M or G is true in M
F ⇔ G is true in M iff both F and G are true in M or both are false in M
So far this is the same as propositional; how about quantifiers:
"x F is true in M iff for any object d in the domain, F[d] is true in M, where
F[d] is the result of replacing every free occurrence of x in F with d
$x F is true in M iff for some object d in the domain, F[d] is true in M, where
F[d] is the result of replacing every free occurrence of x in F with d
Example: Again consider KB = {"x bird ( x ) flies( x ), bird (tweety ), bird ( sam)}
$x(bird(x) flies(x)) is entailed by KB, since bird(tweety) flies(tweety), is
true in every model of KB (taking d = tweety)
Proof Theory of FOPC
The rules of inference for propositional logic still apply
in the context of FOPC:
And-Introduction (AI)
And-Elimination (AE)
Or-Introduction (OI)
Negation-Elimination (NE)
Modus Ponens (MP)
The formula F is derivable (provable)
from KB, if:
1. F is already in KB (a fact or axiom)
2. F is the result of applying a rule of
inference to sentences derivable
from KB
In addition we have inference rules for quantifiers:

Universal Instantiation (UI): from ∀x F, infer F[t]
where t is a term replacing free occurrences of x in F (x must not occur in t)

Existential Instantiation (EI): from ∃x F, infer F[f(y)]
where f is a new function symbol, and y is a free variable (not quantified in F)
Universal / Existential Instantiation
Universal Instantiation (UI): from ∀x F, infer F[t]
where t is a term replacing free occurrences of x in F (x must not occur in t)
Example:
From ∀y(likes(jean, y)) we can infer: likes(jean, joe), likes(jean, mother_of(joe)), etc.
Existential Instantiation (EI): from ∃x F, infer F[f(y)]
where f is a new function symbol, and y is a free variable (not quantified in F)
Example:
Consider ∃y(likes(x, y)); we can infer: likes(x, f(x)), where f is a new function symbol
representing an object that satisfies ∃y(likes(x, y)) (f is called a Skolem function)
Note:
If there are no free variables in F, then we can use a new constant symbol (a function
with no arguments):
Consider ∃y∀x(likes(x, y)); we can infer: ∀x(likes(x, a)), where a is a new constant symbol
(a is called a Skolem constant)
Example of Derivation
Let KB = { parent(john, mary), parent(john, joe),
∀x∀y [ ∃z ( parent(z, x) ∧ parent(z, y) ) ⇒ sibling(x, y) ] }

1. ∀x∀y [ ∃z ( parent(z, x) ∧ parent(z, y) ) ⇒ sibling(x, y) ]      (from KB)
2. ∀y [ ∃z ( parent(z, mary) ∧ parent(z, y) ) ⇒ sibling(mary, y) ]   (1, UI)
3. ∃z ( parent(z, mary) ∧ parent(z, joe) ) ⇒ sibling(mary, joe)      (2, UI)
4. parent(john, mary) ∧ parent(john, joe) ⇒ sibling(mary, joe)       (3, EI)
5. parent(john, mary)                                                 (from KB)
6. parent(john, joe)                                                  (from KB)
7. parent(john, mary) ∧ parent(john, joe)                             (5, 6, AI)
8. sibling(mary, joe)                                                 (4, 7, MP)

This derivation shows that KB ⊢ sibling(mary, joe)
Soundness and Completeness of FOPC
Soundness of FOPC
given a set of sentences KB and a sentence s, then
KB ⊢ s implies KB ⊨ s
note that if s is derived from KB, but KB does not entail s, then at least one of
the inference rules used to derive s must have been unsound
Completeness of FOPC
given a set of sentences KB and a sentence s, then
KB ⊨ s implies KB ⊢ s
note that if s is entailed by KB, but we cannot derive s from KB, then our
inference system (set of inference rules) must be incomplete
However, note that entailment for FOPC is semi-decidable
Logical Reasoning Agents
Recall the general template for a knowledge-based agent
time ← 0
loop forever
  Input percepts
  KB ← tell(KB, make-sentence(percept))
  action ← ask(KB, action-query)
  Output action
  KB ← tell(KB, make-sentence(action))
  time ← time + 1
end
Water-Jug Problem:
• percepts may be in the form
Percept([x, y], t), where x, y represent
contents of the 4- and 3-gallon jugs and t
represents the current time instant
• actions may be of the form:
fill(4-gal), fill(3-gal), empty(4-gal),
empty(3-gal), dump(4-gal, 3-gal), etc.
• e.g., the agent tries to determine the
best action at time 7, by ASKing if
∃x Action(x, 7), which might give an
answer such as {x = fill(3-gal)}.
In the simple reflex agent, the KB might include rules that directly (or
indirectly) connect percepts with actions
e.g., Percept([x,y], t) ∧ (x+y ≤ 4) ∧ (y > 0) ⇒ Action(dump(3-gal, 4-gal), t)
However, for the agent to be able to reason about the results of its actions in a
reasonable manner, it must be able to specify a model of the world and how it
changes
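As a concrete illustration (hypothetical, not part of the course material), the reflex-style rule above can be encoded directly as a Python function mapping a percept to an action:

def action_for(percept, t):
    x, y = percept                     # contents of the 4-gal and 3-gal jugs
    # Percept([x,y], t) ^ (x + y <= 4) ^ (y > 0) => Action(dump(3-gal, 4-gal), t)
    if x + y <= 4 and y > 0:
        return ("dump(3-gal, 4-gal)", t)
    return ("no-op", t)

print(action_for([2, 1], 7))   # ('dump(3-gal, 4-gal)', 7)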
Next
Resolution Rule of Inference
Resolution provides a single complete rule of inference for first order predicate
calculus if used in conjunction with a refutation proof procedure (proof by
contradiction)
requires that formulas be written in clausal form
to prove that KB ⊨ α, show that KB ∧ ¬α is unsatisfiable
i.e., assume the contrary of α, and arrive at a contradiction
each step in the refutation procedure involves applying resolution to two clauses, in
order to get a new clause (until the empty clause is derived)
Forward and Backward Chaining
Forward Chaining: Start with KB, infer new consequences using inference rule(s), add new
consequences to KB, continue this process (possibly until a goal is reached); a sketch follows below
Backward Chaining: Start with goal to be proved, apply inference rules in a backward
manner to obtain premises, then try to solve for premises until known facts (already in KB)
are reached
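A minimal forward-chaining sketch over propositional Horn rules (an illustrative assumption on my part, not the course's own algorithm): repeatedly fire rules whose premises are all known until nothing new can be added.

def forward_chain(facts, rules):
    # facts: set of known symbols; rules: list of (premises, conclusion) pairs
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Example: from A and B, the Horn rules A ^ B => C and C => D yield C and then D.
print(forward_chain({"A", "B"}, [({"A", "B"}, "C"), ({"C"}, "D")]))
# {'A', 'B', 'C', 'D'} (set order may vary)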