Ch.2 Propositional Logic
Hantao Zhang
http://www.cs.uiowa.edu/~hzhang/c145
The University of Iowa
Department of Computer Science
Artificial Intelligence – p.1/46
Components of Propositional Logic
Logical constants: true (1) and false (0)
Propositional variables: X = {p1 , q2 , ...}, a set of
Boolean variables
Logical connectives: F = {true, false, ¬, ∧, ∨, →, ⇔, ...}
Propositional sentences: L(X, F ), expressions built
from X and F
Propositional interpretations: a mapping θ from X to
{true, false}
Logical evaluations: a process of applying a mapping θ
to sentences in L(X, F ) (to obtain a value in
{true, false})
Propositional Variables
Also called Boolean variables, 0-1 variables.
Every statement can be represented by a propositional
variable:
p1 = “It is sunny today”
p2 = “Tom went to school yesterday”
p3 = “f (x) = 0 has two solutions”
p4 = “point A is on line BC ”
p5 = “place a queen at position (1, 2) on an 8 by 8
chessboard”
...
Properties of Logical Connectives
∧ and ∨ are commutative
ϕ1 ∧ ϕ2 ≡ ϕ2 ∧ ϕ1
ϕ1 ∨ ϕ2 ≡ ϕ2 ∨ ϕ1
∧ and ∨ are associative
ϕ1 ∧ (ϕ2 ∧ ϕ3 ) ≡ (ϕ1 ∧ ϕ2 ) ∧ ϕ3
ϕ1 ∨ (ϕ2 ∨ ϕ3 ) ≡ (ϕ1 ∨ ϕ2 ) ∨ ϕ3
∧ and ∨ are mutually distributive
ϕ1 ∧ (ϕ2 ∨ ϕ3 ) ≡ (ϕ1 ∧ ϕ2 ) ∨ (ϕ1 ∧ ϕ3 )
ϕ1 ∨ (ϕ2 ∧ ϕ3 ) ≡ (ϕ1 ∨ ϕ2 ) ∧ (ϕ1 ∨ ϕ3 )
∧ and ∨ are related by ¬ (De Morgan’s Laws)
¬(ϕ1 ∧ ϕ2 ) ≡ ¬ϕ1 ∨ ¬ϕ2
¬(ϕ1 ∨ ϕ2 ) ≡ ¬ϕ1 ∧ ¬ϕ2
Properties of Logical Connectives
∧, →, and ⇔ are actually redundant:
ϕ1 ∧ ϕ2 ≡ ¬(¬ϕ1 ∨ ¬ϕ2 )
ϕ1 → ϕ2 ≡ ¬ϕ1 ∨ ϕ2
ϕ1 ⇔ ϕ2 ≡ (ϕ1 → ϕ2 ) ∧ (ϕ2 → ϕ1 )
We keep them all mainly for convenience.
Properties of Logical Connectives
Simplification Rules:
¬ vs true and false
¬true ≡ false
¬false ≡ true
∧ vs true and false
ϕ ∧ true ≡ ϕ
ϕ ∧ false ≡ false
ϕ ∧ ¬ϕ ≡ false
∨ vs true and false
ϕ ∨ true ≡ true
ϕ ∨ false ≡ ϕ
ϕ ∨ ¬ϕ ≡ true
Exercise
Use truth tables to prove all the logical equivalences
seen so far.
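These equivalences can also be verified mechanically by enumerating all interpretations, exactly as a truth table does. Below is a small Python sketch; the tuple encoding of sentences and all function names are illustrative, not from the slides:

```python
from itertools import product

# Sentences as nested tuples: a bare string is a propositional variable;
# ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g), ('iff', f, g).

def ev(f, theta):
    """Evaluate sentence f under interpretation theta (dict: var -> bool)."""
    if isinstance(f, str):
        return theta[f]
    op = f[0]
    if op == 'not':
        return not ev(f[1], theta)
    if op == 'and':
        return ev(f[1], theta) and ev(f[2], theta)
    if op == 'or':
        return ev(f[1], theta) or ev(f[2], theta)
    if op == 'imp':
        return (not ev(f[1], theta)) or ev(f[2], theta)
    if op == 'iff':
        return ev(f[1], theta) == ev(f[2], theta)
    raise ValueError('unknown connective: %r' % op)

def variables(f):
    """The set of propositional variables occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def equivalent(f, g):
    """Truth-table check: f and g agree under every interpretation."""
    vs = sorted(variables(f) | variables(g))
    return all(ev(f, dict(zip(vs, bits))) == ev(g, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# De Morgan: not(p and q)  ==  (not p) or (not q)
assert equivalent(('not', ('and', 'p', 'q')),
                  ('or', ('not', 'p'), ('not', 'q')))
```

Each call enumerates 2^n interpretations, which is fine for the small equivalences in this chapter.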
Negation Normal Form (NNF)
A propositional sentence is said to be in Negation Normal
Form if it contains no connectives other than ∨, ∧, ¬, and
if ¬(α) appears in the sentence, then α must be a
propositional variable.
¬p, p ∨ q, p ∨ (q ∧ ¬r) are in NNF.
p ⇔ q, ¬(p ∨ q) are not in NNF.
Negation Normal Form (NNF)
Every propositional sentence can be transformed into an
equivalent NNF.
ϕ1 ⇔ ϕ2 ≡ (ϕ1 → ϕ2 ) ∧ (ϕ2 → ϕ1 )
ϕ1 → ϕ2 ≡ ¬ϕ1 ∨ ϕ2
¬(ϕ1 ∧ ϕ2 ) ≡ ¬ϕ1 ∨ ¬ϕ2
¬(ϕ1 ∨ ϕ2 ) ≡ ¬ϕ1 ∧ ¬ϕ2
¬¬ϕ ≡ ϕ
¬true ≡ false
¬false ≡ true
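The rewrite rules above translate directly into a recursive procedure. A Python sketch, using an illustrative tuple encoding (strings are variables; ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g), ('iff', f, g)):

```python
def nnf(f):
    """Push negations inward with the equivalences above until only
    'and', 'or', and negations of variables remain."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'iff':                 # f1 <-> f2 == (f1 -> f2) and (f2 -> f1)
        return nnf(('and', ('imp', f[1], f[2]), ('imp', f[2], f[1])))
    if op == 'imp':                 # f1 -> f2 == not f1 or f2
        return nnf(('or', ('not', f[1]), f[2]))
    if op in ('and', 'or'):
        return (op, nnf(f[1]), nnf(f[2]))
    # op == 'not': inspect the negated subformula
    g = f[1]
    if isinstance(g, str):
        return f                    # a literal: already in NNF
    if g[0] == 'not':               # double negation
        return nnf(g[1])
    if g[0] == 'and':               # De Morgan
        return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
    if g[0] == 'or':
        return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
    return nnf(('not', nnf(g)))     # not(imp/iff): rewrite the inside first

assert nnf(('not', ('or', 'p', 'q'))) == ('and', ('not', 'p'), ('not', 'q'))
```

The result contains no connectives other than ∨, ∧, ¬, with ¬ applied only to variables, as the definition requires.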
Conjunctive Normal Form (CNF)
A propositional sentence φ is said to be in Conjunctive
Normal Form if φ is true, false, or a conjunction of αi ’s:
α1 ∧ α2 ∧ · · · ∧ αn ,
where each α is a clause (a disjunction of βj ’s):
α = (β1 ∨ β2 ∨ · · · ∨ βm ),
where each β is called a literal: either a variable or the
negation of a variable (β = p or β = ¬p).
¬p, p ∨ q, p ∧ (q ∨ ¬r) are in CNF.
p ∨ (q ∧ ¬r) is not in CNF.
Every CNF is an NNF, but the converse does not hold.
Existence of CNF
Every propositional sentence can be transformed into an
equivalent CNF: starting from its NNF, apply
(ϕ1 ∧ ϕ2 ) ∨ ϕ3 ≡ (ϕ1 ∨ ϕ3 ) ∧ (ϕ2 ∨ ϕ3 )
plus the Simplification Rules involving true and false.
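Applied bottom-up to an NNF, the distribution law produces clauses directly. A Python sketch (the clause representation and names are my own): a clause is a frozenset of literals, where 'p' is a variable and '-p' its negation.

```python
def cnf_clauses(f):
    """Turn an NNF sentence into a list of clauses (frozensets of literals),
    distributing 'or' over 'and'.  Assumes f contains only 'and', 'or',
    and negations applied to variables (illustrative tuple encoding)."""
    if isinstance(f, str):
        return [frozenset([f])]
    if f[0] == 'not':                       # f[1] is a variable in NNF
        return [frozenset(['-' + f[1]])]
    if f[0] == 'and':                       # conjunction: just concatenate
        return cnf_clauses(f[1]) + cnf_clauses(f[2])
    # f[0] == 'or': distribute each clause of one side over the other
    return [a | b for a in cnf_clauses(f[1]) for b in cnf_clauses(f[2])]

# p or (q and not r)  ==  (p or q) and (p or not r)
assert set(cnf_clauses(('or', 'p', ('and', 'q', ('not', 'r'))))) == \
       {frozenset({'p', 'q'}), frozenset({'p', '-r'})}
```

Note the 'or' case is exactly the distribution law above; in the worst case it multiplies clause counts, which is why the CNF of a sentence can be exponentially larger.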
Example: The 8-Queen Problem
Variables: qi,j for 1 ≤ i, j ≤ 8; qi,j is true iff a queen is
placed in the square at row i and column j.
Clauses:
Every row has a queen: For each 1 ≤ i ≤ 8,
qi,1 ∨ qi,2 ∨ qi,3 ∨ qi,4 ∨ qi,5 ∨ qi,6 ∨ qi,7 ∨ qi,8
Two queens from the same row cannot see each other:
¬qi,1 ∨ ¬qi,2 , ...
Two queens from the same column cannot see each other:
¬q1,j ∨ ¬q2,j , ...
Two queens on the same diagonal cannot see each other:
¬q1,1 ∨ ¬q2,2 , ...
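The clause set sketched above can be generated programmatically. In this Python sketch the variable and literal encodings are my own: a literal is (sign, i, j) with sign +1 or −1.

```python
from itertools import combinations

def queens_clauses(n=8):
    """Clause set for the n-queens problem as described above:
    variable (i, j) is true iff a queen sits at row i, column j (1-based)."""
    clauses = []
    # Every row has a queen.
    for i in range(1, n + 1):
        clauses.append([(+1, i, j) for j in range(1, n + 1)])
    # Two queens that see each other (same row, column, or diagonal)
    # cannot both be placed.
    cells = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
    for (i1, j1), (i2, j2) in combinations(cells, 2):
        if i1 == i2 or j1 == j2 or abs(i1 - i2) == abs(j1 - j2):
            clauses.append([(-1, i1, j1), (-1, i2, j2)])
    return clauses

# For n = 4 this yields exactly 80 clauses over 16 variables, the size of
# the DIMACS file shown near the end of these slides.
assert len(queens_clauses(4)) == 80
```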
Interpretations and Models
A (partial) interpretation is a mapping from variables to {0, 1}.
A complete interpretation is one that maps every variable to {0, 1}.
An interpretation θ can be extended to be a mapping from
propositional sentences to {0, 1} if it obeys the following rules:
θ(¬p) = 1 if and only if θ(p) = 0;
if θ(p) = 1 or θ(q) = 1 then θ(p ∨ q) = 1;
if θ(p) = 1 and θ(q) = 1 then θ(p ∧ q) = 1;
...
Computing this interpretation is called evaluation.
For any propositional sentence Φ, if there exists an interpretation θ
such that θ(Φ) = 1, then θ is a model of Φ and Φ is said to be
satisfiable.
If every complete interpretation is a model for Φ, then Φ is said to be
valid or a tautology.
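Evaluation, models, satisfiability and validity can all be checked by brute-force enumeration of complete interpretations. A minimal Python sketch over ∧, ∨, ¬ only (the tuple encoding and names are illustrative):

```python
from itertools import product

# Strings are variables; ('not', f), ('and', f, g), ('or', f, g).
OPS = {'not': lambda a: not a,
       'and': lambda a, b: a and b,
       'or':  lambda a, b: a or b}

def ev(f, theta):
    """Evaluate f under interpretation theta (dict: var -> bool)."""
    if isinstance(f, str):
        return theta[f]
    return OPS[f[0]](*(ev(g, theta) for g in f[1:]))

def vars_of(f):
    return {f} if isinstance(f, str) else set().union(*(vars_of(g) for g in f[1:]))

def models(f):
    """All complete interpretations (over f's variables) that satisfy f."""
    vs = sorted(vars_of(f))
    return [dict(zip(vs, bits))
            for bits in product([False, True], repeat=len(vs))
            if ev(f, dict(zip(vs, bits)))]

def satisfiable(f):
    return bool(models(f))            # some interpretation is a model

def valid(f):
    return len(models(f)) == 2 ** len(vars_of(f))   # every one is a model

assert valid(('or', 'p', ('not', 'p')))             # excluded middle
assert not satisfiable(('and', 'p', ('not', 'p')))  # contradiction
```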
Validity vs. Satisfiability
A sentence is
satisfiable if it is true in some interpretation,
valid if it is true in every possible interpretation.
Reasoning Tools for Propositional Logic:
A tool to prove a sentence is valid needs only to return
yes or no.
A tool to prove a sentence is satisfiable often returns an
interpretation under which the sentence is true.
A sentence φ is valid if and only if ¬φ is unsatisfiable.
Inference Systems
By convention, a set of sentences, Γ = {α1 , . . . , αm }, is
the same as a conjunction of these sentences, i.e.,
Γ = α1 ∧ . . . ∧ αm .
In practice, an inference system I for PL is a procedure
that given a set Γ = {α1 , . . . , αm } of sentences and a
sentence ϕ, may reply “yes”, “no”, or run forever.
If I replies “yes” on input (Γ, ϕ), we say that Γ derives ϕ
in I (equivalently: I derives ϕ from Γ, or ϕ is deduced
(derived) from Γ in I), and write
Γ ⊢I ϕ
Intuitively, I should be such that it replies “yes” on input
(Γ, ϕ) only if ϕ does in fact logically follow from Γ.
All These Fancy Symbols!
A∧B →C
is a sentence (an expression built from variables and
logical connectives) where → is a logical connective.
A ∧ B |= C is a mathematical abbreviation standing for
the statement: “every interpretation that makes A ∧ B
true, makes C also true.” We say that the sentence A ∧ B
entails C.
A ∧ B ⊢I C
is a mathematical abbreviation standing for the statement:
“I returns yes on input (A ∧ B, C)” [C is deduced from
A ∧ B in I].
All These Fancy Symbols!
In other words,
→ is a formal symbol of the logic, which is used by the
inference system.
|= is a shorthand for the entailment of formal sentences
that we use to talk about the meaning of formal
sentences.
⊢I is a shorthand for the inference procedure that we use
to talk about the output of the inference system I.
The formal symbol → and the shorthands |=, ⊢I are related.
All These Fancy Symbols!
The sentence ϕ1 → ϕ2 is valid (always true) if and only
if ϕ1 |= ϕ2 .
Example: A → (A ∨ B) is valid and A |= (A ∨ B)
      A      B      A ∨ B   A → (A ∨ B)
1.    false  false  false   true
2.    false  true   true    true
3.    true   false  true    true
4.    true   true   true    true
Soundness and Completeness
An inference system I is sound if it derives only
sentences that logically follow from a given set of
sentences:
if Γ ⊢I ϕ then Γ |= ϕ.
An inference system I is complete if it derives all
sentences that logically follow from a given set of
sentences:
if Γ |= ϕ then Γ ⊢I ϕ.
or, equivalently,
if Γ ⊬I ϕ then Γ ⊭ ϕ.
Inference in Propositional Logic
There are two (equivalent) types of inference systems of
Propositional Logic:
one based on truth tables (T T )
one based on derivation rules
Truth Tables The inference system T T is specified as
follows:
{α1 , . . . , αm } ⊢T T ϕ
iff
all the values in the truth table of
(α1 ∧ · · · ∧ αm ) → ϕ are true.
Inference by Truth Tables
The truth-tables-based inference system is sound:
α1 , . . . , αm ⊢T T ϕ
  implies  the truth table of (α1 ∧ · · · ∧ αm ) → ϕ is all true
  implies  (α1 ∧ · · · ∧ αm ) → ϕ is valid
  implies  (α1 ∧ · · · ∧ αm ) |= ϕ
  implies  α1 , . . . , αm |= ϕ
It is also complete (exercise: prove it).
Its time complexity is O(2^n ), where n is the number of propositional
variables in α1 , . . . , αm , ϕ.
We cannot hope to do better because a related, simpler problem
(determining the satisfiability of a sentence) is NP-complete.
However, the really hard cases of propositional inference, where
O(2^n ) time is actually needed, are somewhat rare.
Rule-Based Inference System
An inference system in Propositional Logic can also be
specified as a set R of inference (or derivation) rules.
Each rule is actually a pattern of the form premises/conclusion.
A rule applies to Γ and derives ϕ if
some of the sentences in Γ match with the premises
of the rule and
ϕ matches with the conclusion.
A rule is sound if the set of its premises entails its
conclusion.
Rule-Based Inference System
Inference Rules
Inference Rules (written premises / conclusion):
And-Introduction: α, β / α ∧ β
And-Elimination: α ∧ β / α  and  α ∧ β / β
Or-Introduction: α / α ∨ β  and  α / β ∨ α
Rule-Based Inference System
Implication-Elimination (aka Modus Ponens): α → β, α / β
Unit Resolution: α ∨ β, ¬β / α
Resolution: α ∨ β, ¬β ∨ γ / α ∨ γ
or, equivalently: ¬α → β, β → γ / ¬α → γ
Rule-Based Inference Systems
Double-Negation-Elimination: ¬¬α / α
false-Introduction: α ∧ ¬α / false
false-Elimination: false / β
Inference by Proof
We say there is a proof of ϕ from Γ in R if we can derive ϕ
by applying the rules of R repeatedly to Γ and its derived
sentences.
Example: a proof of P from {(P ∨ H) ∧ ¬H}
1. (P ∨ H) ∧ ¬H     by assumption
2. P ∨ H            by ∧-elimination applied to (1)
3. ¬H               by ∧-elimination applied to (1)
4. P                by unit resolution applied to (2), (3)
Inference by Proof
We can represent a proof more visually as a proof tree:
Example:
(P ∨ H) ∧ ¬H    (P ∨ H) ∧ ¬H
------------    ------------
   P ∨ H            ¬H
   ---------------------
            P
Rule-Based Inference System
More formally, there is a proof of ϕ from Γ in R if
1. ϕ ∈ Γ or,
2. there is a rule in R that applies to Γ and produces ϕ or,
3. there is a proof of each ϕ1 , . . . , ϕm from Γ in R and
a rule that applies to {ϕ1 , . . . , ϕm } and produces ϕ.
Then, the inference system R is specified as follows:
Γ ⊢R ϕ iff there is a proof of ϕ from Γ in R
Soundness of Rule-Based Inferences
R is sound if all of its rules are sound.
Example: the Resolution rule  α ∨ β, ¬β ∨ γ / α ∨ γ

      α      β      γ      ¬β     α∨β    ¬β∨γ   α∨γ
1.    false  false  false  true   false  true   false
2.    false  false  true   true   false  true   true
3.    false  true   false  false  true   false  false
4.    false  true   true   false  true   true   true
5.    true   false  false  true   true   true   true
6.    true   false  true   true   true   true   true
7.    true   true   false  false  true   false  true
8.    true   true   true   false  true   true   true

All the interpretations that make both α ∨ β and ¬β ∨ γ true (i.e., 4, 5, 6, 8)
make α ∨ γ also true.
The Rules of R
And-Introduction: α, β / α ∧ β
And-Elimination: α ∧ β / α  and  α ∧ β / β
Or-Introduction: α / α ∨ β  and  α / β ∨ α
Modus Ponens: α → β, α / β
Double-Negation-Elimination: ¬¬α / α
Unit Resolution: α ∨ β, ¬β / α
Resolution: α ∨ β, ¬β ∨ γ / α ∨ γ
false-Introduction: α ∧ ¬α / false
false-Elimination: false / β
One Rule Suffices!
Assumptions:
β, ¬β / false (false-Introduction) is a special case of
α ∨ β, ¬β / α (Unit Resolution) when α is false.
α ∨ β, ¬β / α (Unit Resolution) is a special case of
α ∨ β, ¬β ∨ γ / α ∨ γ (Resolution) when γ is false.
To prove that Γ → ϕ is valid:
Convert Γ ∧ ¬ϕ into a set of clauses.
Repeatedly apply the Resolution Rule to the clauses.
If false is derived, then Γ → ϕ is valid.
Resolution Proof
Example: A proof of P from {(P ∨ H) ∧ ¬H}
Convert ((P ∨ H) ∧ ¬H) ∧ ¬P into a set of clauses
(CNF):
{(1) (P ∨ H),
(2) ¬H,
(3) ¬P }
Apply the Resolution Rule:
1. (1) (P ∨ H) and (2) ¬H produces (4) P
2. (3) ¬P and (4) P produces false
Since false is produced, ((P ∨ H) ∧ ¬H) → P is valid.
Proof by Contradiction
Suppose ⊢R is sound.
Γ ∧ ¬ϕ ⊢R false
  implies  (Γ ∧ ¬ϕ) → false is valid
  implies  ¬(Γ ∧ ¬ϕ) ∨ false is valid
  implies  ¬Γ ∨ ϕ is valid
  implies  Γ → ϕ is valid.
An Example
Theorem: If (p ⇔ q) ⇔ r and p ⇔ r, then q.
Obtain clauses from the premises and the negation of the conclusion.
From (p ⇔ q) ⇔ r, we obtain:
(1) p ∨ q ∨ r, (2) ¬p ∨ ¬q ∨ r,
(3) p ∨ ¬q ∨ ¬r,
(4) ¬p ∨ q ∨ ¬r.
From p ⇔ r we obtain:
(5) ¬p ∨ r, (6) p ∨ ¬r
From ¬q, we obtain:
(7) ¬q.
Apply Resolution to the clauses:
From (7), (1): (8) p ∨ r
From (7), (4): (9) ¬p ∨ ¬r
From (5), (8): (10) r
From (10), (6): (11) p
From (10), (9): (12) ¬p
From (11), (12): (13) false
How to get CNF from (p ⇔ q) ⇔ r
(p ⇔ q) ⇔ r ≡ ((p ⇔ q) → r) ∧ (r → (p ⇔ q))

(p ⇔ q) → r ≡ ¬(p ⇔ q) ∨ r
            ≡ (¬p ⇔ q) ∨ r
            ≡ ((¬p → q) ∧ (q → ¬p)) ∨ r
            ≡ ((p ∨ q) ∧ (¬q ∨ ¬p)) ∨ r
            ≡ (p ∨ q ∨ r) ∧ (¬q ∨ ¬p ∨ r)

r → (p ⇔ q) ≡ ¬r ∨ (p ⇔ q)
            ≡ ¬r ∨ ((p → q) ∧ (q → p))
            ≡ ¬r ∨ ((¬p ∨ q) ∧ (¬q ∨ p))
            ≡ (¬p ∨ q ∨ ¬r) ∧ (¬q ∨ p ∨ ¬r)
Misuse of |= in our book!
Theorem 2.3 (page 23): Suppose we have an argument
consisting of the premises A1 , A2 , ..., An and the conclusion
B . Then A1 , A2 , ..., An |= B if and only if
A1 ∧ A2 ∧ · · · ∧ An ≡i B .
From the above theorem, |= is the same as ≡i .
In Table 2.2 (page 26), an inference rule also uses the
symbol |=, e.g., A, B |= A ∧ B . It should be A, B ⊢ A ∧ B .
The relation is that if α ⊢ β is sound, then α |= β .
An Example
And now we come to the great question as to why. Robbery has
not been the object of the murder, for nothing was taken. Was it
politics, then, or was it a woman? That is the question which
confronted me. I was inclined from the first to the latter
supposition. Political assassins are only too glad to do their work
and fly. This murder had, on the contrary, been done most
deliberately, and the perpetrator had left his tracks all over the
room, showing he had been there all the time.
P: Robbery was the reason for the murder.
Q: Something was taken.
R: Politics was the reason for the murder.
S: A woman was the reason for the murder.
T: The murderer left tracks all over the room.
Resolution Procedure
To show that Γ implies φ:
1. Let S = CNF(Γ ∧ ¬φ)
2. Pick two clauses c1 , c2 from S that have not yet been
resolved with each other; if no such pair exists, return false
3. c = Resolve(c1 , c2 )
4. If c is the empty clause (false), return true
5. S = S ∪ {c}
6. Goto 2.
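The procedure above can be sketched in Python by saturating the clause set: compute all resolvents, answer true when the empty clause appears, and false once no new clause can be added. The clause representation (frozensets of string literals, '-p' for ¬p) is my own:

```python
def neg(lit):
    """Complement of a literal: 'p' <-> '-p'."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    return [(c1 - {lit}) | (c2 - {neg(lit)}) for lit in c1 if neg(lit) in c2]

def refute(clauses):
    """Saturation-based resolution: True iff the empty clause is derivable,
    i.e. the input clause set is unsatisfiable.  If a full pass adds no new
    clause, the set is saturated and we report failure instead of looping."""
    s = set(frozenset(c) for c in clauses)
    while True:
        new = set()
        for c1 in s:
            for c2 in s:
                for r in resolvents(c1, c2):
                    if not r:
                        return True     # empty clause derived: contradiction
                    new.add(r)
        if new <= s:
            return False                # saturated without a contradiction
        s |= new

# ((P or H) and not H) -> P is valid, so its negation is unsatisfiable:
assert refute([{'P', 'H'}, {'-H'}, {'-P'}])
```

Saturation always terminates because only finitely many clauses can be built from a finite set of literals; practical provers add strategies (next slides) to tame the blow-up.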
Some Resolution Strategies
Unit resolution: every step is a unit resolution (one of the
two clauses is a unit clause).
Input resolution: one of the two clauses must be an input
clause.
Set of support: one of the two clauses must be from a
designated set called the set of support. New resolvents are
added into the set of support.
In general, to show Γ implies φ, the clauses from ¬φ are
put into the set of support.
Note: The three strategies above are incomplete in general.
The Unit resolution strategy is equivalent to the Input
resolution strategy: a proof in one strategy can be
converted into a proof in the other.
Knowledge-Based Systems
A knowledge-based system consists of two components:
A data set called the knowledge base that is usually
represented by a set of clauses.
An inference engine that uses a special resolution
strategy.
An example
p: the car does not start
q: the engine does not turn over
r: the lights do not come on
s: the engine is getting enough gas.
t: there is gas in the fuel tank
u: there is gas in the carburetor
v: the problem is the battery
w: the problem is spark plugs
x: the problem is the starter motor
Knowledge Base:
IF t AND u THEN s
IF p AND q AND r THEN v
IF p AND ¬q AND s THEN w
IF p AND q AND ¬r THEN x
Observation: p, ¬q, t, u
Two Strategies
Backward Chaining: The set of support is the negation of
the conclusion. Resolve on B, where B is the conclusion
of a rule like
IF A1 ∧ · · · ∧ An THEN B, or an observation.
Forward Chaining: The set of support is the set of facts
from observation. Resolve on Ai , where the Ai are the
premises of a rule like
IF A1 ∧ · · · ∧ An THEN B.
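Forward chaining over IF-THEN rules like those in the knowledge base above can be sketched as a fixpoint computation. The rule encoding is illustrative (literals such as '-q' stand for negated observations):

```python
def forward_chain(rules, facts):
    """Naive forward chaining: keep firing any rule whose premises are all
    known facts, until nothing new can be derived.  Rules are
    (premises, conclusion) pairs over literals like 'p' or '-q'."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if prem <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

# The car knowledge base from the previous slide:
kb = [({'t', 'u'}, 's'),            # IF t AND u THEN s
      ({'p', 'q', 'r'}, 'v'),       # IF p AND q AND r THEN v
      ({'p', '-q', 's'}, 'w'),      # IF p AND not q AND s THEN w
      ({'p', 'q', '-r'}, 'x')]      # IF p AND q AND not r THEN x

derived = forward_chain(kb, {'p', '-q', 't', 'u'})
assert 'w' in derived               # diagnosis: the problem is spark plugs
```

From the observations p, ¬q, t, u the first rule fires (deriving s), then the third rule fires (deriving w), matching the set-of-support intuition that inference is driven by the observed facts.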
Proof System for Satisfiability
The Davis-Putnam-Logemann-Loveland (DPLL) method
takes a set of input clauses and looks for a model of the
clauses if they are satisfiable.
Inference Rules:
Unit Resolution:  S ∪ {α ∨ β, ¬β} / S ∪ {α, ¬β}
Unit Subsumption:  S ∪ {α ∨ β, β} / S ∪ {β}
Case Splitting:  S / S ∪ {α}  or  S ∪ {¬α}
Proof System for Satisfiability
Example: {p ∨ q, p ∨ ¬q, ¬p ∨ ¬q}
is satisfiable.
{p ∨ q, p ∨ ¬q, ¬p ∨ ¬q} : Input
{p ∨ q, p ∨ ¬q, ¬p ∨ ¬q, p} : Case Splitting
{p ∨ ¬q, ¬p ∨ ¬q, p} : Unit Subsumption
{¬p ∨ ¬q, p} : Unit Subsumption
{¬q, p} : Unit Resolution
The original clause set is true under the interpretation that
makes p true and ¬q true (i.e., q false).
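The three rules above (unit resolution, unit subsumption, case splitting) are enough for a miniature DPLL solver. A Python sketch with my own clause encoding ('p' / '-p' literals) and without the pure-literal rule:

```python
def dpll(clauses, theta=None):
    """DPLL sketch: unit propagation plus case splitting.  Returns a
    satisfying assignment as a dict of literals made true, or None if
    the clause set is unsatisfiable."""
    theta = dict(theta or {})
    clauses = [frozenset(c) for c in clauses]
    neg = lambda l: l[1:] if l.startswith('-') else '-' + l
    # Unit propagation: make each unit clause true, then simplify
    # (unit subsumption removes satisfied clauses, unit resolution
    # removes the complementary literal from the rest).
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        theta[lit] = True
        clauses = [c - {neg(lit)} for c in clauses if lit not in c]
    if not clauses:
        return theta                    # all clauses satisfied
    if any(not c for c in clauses):
        return None                     # empty clause: contradiction
    lit = next(iter(clauses[0]))        # case splitting on some literal
    return (dpll(clauses + [frozenset([lit])], theta)
            or dpll(clauses + [frozenset([neg(lit)])], theta))

model = dpll([{'p', 'q'}, {'p', '-q'}, {'-p', '-q'}])
assert model is not None and model.get('p') and model.get('-q')
```

On the example above it finds exactly the model of the slides: p true and ¬q true.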
DIMACS Input Format for SAT Solvers
p cnf 16 80     % p=proposi., cnf=CNF, 16 var., 80 clauses
1 2 3 4 0       % variable numbering: 1 = p11, 2 = p12, 3 = p13, 4 = p14
5 6 7 8 0       %                     5 = p21, 6 = p22, 7 = p23, 8 = p24
9 10 11 12 0    %                     9 = p31, 10 = p32, 11 = p33, 12 = p34
13 14 15 16 0   %                     13 = p41, 14 = p42, 15 = p43, 16 = p44
-12 -16 0       % not p34 \/ not p44
-8 -16 0        % not p24 \/ not p44
-1 -13 0        % not p11 \/ not p41
-1 -6 0         % not p11 \/ not p22
-2 -5 0         % not p12 \/ not p21
... ...
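Reading this format is straightforward: skip comments, read the "p cnf" header, and split the remaining integers into zero-terminated clauses. A Python sketch (the function name is my own; standard DIMACS comments start with 'c', and the '%' style shown above is accepted too):

```python
def parse_dimacs(text):
    """Minimal DIMACS CNF reader: returns (num_vars, clauses), where each
    clause is a list of nonzero ints (negative = negated variable)."""
    num_vars, clauses, current = 0, [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in 'c%':     # comment or end-of-file marker
            continue
        if line.startswith('p'):            # header: p cnf <vars> <clauses>
            _, fmt, nv, _nc = line.split()
            assert fmt == 'cnf'
            num_vars = int(nv)
            continue
        for tok in line.split():            # clause literals, 0-terminated
            n = int(tok)
            if n == 0:
                clauses.append(current)
                current = []
            else:
                current.append(n)
    return num_vars, clauses

nv, cls = parse_dimacs("p cnf 16 80\n1 2 3 4 0\n-12 -16 0\n")
assert nv == 16 and cls == [[1, 2, 3, 4], [-12, -16]]
```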
Performance of SATO
The n-queen problem (time in seconds):

 n       solutions   one soln.   all soln.
 8              92        0.00        0.01
 10            724        0.00        0.04
 20       >100,000        0.00        >240
 40              −        0.16           −
 60              −        1.30           −
 80              −        4.30           −
 100             −       11.20           −

There are 1,646,800 clauses for n = 100.