Beginning Logic
Gregory Igusa
University of Notre Dame
Contents

0.1 What is logic? What will we do in this course?

1 Propositional logic
1.1 Beginning propositional logic
1.2 More translation, and a start on proofs
1.3 Negations, and more on proofs
1.4 Proving implications
1.5 More on CP, and new rules for conjunctions
1.6 Rules for disjunctions
1.7 Completing the set of rules for propositional logic
1.8 More Examples
1.9 Summary of all rules
1.10 Bi-conditional, and the use of proofs in analyzing arguments
1.11 Truth
1.12 Analysis of arguments using truth tables
1.13 Analyzing arguments in English
1.14 Puzzles
1.15 Soundness and Completeness

2 Predicate logic
2.1 Beginning predicate logic
2.2 Predicate Logic
2.3 Learning the idioms for predicate logic
2.4 Free variables and sentences
2.5 Toward proofs in predicate logic
2.6 Proving universal statements
2.7 More on proofs in predicate logic
2.8 More examples
2.9 What comes next?
2.10 Toward the analysis of arguments in predicate logic
2.11 Applying our techniques to analyze arguments
2.12 Rules of proof for equality
2.13 More on satisfaction and analysis of arguments
2.14 Advanced topics
2.15 Final words
0.1 What is logic? What will we do in this course?
The main goal of this course is to develop some formal tools for analyzing arguments. An argument has a set of premises, or assumptions, and a conclusion.
The argument is “valid” if, whenever the premises are all true, the conclusion is
also true. In logic, we look at the form of the argument. We try to say when the
form of the argument itself guarantees that any reasonable person who accepts
the premises must accept the conclusion.
Example 0.1.1.
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
This argument may be familiar. The first two statements are the premises.
The final statement is the conclusion.
Example 0.1.2.
All Notre Dame women are smart.
Susan is a Notre Dame woman.
Therefore, Susan is smart.
The second argument has the same form as the first. If we agree that the
first argument is valid, then we should believe the second as well.
As tools for analyzing arguments, we will develop formal languages, a formal
proof system, and a formal definition of truth.
I. Formal languages
Some arguments, in English, are difficult to analyze because of the length
and complexity of the sentences. It helps to break things into simpler parts, and
focus on the way that the parts are put together. We will translate arguments
from English into some formal languages. This process makes the form of an
argument transparent—we can see how the simple parts appear in different
statements. For an argument stated in English, there may be some ambiguous
statements—capable of being interpreted in more than one way. The process of
translating forces us to resolve ambiguities.
We will consider first propositional languages. These are useful for arguments
in which what is important is the way statements are combined by connectives
“and”, “or”, “implies”, etc. We will then consider predicate languages, with
predicate symbols for properties such as being mortal, and quantifiers “for all”
and “for some”. The sample arguments above require predicate logic for a
successful translation.
II. Proofs
A proof is a sequence of small steps leading from a set of premises to a
conclusion. The proof should convince anyone who believes the premises to
accept the conclusion. We will work with a specific formal system of proof. The
most important features, true of any good proof system, are “soundness” and
“completeness”. Together, these say that a statement C can be proved from a
set A of assumptions if and only if C is true whenever the assumptions in A are
all true. In other words, soundness and completeness say that the following two
statements are equivalent.
1. From the assumptions in A, one can prove C.
2. If all the assumptions in A are true, then C is true.
Our proof system may seem complicated at first, but we will develop it a
little at a time. If you put in enough effort, you can master the system.
III. Truth
We will define what it means for a statement in a propositional or predicate
language to be true in an appropriate formal setting. To show that an argument
is not valid, we will look for a “counter-example”, a setting in which the premises
are all true and the conclusion is false.
IV. Analysis of arguments
We will apply all three of the tools above to analyze arguments. We will
translate our argument into an appropriate language and then use proof and/or
truth analysis to determine whether the argument is valid. For an argument
that translates successfully into propositional logic, we can carry out the truth
analysis in a definite way to determine whether the argument is valid. For
an argument that requires predicate logic, we will search simultaneously for a
proof and for a counterexample. If the argument is valid, then there is a proof,
and there is no counterexample. If the argument is not valid, then there is a
counterexample, and there is no proof. It is a deep result that there is no definite
procedure for deciding whether an argument is valid. We must be creative and
learn from the failure of our attempts.
Students who have taken Beginning Logic in the past have said that the
course improved their ability to analyze complex material, and their ability to
write so as to make a point.
V. Further topics in logic
There are many further topics in logic. You will investigate one for your
paper. Here are just a few of the further topics you might consider.
1. Set theory. Are there different sizes of infinity? For example, how can
we compare the sizes of the set of counting numbers 1, 2, 3, . . . and the set
of real numbers between 0 and 1? Cantor developed a way to compare
these sets and then gave a surprising construction to show that the set of
real numbers is larger. Cantor’s idea, diagonalization, has been used in
many other settings as well.
2. Computability. What problems can a machine solve? To show that
there is a mechanical procedure for solving some kind of problem, we simply need to describe the procedure. To show that there is no mechanical
procedure for solving some kind of problem, we need to know exactly what
qualifies as a mechanical procedure. Turing, Church, Kleene, Post, and
others attempted to give definitions. While the definitions looked very
different, they all turn out to be equivalent. The Church-Turing Thesis
says that the definitions are “correct”—that they capture the idea of a mechanical procedure. Turing asked some questions which you might like to
consider. Can a machine think? If you are communicating electronically
with a machine and with a human, can you tell which is which?
3. Computer-aided proofs. There are some important mathematical results, in particular the “four-color theorem”, on coloring maps, and “Kepler’s Conjecture”, on packing oranges efficiently, that have been proved
with the aid of a computer. Do these really count as proofs, if a human
can’t check them?
4. Logical paradoxes. One example of a logical paradox is the statement
“I am lying”. You see the paradox when you try to determine whether
the statement is true or false.
5. Incompleteness. Gödel showed that for the natural numbers with the
usual addition and multiplication, no “nice” set of axioms (true and recognizable) is enough to prove all true statements. The idea is to write a
sentence that refers to itself and says, in a coded way, “I am unprovable”.
This sentence turns out to be true and unprovable. (The statement was
inspired by the Liar Paradox stated above.)
Exercises 0.1
Talk with the members of your group about why you signed up for this
course, and what you hope to get out of it. Prepare a short statement (approximately one page), jointly written, summarizing this discussion. Your group may
designate one scribe, or each group member may write a paragraph, but all of
you should read and approve the whole statement before you turn it in. Using a computer will make it easy to fold separate paragraphs together or make
changes, but a handwritten statement is fine, too.
Chapter 1
Propositional logic
1.1 Beginning propositional logic
We shall consider two different kinds of formal languages. The first languages
are propositional. In propositional logic, we let single letters P , Q, etc., stand
for basic statements, and we combine them using symbols for “and”, “or”,
“implies”, “if and only if”, and “not”.
1.1.1 Symbols
Here are the symbols of propositional logic.
propositional variables: P, Q, R, etc.
logical connectives (and their intended meanings):
& (and)
∨ (or)
¬ (not)
→ (implies, or if . . . then . . .)
↔ (if and only if)¹
parentheses: (, ).
1.1.2 Well-formed formulas
An allowable statement is called a well-formed formula, or wff, for short. Below,
we say exactly what counts as a well-formed formula in propositional logic.
¹ The symbol ↔ is included only temporarily among the formal symbols. Later, we will
think of it as an abbreviation.
Definition 1.1.1 (Well-formed formulas).
1. A propositional variable P, Q, R, etc., is a wff, just by itself.
2. If G is a wff, then so is ¬G.
3. If G and H are wffs, so are (G & H), (G ∨ H), (G → H), and (G ↔ H).
4. Nothing is a wff unless it can be gotten by finitely many applications of
Rules 1, 2, and 3.
The definition tells us how more complicated formulas are built up from
simpler ones. To show that something is a well-formed formula, we give a finite
sequence of steps, where each step is obtained by an application of Rule 1, or
the result of applying Rule 2 or Rule 3 to earlier steps. Such a sequence is called
a formation sequence.
Example 1.1.1. Show that the following is a wff: (((P & ¬Q)∨R) → (R & Q)).
We give a formation sequence.
1. P is a wff by Rule 1,
2. Q is a wff by Rule 1,
3. R is a wff by Rule 1,
4. ¬Q is a wff by Rule 2 applied to 2,
5. (P & ¬Q) is a wff by Rule 3 applied to 1, 4,
6. ((P & ¬Q) ∨ R) is a wff by Rule 3 applied to 5, 3,
7. (R & Q) is a wff by Rule 3 applied to 3, 2,
8. (((P & ¬Q) ∨ R) → (R & Q)) is a wff by Rule 3 applied to 6, 7.
We could give a different formation sequence for the same formula, changing
the order of certain steps or adding extra steps.
1. R is a wff by Rule 1,
2. Q is a wff by Rule 1,
3. (R & Q) is a wff by Rule 3 applied to 1, 2,
4. P is a wff by Rule 1,
5. (P ∨ Q) is a wff by Rule 3 applied to 4, 2,
6. ¬Q is a wff by Rule 2 applied to 2,
7. (P & ¬Q) is a wff by Rule 3 applied to 4, 6,
8. ((P & ¬Q) ∨ R) is a wff by Rule 3 applied to 7, 1,
9. (((P & ¬Q) ∨ R) → (R & Q)) is a wff by Rule 3 applied to 8, 3.
Example 1.1.2. The following is not a wff: P ¬Q → (
This is pretty obvious, but if we wanted to, we could argue “inductively” that
no wff ends in the symbol “(”. Think of a possible formation sequence, with
finitely many steps. For a step using Rule 1, there are no occurrences of “(”.
For a step using Rule 2 or Rule 3, the last symbol is “)”, not “(”. So, at each
step, in particular, at the last step, the formula does not end in “(”.
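In fact, such a check can be made mechanical. The following rough sketch (in Python, which is not part of this course; the function name and the string conventions are our own) tests well-formedness by trying Rules 1, 2, and 3 recursively:

```python
# A mechanical well-formedness test based on Rules 1-3.
# Propositional variables are single capital letters; the connectives
# are written with the same symbols as in the text: & ∨ ¬ → ↔

def is_wff(s):
    s = s.replace(" ", "")
    # Rule 1: a propositional variable, just by itself.
    if len(s) == 1 and s.isalpha() and s.isupper():
        return True
    # Rule 2: ¬G is a wff when G is.
    if s.startswith("¬"):
        return is_wff(s[1:])
    # Rule 3: (G op H), with a binary connective at parenthesis depth 1.
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i, c in enumerate(s):
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
                if depth == 0 and i != len(s) - 1:
                    return False  # the outer "(" closes before the end
            elif depth == 1 and c in "&∨→↔":
                if is_wff(s[1:i]) and is_wff(s[i + 1:-1]):
                    return True
        return False
    # Rule 4: nothing else is a wff.
    return False

print(is_wff("(((P & ¬Q) ∨ R) → (R & Q))"))  # the wff of Example 1.1.1
print(is_wff("P¬Q→("))                        # the non-wff of Example 1.1.2
```

The check mirrors the inductive argument above: a string passes only if it can be built by finitely many applications of the rules.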
1.1.3
Translation
We are ready to do some translation. In the process, we will firm up the meaning
of our connectives, removing some ambiguities that are present in English.
Translate the following into English, letting P stand for “Peter walks the
dog”, letting D stand for “the dog barks”, and letting C stand for “the cat
makes trouble”.
1. (P ∨ D)
2. (C → D)
3. (C ↔ D)
4. ¬D
5. ¬(C ∨ D)
1. We may write “Either Peter walks the dog or it barks”. The symbol ∨ stands
for an “inclusive” or. That is, we allow the possibility that both things happen.
The word “unless” is similar to “or”. We could also write “The dog barks unless
Peter walks it”.
2. We may write, “If the cat makes trouble, then the dog barks”, or “When the
cat makes trouble, the dog barks”. Note that our “if. . . then” does not tell us
that the second clause actually holds. It means that if the first clause is true,
then the second is also true—if the first clause is false, then the second may or
may not be true.
3. We may write “The cat makes trouble if and only if the dog barks”, or
“The cat makes trouble exactly when the dog barks”. Note that “if and only
if” means that both implications hold. Either both things are true or both are
false.
4. We may write “The dog does not bark”, or “The dog is not barking”. There
is a subtle difference in English, which cannot be expressed in propositional
logic.
5. We may write “It is not the case that either the cat makes trouble or the dog
barks”, or “Neither does the cat make trouble nor does the dog bark”. Note
that the meaning of ¬(C ∨ D) is quite different from that of (¬C ∨ ¬D). In the
first statement, we mean that both C and D are false, while in the second, we
mean that at least one is false.
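The difference between ¬(C ∨ D) and (¬C ∨ ¬D) can be seen by running through all four assignments of truth values. A small sketch in Python (the course itself assumes no programming; the code is only an illustration):

```python
from itertools import product

# Compare ¬(C ∨ D) with (¬C ∨ ¬D) on every assignment of truth values.
for c, d in product([True, False], repeat=2):
    first = not (c or d)          # ¬(C ∨ D): both are false
    second = (not c) or (not d)   # (¬C ∨ ¬D): at least one is false
    print(c, d, first, second)
# The two columns differ exactly when one of C, D is true and the other false.
```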
Consider the following argument.
If the river floods, then our entire wheat crop will be destroyed. If
the wheat crop is destroyed, then our community will be bankrupt.
The river will flood if there is an early thaw. In any case, there will
be heavy rains later in the summer. Therefore, if there is an early
thaw, then our entire community will be bankrupt unless there are
heavy rains later in the summer.
Looking at the argument while it is still in English, let us pick out the
premises, and the conclusion. Then let us translate.
Premises
1. If the river floods, then our entire wheat crop will be destroyed.
2. If the wheat crop is destroyed, then our community will be bankrupt.
3. The river will flood if there is an early thaw.
4. In any case, there will be heavy rains later in the summer.
Conclusion: If there is an early thaw, then our entire community will be
bankrupt unless there are heavy rains later in the summer.
We choose some basic propositional variables—symbols to stand for the important smaller pieces that occur in the statements.
Let F—river floods, D—wheat crop destroyed, T—early thaw, B—bankrupt,
R—heavy rains later.
The premises are:
1. (F → D)
2. (D → B)
3. (T → F )
4. R
The conclusion is:
(T → (B ∨ R))
If we moved the comma from after “thaw” to after “bankrupt”, we would probably change the translation to
((T → B) ∨ R).
Exercises 1.1
1. Show that ((P & Q) → ¬¬R) is a well-formed formula. Do this in two
different ways; that is, give two different formation sequences.
2. Translate each of the following from symbols to English, letting A stand for
“Alice minds the baby”, letting B stand for “The baby cries”, and letting
C stand for “The Cheshire cat makes trouble”. Make your translation
follow the propositional formula as closely as possible. Don’t worry if the
result is awkward English. Below you will be asked to translate the same
formulas again, aiming for more natural English.
(a) (A & (B & C))
(b) ((A & B) & C)
(c) (¬(A & B) & C)
(d) (A & ¬(B & C))
(e) (A & (B ∨ C))
(f) ((A → B) & C)
(g) (¬(A → B) & C)
(h) (A → ¬(B & C))
(i) (A → (B ∨ C))
(j) ((A → B) ∨ C)
You may find yourself using special wordings such as the following:
• “it is not the case that both . . . and . . . ”,
• “it is not the case that . . . implies . . . ”,
• “. . . and, in addition, either . . . or . . . ”,
• “. . . implies that either . . . or . . . ”.
Try to suggest the groupings given by parentheses, using commas or extra
words.
3. Translate the formulas above into more idiomatic English if you can do
it without losing the precise meaning. Some of the formulas may be too
complex to sound completely natural.
4. On her 100th birthday, a sweet old lady is asked, “What is the secret of
your long life?” “Oh”, she answers, “I strictly follow my diet”, and she
proceeds to give the following argument.
If I don’t drink beer for dinner, then I always have fish. Any
time I have both beer and fish for dinner, then I do without ice
cream. So, if I have ice cream or don’t have beer, then I never
eat fish.
Say what are the premises and what is the conclusion of this argument.
Then translate, letting B—drink beer, F —have fish, I—have ice cream.
Note that for now, we are not analyzing the validity of the argument, but
rather simply asking what the premises are and what the conclusion is.
Later, we will discuss how to check whether the conclusion follows logically
from the premises. If you like, you are welcome to start thinking about
how one would show an argument to be valid or invalid.
Further problems: Below are some questions that you might like to think
about. The answers will not be given in class. If you have ideas, write them
down and turn them in for extra credit.
Could you take an arbitrary finite sequence of symbols and figure out whether
it is a well-formed formula? Could you explain to someone else how to do this?
Could a machine do it?
1.2 More translation, and a start on proofs
1.2.1 More translation
You have seen that propositional logic lacks the nuances of English. When you
translate, try to check that the statement you are translating would be true
in the same cases as your translation. When you translate from English into
propositional logic, you may find it helpful to first re-write the statement, still
in English, but closer to the idiom of propositional logic.
Certain words are tricky. The word “but” means roughly the same as “and”.
For example, the following are approximately the same:
(1) It is rainy but warm.
(2) It is rainy and warm.
Our ∨ is inclusive, whereas the English “or” is sometimes inclusive and sometimes not. There is ambiguity associated with the English “or”, and when we
use ∨, we are removing the ambiguity. The word “unless” means roughly the
same as “or”. For example, the following are approximately the same:
(1) You will not get credit for the course unless you register.
(2) Either you register for the course, or you will not get credit.
We talked about →. When we write (P → Q), we mean that if P is true,
then so is Q. Keeping this in mind, think how the statement (P → Q) could
fail to be true—we must have P true and Q false. In English, there is some
ambiguity associated with implications. When we say “if P , then Q”, or “P
implies Q”, we might simply mean that if P is true, then Q is true, or we might
mean that P is the cause of Q. Causes are awfully difficult to determine, and
that is one of the reasons we have chosen the simpler meaning for our formal
symbol →.
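The reading of (P → Q) just described can be tabulated: the implication fails in exactly one of the four cases, where P is true and Q is false. A sketch in Python (an illustration only; the helper name implies is ours):

```python
from itertools import product

def implies(p, q):
    # The truth-functional reading chosen for →:
    # false only when p is true and q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
# Only the row with p True and q False shows False.
```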
Inverting an implication
In English, we may write an implication with the “if” part at the front, or we
may put it later. We translate “If P , then Q” and “Q if P ” the same way.
Example 1.2.1. Classes are canceled if there is a tornado warning. Letting
C—classes canceled, and T—tornado warning, we would write (T → C).
Only if
Consider the following statements, which involve the phrase “only if”.
1. You can register for second semester Italian only if you can show that you
know the material from first semester Italian.
2. Classes are canceled only if there is a tornado warning.
The first statement means “If you register for second semester Italian, then
you must show that you know the material from the first semester”. The second statement means “If classes are canceled, then there must be a tornado
warning”. In general, P only if Q translates as (P → Q).
If and only if
Finally, we say a little about ↔. In general, the statement (P ↔ Q) is equivalent
to ((P → Q) & (Q → P)). Letting C—classes are canceled and T—tornado
warning, we translate (C ↔ T) by saying that classes are canceled if and only if
there is a tornado warning. If (C ↔ T ) is true, then we have (T → C), saying
that if there is a tornado warning, then classes are canceled, and we also have
(C → T ), saying that classes are canceled only if there is a tornado warning.
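That (P ↔ Q) says the same thing as ((P → Q) & (Q → P)) can be confirmed by checking all four assignments of truth values; a brief Python sketch (an illustration only, with our own helper imp standing for →):

```python
from itertools import product

def imp(a, b):
    return (not a) or b  # the truth-functional reading of →

# (P ↔ Q) holds exactly when P and Q have the same truth value,
# which is exactly when both implications hold.
for p, q in product([True, False], repeat=2):
    assert (p == q) == (imp(p, q) and imp(q, p))
print("(P ↔ Q) agrees with ((P → Q) & (Q → P)) on all four assignments")
```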
We will do more translation in propositional logic. Now, however, we begin
developing our formal system of proof.
Beginning proofs
Some people believe that writing proofs in a formal system is mental exercise
that gets the mind in shape for strenuous thinking of all kinds. Whether or
not you believe this, you will see that proofs written in a formal system can be
checked by a human with no special ingenuity—they can even be checked by a
machine.
Definition 1.2.1. A proof of a conclusion F, from a set A of premises, or assumptions, is
a finite sequence of statements F1, . . ., Fn, ending with Fn = F, such that for
each i, Fi is obtained by applying a rule of inference. If the rule that gives Fi
requires some inputs, these must occur as earlier statements Fj for j < i. We
keep track of the assumptions behind each Fi, and the assumptions behind Fn
must all be in A. We write A ⊢ F if there is a proof of F from assumptions in A.
When we write proofs, we will give decorations at each step, where these
keep track of the assumptions behind the step, the rule being used, and specific
earlier steps to which the rule is being applied. Our system of proof may look
daunting at first, but we will spend enough time on it that you can all master it.
As we introduce the rules of proof, see if you believe that each rule is “sound”,
in the sense that if the assumptions are true, then the conclusion is also true.
The first rule is used a great deal.
Rule 1. Assumption, abbreviated A. This rule says that from F we can
conclude F. We write
F ▷ F.
The rule A is clearly sound. If you believe F, then you must believe F.
We use the symbol “▷” to denote that a specific deduction is a rule of
inference. The second rule is a little more complicated. It has a Latin name.
Rule 2. Modus Ponens (mode that affirms), or Modus Ponendo Ponens
(mode that by affirming affirms), abbreviated MPP. This rule says that from
F and (F → G), we can conclude G. We write
F, (F → G) ▷ G.
If you think about the meaning of an implication, you can see that the rule
MPP is sound. If you believe F and (F → G), you would also believe G.
There are more rules, but let us write a formal proof using just these two.
We introduce the decorations in the example. We number the steps. For each
step, we record the rule being used for the step, together with the previous steps
putting us in a position to apply the rule, and the full set of assumptions that
lie behind the step.
Example 1.2.2. We give a formal proof of S from the assumptions P, (P → Q),
(Q → R), and (R → S), showing that
P, (P → Q), (Q → R), (R → S) ⊢ S.
Assumptions   Result           Rule and its inputs
1             1. P             A
2             2. (P → Q)       A
1, 2          3. Q             MPP 1, 2
4             4. (Q → R)       A
1, 2, 4       5. R             MPP 3, 4
6             6. (R → S)       A
1, 2, 4, 6    7. S             MPP 5, 6
Look at the form of the proof. We have numbered the steps 1-7. Look at the
decorations on the right and also on the left and make sure that you understand
how they were arrived at. You will need to use decorations of these kinds when
you write your own proofs. On the right, we give the rule being applied on the
current step. In the case of MPP, we also give the previous steps to which we
are applying the rule, with the implication second. On the left, we keep track
of all of the assumptions behind the given step, listing their numbers.
Decorations for A:
On a step where we apply the A-rule, the decoration on the right is just the
rule A—the A-rule does not require any previous steps, so there are no numbers.
On the left, we write only the number of the current line, reflecting the fact that
the only assumption used to get this step is one we are currently making.
Decorations for MPP:
On a step where we apply the rule MPP, the decoration on the right consists
of the rule MPP and then two numbers. When we use MPP, we must have
something of the form F and (F → G) on earlier steps, and the current step
must be G. In the decorations on the right, we first write “MPP”. Then we give
two numbers, first the step where F occurs, and then the step where (F → G)
occurs.
The order matters on the right. On the left, we put the numbers of the
assumptions behind the earlier steps referred to—F and (F → G). The order
does not matter in the decorations on the left. To help explain this, consider
the following slightly modified version of the same proof.
Example 1.2.3.
P, (P → Q), (Q → R), (R → S) ⊢ S.
Assumptions   Result           Rule and its inputs
1             1. (P → Q)       A
2             2. P             A
1, 2          3. Q             MPP 2, 1
4             4. (Q → R)       A
1, 2, 4       5. R             MPP 3, 4
6             6. (R → S)       A
1, 2, 4, 6    7. S             MPP 5, 6
The only difference between Example 1.2.2 and Example 1.2.3 is the order
of the assumptions: In Example 1.2.2, P is assumption 1, and (P → Q) is
assumption 2. In Example 1.2.3, P is assumption 2, and (P → Q) is assumption
1. When you apply Modus Ponens, you use first F and then (F → G), to
conclude G. Thus, in both examples, when we use MPP, we list the line where
we wrote P before the line where we wrote (P → Q), so in the first one, we
write 1,2, and in the second one, we write 2,1 on the right.
On the other hand, in both examples, the two assumptions needed to conclude line three are the assumptions from lines 1 and 2, and so in both, on the
left, we list 1, 2.
Our third rule also has a Latin name.
Rule 3. Modus Tollens (mode that denies), or Modus Tollendo Tollens
(mode that by denying denies), abbreviated MTT. This rule says that if we
have (F → G) and ¬G, then we may conclude ¬F. We write
(F → G), ¬G ▷ ¬F.
We claim that MTT is sound. If you believe (F → G) and ¬G, you should
also believe ¬F.
Decorations for MTT:
On a step where we apply the rule MTT, the decoration on the right consists
of the rule MTT, in addition to two numbers. When we use MTT, we must have
something of the form (F → G) and ¬G on earlier steps, and the current step
must be ¬F. The first number represents the step where we have (F → G),
and the second number represents the step where we have ¬G. Again, the
order matters. We put the implication first. On the left, we give the numbers
corresponding to the assumptions behind the earlier steps referred to—(F → G)
and ¬G.
Let us see how MTT is used in an example.
Example: We show that ¬R, (P → Q), (Q → R) ⊢ ¬P.

Assumptions   Result           Rule and its inputs
1             1. (Q → R)       A
2             2. ¬R            A
1, 2          3. ¬Q            MTT 1, 2
4             4. (P → Q)       A
1, 2, 4       5. ¬P            MTT 4, 3
The next rule is really two rules in one.
Rule 4. Double Negation, abbreviated DN.
(a) From ¬¬F, we may conclude F. In symbols, ¬¬F ▷ F.
(b) From F, we may conclude ¬¬F. In symbols, F ▷ ¬¬F.
Decorations for DN: On the right, we give the rule, plus a single number—
the number of the step where we have ¬¬F in case (a), and where we have F
in case (b). On the left, we write the numbers of the assumptions behind that
earlier step.
Think about whether you believe that the two versions of DN are sound. We
shall adopt the “classical” view of truth, according to which, for any statement
G, either G is true or ¬G is true, and not both. Taking this point of view, if
you believe ¬¬F , then you would also believe F . Conversely, if you believe F ,
you would believe ¬¬F .
Example 1.2.4. Show that (P → ¬Q), Q ⊢ ¬P.

Assumptions   Result           Rule and its inputs
1             1. (P → ¬Q)      A
2             2. Q             A
2             3. ¬¬Q           DN 2
1, 2          4. ¬P            MTT 1, 3
It is tempting to skip step 3 here, but you need to resist the temptation. To
apply MTT, we must have something of the form (F → G) and ¬G, and if our
G is itself a negation, then ¬G will be the result of adding another negation.
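Because every step carries its rule, its inputs, and its assumption set, checking a finished proof requires no special ingenuity, and it can indeed be done by a machine. Below is a rough sketch in Python of a checker for the four rules so far. The tuple representation of formulas and the 0-indexed line numbers are our own choices for the illustration, not part of the formal system:

```python
# Formulas as nested tuples: ("var", "P"), ("not", f), ("imp", f, g).
# A proof is a list of lines (formula, rule, inputs); `inputs` holds
# 0-indexed line numbers (the text numbers lines from 1).

def check(proof):
    """Return the assumption set behind each line, or raise on an illegal step."""
    assumption_sets = []
    for i, (formula, rule, inputs) in enumerate(proof):
        if rule == "A":
            assumption_sets.append({i})       # an assumption rests on itself
        elif rule == "MPP":                   # from F and (F -> G), conclude G
            j, k = inputs                     # j: F, k: (F -> G)
            if proof[k][0] != ("imp", proof[j][0], formula):
                raise ValueError(f"bad MPP at line {i}")
            assumption_sets.append(assumption_sets[j] | assumption_sets[k])
        elif rule == "MTT":                   # from (F -> G) and ¬G, conclude ¬F
            j, k = inputs                     # j: (F -> G), k: ¬G
            node = proof[j][0]
            if (node[0] != "imp" or proof[k][0] != ("not", node[2])
                    or formula != ("not", node[1])):
                raise ValueError(f"bad MTT at line {i}")
            assumption_sets.append(assumption_sets[j] | assumption_sets[k])
        elif rule == "DN":                    # add or remove a double negation
            (j,) = inputs
            if (formula != ("not", ("not", proof[j][0]))
                    and proof[j][0] != ("not", ("not", formula))):
                raise ValueError(f"bad DN at line {i}")
            assumption_sets.append(assumption_sets[j])
        else:
            raise ValueError(f"unknown rule {rule}")
    return assumption_sets

# Example 1.2.4: (P -> ¬Q), Q entails ¬P
P, Q = ("var", "P"), ("var", "Q")
proof = [
    (("imp", P, ("not", Q)), "A", ()),   # 1. (P -> ¬Q)   A
    (Q, "A", ()),                        # 2. Q           A
    (("not", ("not", Q)), "DN", (1,)),   # 3. ¬¬Q         DN 2
    (("not", P), "MTT", (0, 2)),         # 4. ¬P          MTT 1, 3
]
print(check(proof)[-1])  # assumptions behind ¬P: lines 0 and 1
```

Skipping step 3 and citing MTT with lines 1 and 2 directly would make the checker raise an error, which is exactly the point of the warning above.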
Exercises 1.2
A. Proof problems
For each of the following, give a formal proof, with appropriate decorations on
both the left and right.
1. (P → (P → Q)), P ⊢ Q
2. (Q → (P → R)), ¬R, Q ⊢ ¬P
3. ¬R, (¬R → (P → R)) ⊢ ¬P
4. P ⊢ ¬¬¬¬P
5. (P → ¬¬Q), P ⊢ Q
6. (¬¬Q → P), ¬P ⊢ ¬Q
7. (¬P → ¬Q), Q ⊢ P
B. Translations
1. Translate the following into English. Let W stand for “it is windy”, let
T stand for “there is a tornado warning”, and let C stand for “classes are
canceled”.
(a) (W ∨ T )
(b) (T → C)
(c) (T ↔ C)
(d) ¬W
(e) ¬(W ∨ T )
(f) ¬(C & T )
2. As above, define W—windy, T—tornado warning, C—classes canceled.
Translate from English into propositional formulas. You may find it helpful
to first rephrase the statement in English, and then translate.
(a) Classes are canceled only if there is a tornado warning.
(b) Classes are canceled if either it is windy or there is a tornado warning.
(c) If it is windy and there is a tornado warning, then classes are canceled.
1.3 Negations, and more on proofs
1.3.1 Negations
To translate ¬(F & G), the safest thing is to say “it is not the case that both F
and G”. Similarly, to translate ¬(F ∨ G), the safest thing is to say “it is not the
case that either F or G”. It is awfully tempting to believe that negations can
just be brought “inside” the other connectives. However, we must be careful,
or we will lose our intended meaning.
1. ¬(F & G) is not the same as (¬F & ¬G). The latter says that both F
and G are false. The former says that (F & G) is false. Now, (F & G) is
true just in case F and G are both true, so it is false just in case at least
one of F , G is false. We conclude that ¬(F & G) has the same meaning
as (¬F ∨ ¬G).
2. ¬(F ∨ G) is not the same as (¬F ∨ ¬G). The latter says that at least one
of F and G is false. The former says that (F ∨ G) is false. Now, (F ∨ G)
is true just in case at least one of F and G is true, so it is false just in
case both are false. We conclude that ¬(F ∨ G) has the same meaning as
(¬F & ¬G).
3. ¬(F → G) is not the same as (¬F → ¬G). The first is true just in case
(F → G) is false. Recall that (F → G) means that if F is true, then
so is G. The implication is false just when F is true and G is false.
We conclude that ¬(F → G) has the same meaning as (F & ¬G). The
statement (¬F → ¬G) is true just in case if ¬F is true, then ¬G is true.
In other words, (¬F → ¬G) is true if ¬G is true or ¬F is false. Find a
statement that is equivalent to (¬F → ¬G) as above. We will eventually
have methods to prove formally that these statements are equivalent.
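In the meantime, these equivalences can be checked mechanically by running through every combination of truth values. The sketch below is our own illustration (the helper `implies` is our naming, not part of the text's notation):

```python
# Check the three negation equivalences over every assignment of truth
# values to F and G. "implies" encodes the meaning of (F -> G).
from itertools import product

def implies(a, b):
    return (not a) or b

for F, G in product([True, False], repeat=2):
    # 1. not(F & G) has the same meaning as (not F or not G)
    assert (not (F and G)) == ((not F) or (not G))
    # 2. not(F or G) has the same meaning as (not F and not G)
    assert (not (F or G)) == ((not F) and (not G))
    # 3. not(F -> G) has the same meaning as (F and not G)
    assert (not implies(F, G)) == (F and (not G))
```

If the loop finishes without an assertion failing, each pair of formulas agrees on all four truth assignments.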
1.3.2
More on proofs
We continue developing our system of proof. A proof is a sequence of steps,
numbered, such that each step is either an assumption, or the result of applying
one of our rules of inference to earlier steps. We decorate each step on the
right and on the left. On the right, the decorations indicate the rule being used
and the numbers of earlier steps (if any) that put us in a position to apply
the rule. Order matters on the right. On the left, the decorations indicate the
assumptions used for this step. Order does not matter on the left. For a proof
of conclusion C from premises A1 , . . . , An , the last step must be C, and the
assumptions behind this step must be among A1 , . . . , An , nothing more.
We have four rules of inference so far: A, M P P , M T T , and DN .
1. A says F B F —from F , we can conclude F . For decorations, on the right,
we write just “A”, and on the left, we put just the number of the current
step.
2. M P P says F, (F → G) B G—from F and (F → G), we can conclude
G. For decorations, on the right, we put the rule M P P , followed by
two numbers, first the number of the step where we have F and then
the number of the step where we have (F → G). On the left, we list all
assumptions behind those two steps.
3. M T T says (F → G), ¬G B ¬F —from (F → G) and ¬G, we can conclude
¬F . For decorations, on the right, we put “M T T ”, followed by two numbers, first the number of the step where we have (F → G) and then the
number of the step where we have ¬G. On the left, we list all assumptions
behind those two steps.
4. DN is really two rules: (a) ¬¬F B F —from ¬¬F , we can conclude F , and
(b) F B ¬¬F —from F , we can conclude ¬¬F . For decorations, in case
(a), on the right, we put “DN ”, followed by the number of the step where
we have ¬¬F , and on the left, we put the assumptions behind that step.
In case (b), we put “DN ”, followed by the number of the step where we
have F , and on the left, we put the assumptions behind that step.
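Each of these rules is truth-preserving, which can be verified by exhaustive checking. The following sketch is our own illustration, separate from the formal proof system:

```python
# Verify that MPP, MTT, and DN never lead from true premises to a false
# conclusion, for all four assignments of truth values to F and G.
from itertools import product

def implies(a, b):
    return (not a) or b

for F, G in product([True, False], repeat=2):
    if F and implies(F, G):           # MPP: F, (F -> G) |- G
        assert G
    if implies(F, G) and (not G):     # MTT: (F -> G), not G |- not F
        assert not F
    assert (not (not F)) == F         # DN: F and not-not-F agree
```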
Example 1.3.1. Here is a proof showing that P, (P → ¬R), (Q → R) ` ¬Q.
1        1. P           A
2        2. (P → ¬R)    A
3        3. (Q → R)     A
1, 2     4. ¬R          MPP 1, 2
1, 2, 3  5. ¬Q          MTT 3, 4
Example 1.3.2. Here is a proof showing that (¬P → Q), ¬Q ` P .
1     1. (¬P → Q)   A
2     2. ¬Q         A
1, 2  3. ¬¬P        MTT 1, 2
1, 2  4. P          DN 3
Example 1.3.3. Here is a proof for (¬P → Q), (Q → R), ¬R ` P .
1        1. (¬P → Q)   A
2        2. (Q → R)    A
3        3. ¬R         A
2, 3     4. ¬Q         MTT 2, 3
1, 2, 3  5. ¬¬P        MTT 1, 4
1, 2, 3  6. P          DN 5
Example 1.3.4. What is wrong with the following proposed proof for
(P → ¬Q), Q ` ¬P ?
1     1. (P → ¬Q)   A
2     2. Q          A
1, 2  3. ¬P         MTT 1, 2
On line 3, M T T does not apply. We need a pair of earlier steps of the form
(F → G) and ¬G, where the current step is ¬F . We can fix the proof as follows.
1     1. (P → ¬Q)   A
2     2. Q          A
2     3. ¬¬Q        DN 2
1, 2  4. ¬P         MTT 1, 3
1.4 Proving implications
The next rule is for proving conditional statements, or implications. Recall that
when we write (F → G), we mean that if F is true, then G is also true. We are
not asserting that F is true. Moreover, if F is false, we make no claims about
the truth of G. When we argue that a conditional statement (F → G) is true,
it is natural to suppose that F is true and argue that G must then also be true.
The rule formalizes this.
5. Conditional Proof, abbreviated CP . The CP rule says that if, having
assumed F , we can arrive at a proof of G, then we may conclude (F → G),
without the assumption F . We may write F ` G B (F → G).
Earlier, we used in our proofs only assumptions that were actually given—
assumptions that we would allow on the last step of a proof. The strategy
associated with the CP rule calls for making an extra assumption, temporarily.
We get rid of the extra assumption when we actually apply the rule.
Strategy for using CP . To prove (F → G), using CP , we must do the
following.
1. Assume F . (To remind ourselves that this is an extra assumption, we use
the decoration A∗ , instead of A, on the right. On the left, we put ∗ on the
extra assumption.)
2. Prove G. This may take several steps. We may use the assumption F ,
although if we don’t need it, that is fine as well.
3. Apply CP to get the implication (F → G). At this point, we discharge
the extra assumption.
Decorations for CP : On the right, we give first the rule CP , then the number
of the step where we assumed F , and then the step where we arrived at G. On
the left, we give the assumptions behind G, apart from F . If we used F in
proving G, we get rid of it here.
Example 1.4.1. Here is a proof of (P → Q) ` (¬Q → ¬P ) .
1      1. (P → Q)     A
2∗     2. ¬Q          A∗ (2 is an extra assumption; aim for ¬P and use CP)
1, 2∗  3. ¬P          MTT 1, 2
1      4. (¬Q → ¬P)   CP 2, 3 (extra assumption removed from left)
Example 1.4.2. P ` (Q → P )
1∗  1. Q          A∗ (extra assumption)
2   2. P          A
2   3. (Q → P)    CP 1, 2 (extra assumption gone)
This example has the feature that the extra assumption Q is not actually used
in getting P . It does not appear on the left in Step 2.
In the next example, we prove something from no assumptions. The argument is complex, but if we apply the CP strategy consistently, we should be
able to get through it.
Example 1.4.3. ` ((P → Q) → ((Q → R) → (P → R)))
1∗          1. (P → Q)              A∗ (Primary Plan: prove ((Q → R) → (P → R)), then use CP to get what we want.)
2∗          2. (Q → R)              A∗ (Secondary Plan: prove (P → R), then use CP to get what we want for the Primary Plan.)
3∗          3. P                    A∗ (Tertiary Plan: prove R, then use CP to get what we want for the Secondary Plan.)
1∗, 3∗      4. Q                    MPP 3, 1
1∗, 2∗, 3∗  5. R                    MPP 4, 2 (This was the goal in the Tertiary Plan.)
1∗, 2∗      6. (P → R)              CP 3, 5 (This was the goal in the Secondary Plan.)
1∗          7. ((Q → R) → (P → R))  CP 2, 6 (This was the goal in the Primary Plan.)
            8. ((P → Q) → ((Q → R) → (P → R)))   CP 1, 7
We got rid of all three temporary assumptions, and we have proved the conclusion from nothing, as required.
The next example is so simple that it may look suspicious.
Example 1.4.4. ` (P → P )
1∗  1. P          A∗
    2. (P → P)    CP 1, 1
The proof has the odd feature that the two lines we refer to when we apply
the CP rule are the same. The rule doesn’t say they have to be different.
The next example has a little more to it.
Example 1.4.5. (P → ¬Q) ` (Q → ¬P )
1      1. (P → ¬Q)   A
2∗     2. Q          A∗
2∗     3. ¬¬Q        DN 2
1, 2∗  4. ¬P         MTT 1, 3
1      5. (Q → ¬P)   CP 2, 4
You have seen how the CP rule works formally. It occurs often in informal
everyday arguments.
Example 1.4.6. Suppose we want to argue for the following statement:
If N D wins the first football game of the season, then they will go on to win
the National Championship.
We might argue as follows:
Suppose N D wins the first game. That means the team has confidence, and
the newer players have already learned their positions. The fact that the newer
players have learned their positions shows that the new assistant coaches are
effective. If the assistant coaches are effective, then the team will only get better
as the season progresses. In that case, they would be sure to win the National
Championship. Therefore, if they win the first game, then they will win the
National Championship.
The use of CP in this argument is correct—you may find weaknesses elsewhere
in the argument.
Note: We should never make extra assumptions without a plan for getting rid
of them. If we are proving conclusion C from assumptions A1 , . . . , An , then for
the last step of the proof, where we get C, we must not use any assumptions
beyond A1 , . . . , An .
Exercises 1.4
Proof Problems. For each of the following, give a formal proof.
1. (P → ¬Q) ` (Q → ¬P )
2. (¬P → Q) ` (¬Q → P )
3. (¬P → ¬Q) ` (Q → P )
4. (P → Q), (Q → R) ` (P → R)
5. (P → (Q → R)) ` ((P → Q) → (P → R))
6. (P → (Q → (R → S))) ` (R → (P → (Q → S)))
7. (P → Q) ` ((Q → R) → (P → R))
8. P ` ((P → Q) → Q)
9. P ` ((¬(Q → R) → ¬P ) → (¬R → ¬Q))
Translation Problems
1. Translate the following argument, a version of the “Problem of Evil”,
into propositional logic. In your translation, let G—all-good, W —desires
world with no evil, P —all-powerful, C—can guarantee that there is no
evil, E—evil exists.
If God were all-good, He would want to create a world without
evil; while if He were all-powerful, He could create such a world.
Yet evil exists. Thus, God cannot be both all-good and all-powerful.
2. As stated, the argument is not valid—it is possible for the premises to be
true while the conclusion is false. There is a tacit (unstated) assumption
which would make the argument valid.
((W & C) → ¬E)
Translate this extra assumption into English.
3. For a valid argument, if you believe all of the premises, then you will
necessarily believe the conclusion. For this particular argument, think
about whether you believe the conclusion. If not, say which of the premises
you do not believe.
Further Problem. If you are interested in the Problem of Evil, Professor
Alvin Plantinga, from the Notre Dame Philosophy Department, is responsible
for an interesting analysis. Find out what Plantinga said, and write an account
of it, for extra credit.
1.5 More on CP , and new rules for conjunctions
Summary of first five rules
1. Assumption A
Statement: F B F
Decoration on right: A
Decoration on left: just the current step
2. Modus ponens M P P
Statement: F, (F → G) B G
Decorations on right (in order): M P P , step of F , step of (F → G)
Decorations on left: assumptions behind F and (F → G).
3. Modus tollens M T T
Statement: (F → G), ¬G B ¬F
Decorations on right (in order): M T T , step of (F → G), step of ¬G
Decorations on left: assumptions behind (F → G) and ¬G.
4. Double negation DN
(a) Statement: F B ¬¬F
Decorations on right: DN , step of F
Decorations on left: assumptions behind F
(b) Statement: ¬¬F B F
Decorations on right: DN , step of ¬¬F
Decorations on left: assumptions behind ¬¬F .
5. Conditional proof CP (used to prove statements of form (F → G))
Statement: If . . . , F ` G, then . . . ` (F → G).
Decorations on right (in order): CP , step where F is assumed, step of G.
Decorations on left: assumptions behind G, apart from F . Wherever we write
in the left column the step in which F is assumed, we write this step with a ∗
decoration.
We will do some more examples using CP , and then discuss two new rules
for “conjunctions”—statements of the form (F & G).
1.5.1 More on Conditional Proof
Recall the strategy for proving an implication (F → G), using CP .
1. Assume F .
2. Prove G (possibly using F ).
3. Apply CP to obtain (F → G), dropping the extra assumption F (if used).
Before we can apply CP , we must have steps 1 and 2.
Example 1.5.1. (P → ¬Q), (R → Q) ` (P → ¬R)
1         1. (P → ¬Q)   A
2         2. (R → Q)    A
3∗        3. P          A∗
1, 3∗     4. ¬Q         MPP 3, 1
1, 2, 3∗  5. ¬R         MTT 2, 4
1, 2      6. (P → ¬R)   CP 3, 5
Example 1.5.2. ` ((P → ¬Q) → (Q → ¬P ))
1∗      1. (P → ¬Q)                A∗
2∗      2. Q                       A∗
2∗      3. ¬¬Q                     DN 2
1∗, 2∗  4. ¬P                      MTT 1, 3
1∗      5. (Q → ¬P)                CP 2, 4
        6. ((P → ¬Q) → (Q → ¬P))   CP 1, 5

1.5.2 Rules for conjunctions
The next two rules involve &. One rule lets us use a conjunction, and the other
lets us prove a conjunction.
Rule 6. & Elimination (&E). This rule lets us use a statement of the form
(F & G). Like DN , it is two rules in one. Here are the statements.
(a) From (F & G), we may conclude F . As a short way of stating this rule,
we write (F & G) B F .
(b) From (F & G), we may conclude G. We write (F & G) B G.
Decorations for &E. The decorations are natural. On the right, we give the
rule &E and then the number of the step where we have (F & G). On the left,
we give the assumptions behind that step.
Rule 7. & Introduction (&I). This rule is used to prove a statement of
the form (F & G). The rule says that if we have F and G, we may conclude
(F & G). We write F, G B (F & G). The decorations are natural. On the right,
we give the rule &I, then the number of the step where we have F , and then
the number of the step where we have G. On the left, we give the assumptions
behind those two steps.
We begin with a basic example.
Example 1.5.3. (P & Q) ` (Q & P )
1  1. (P & Q)   A
1  2. P         &E 1
1  3. Q         &E 1
1  4. (Q & P)   &I 3, 2
Example 1.5.4. Prove (P & Q), (Q → R) ` (P & R)
1     1. (P & Q)   A
2     2. (Q → R)   A
1     3. P         &E 1
1     4. Q         &E 1
1, 2  5. R         MPP 4, 2
1, 2  6. (P & R)   &I 3, 5
Let us look at more examples using these rules.
Example 1.5.5. P, (P → Q), (P → R) ` (Q & R)
1        1. P         A
2        2. (P → Q)   A
3        3. (P → R)   A
1, 2     4. Q         MPP 1, 2
1, 3     5. R         MPP 1, 3
1, 2, 3  6. (Q & R)   &I 4, 5
Example 1.5.6. ` (P → (Q → (P & Q)))
1∗      1. P                      A∗ (extra assumption)
2∗      2. Q                      A∗ (extra assumption)
1∗, 2∗  3. (P & Q)                &I 1, 2
1∗      4. (Q → (P & Q))          CP 2, 3
        5. (P → (Q → (P & Q)))    CP 1, 4
Example 1.5.7. (P → R), (S → ¬Q) ` ((P & Q) → (R &¬S))
1         1. (P → R)                 A
2∗        2. (P & Q)                 A∗
2∗        3. P                       &E 2
1, 2∗     4. R                       MPP 3, 1
5         5. (S → ¬Q)                A
2∗        6. Q                       &E 2
2∗        7. ¬¬Q                     DN 6
2∗, 5     8. ¬S                      MTT 5, 7
1, 2∗, 5  9. (R & ¬S)                &I 4, 8
1, 5      10. ((P & Q) → (R & ¬S))   CP 2, 9
Example 1.5.8. (G → W ), (P → C), ((W & C) → ¬E), E ` ¬(G & P )
It may take some experimenting to arrive at a workable plan for this proof.
The idea in the proof given below is to prove ((G & P ) → ¬E), using the
assumptions. Having done this, we can finish in two more steps, using double
negation and finally MTT.
1            1. (G → W)             A
2            2. (P → C)             A
3            3. ((W & C) → ¬E)      A
4            4. E                   A
4            5. ¬¬E                 DN 4
6∗           6. (G & P)             A∗
6∗           7. G                   &E 6
6∗           8. P                   &E 6
1, 6∗        9. W                   MPP 7, 1
2, 6∗        10. C                  MPP 8, 2
1, 2, 6∗     11. (W & C)            &I 9, 10
1, 2, 3, 6∗  12. ¬E                 MPP 11, 3
1, 2, 3      13. ((G & P) → ¬E)     CP 6, 12
1, 2, 3, 4   14. ¬(G & P)           MTT 13, 5
Exercises 1.5
Proof problems. For each of the following, give a proof.
1. P ` (Q → (P & Q))
2. ((P & R) → Q), P ` (R → Q)
3. (P & (Q & R)) ` (Q & (P & R))
4. ((P → Q) & (P → R)) ` (P → (Q & R))
5. (P → Q), (R → S) ` ((P & R) → (Q & S))
6. (P → (Q & R)) ` ((P → Q) & (P → R))
7. (P → (Q → R)), ((P & S) → T ), (R → S) ` ((P & Q) → T )
Thinking about the intended meaning of propositional formulas
1. Explain, in words, why ((P & Q) → P ) is always true.
2. Explain why ((P ∨ Q) → P ) need not always be true. Give an example,
assigning meanings to P and to Q such that (P ∨ Q) is true and P is false.
3. Explain why (P → Q) may be true when (P ↔ Q) is false. Recall that
(P ↔ Q) should be true just when ((P → Q) & (Q → P )) is true. Give
an example, assigning meanings to P and to Q such that (P → Q) is true
and (Q → P ) is false.
1.6 Rules for disjunctions
The first new rule, ∨I, tells us how to prove a disjunction. The second new rule,
∨E, lets us use a disjunction to prove something else.
Rule 8: ∨ Introduction (∨I). This rule is two in one. Here are the statements.
(a) From F , we may conclude (F ∨ G). We write F B (F ∨ G).
(b) From G, we may conclude (F ∨ G). We write G B (F ∨ G).
Decorations for ∨I:
The decorations are straightforward. On the right, we write ∨I and then the
step where we have F for (a), or G for (b). On the left, we give the assumptions
behind F for (a), or those behind G for (b).
Example 1.6.1. (P & Q) ` (P ∨ Q)
1  1. (P & Q)   A
1  2. P         &E 1
1  3. (P ∨ Q)   ∨I 2
Rule 9: ∨ Elimination (∨E). This rule lets us use the information (F ∨ G) to
prove some H. The rule says that if we have (F ∨ G), (F → H), and (G → H),
then we may conclude H. We may write (F ∨ G), (F → H), (G → H) B H.
Strategy for using ∨E—proof by cases
Suppose we have (F ∨ G), and we want to prove H. In order to use ∨E, we
must first show (F → H) and (G → H) using CP.
1. Get (F ∨ G) in some way.
2. Case 1: Prove (F → H) using CP. To do this:
(a) Assume F . This is an extra assumption.
(b) Prove H, possibly using F .
(c) Use CP to conclude (F → H). (This step cancels our assumption of
F .)
3. Case 2: Prove (G → H) using CP.
(a) Assume G. This is an extra assumption.
(b) Prove H, possibly using G.
(c) Use CP to conclude (G → H). (This step cancels our assumption of
G.)
4. Apply ∨E to get H.
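Semantically, the rule is safe: whenever (F ∨ G), (F → H), and (G → H) are all true, H must be true. A quick exhaustive check, as our own Python sketch:

```python
# Check the soundness of vE: in every row of the truth table where all
# three premises hold, the conclusion H holds as well.
from itertools import product

def implies(a, b):
    return (not a) or b

for F, G, H in product([True, False], repeat=3):
    if (F or G) and implies(F, H) and implies(G, H):
        assert H
```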
Decorations for ∨E
On the right, we give the rule ∨E and then the steps where we have first,
(F ∨ G), second, (F → H), and third, (G → H). On the left, we give the
assumptions behind (F ∨ G), (F → H), and (G → H).
As when we worked with &, we begin with a basic example.
Example 1.6.2. (P ∨ Q) ` (Q ∨ P )
1   1. (P ∨ Q)          A
2∗  2. P                A∗
2∗  3. (Q ∨ P)          ∨I 2
    4. (P → (Q ∨ P))    CP 2, 3
5∗  5. Q                A∗
5∗  6. (Q ∨ P)          ∨I 5
    7. (Q → (Q ∨ P))    CP 5, 6
1   8. (Q ∨ P)          ∨E 1, 4, 7
This rule is used in informal arguments involving proof by cases.
Example 1.6.3 (Informal Argument). To convince you that doing homework
as a group benefits you, I might argue by cases as follows. Either you see how
to get started on the homework or you don’t. First, I’ll show that if you don’t
see how to get started, then doing your work in a group will help you. Suppose
you don’t see how to get started. Other members of your group may be able to
figure out the first problem and explain the solution in a way that enables you to
solve later problems. Second, I’ll argue that even if you see how to get started,
then working in a group will benefit you. Suppose you do see how to get started.
The process of explaining the solution of the first problem to other members
of your group will deepen your understanding, putting the basic concepts into
your memory in a more organized, permanent way. In either case, you benefit.
Example 1.6.4 (Informal Argument). To argue that a certain statement is
true for all natural numbers n, we might give one argument proving if n is even,
then the statement holds, and another argument proving if n is odd, then the
statement holds as well. In view of the fact that every natural number is even
or odd, this is enough to show the statement is true for all natural numbers.
We now return to some formal proofs using ∨E.
Example 1.6.5. ((P & Q) ∨ (R & P )) ` P
1   1. ((P & Q) ∨ (R & P))   A
2∗  2. (P & Q)               A∗ (Assume (P & Q) to prove P.)
2∗  3. P                     &E 2
    4. ((P & Q) → P)         CP 2, 3
5∗  5. (R & P)               A∗ (Assume (R & P) to prove P.)
5∗  6. P                     &E 5
    7. ((R & P) → P)         CP 5, 6
1   8. P                     ∨E 1, 4, 7
Example 1.6.6. (P & (Q ∨ R)) ` ((P & Q) ∨ (P & R))
1      1. (P & (Q ∨ R))                 A
1      2. P                             &E 1
1      3. (Q ∨ R)                       &E 1 (Suggests proof by cases.)
4∗     4. Q                             A∗ (Start of Case 1)
1, 4∗  5. (P & Q)                       &I 2, 4
1, 4∗  6. ((P & Q) ∨ (P & R))           ∨I 5 (End of Case 1)
1      7. (Q → ((P & Q) ∨ (P & R)))     CP 4, 6
8∗     8. R                             A∗ (Start of Case 2)
1, 8∗  9. (P & R)                       &I 2, 8
1, 8∗  10. ((P & Q) ∨ (P & R))          ∨I 9 (End of Case 2)
1      11. (R → ((P & Q) ∨ (P & R)))    CP 8, 10
1      12. ((P & Q) ∨ (P & R))          ∨E 3, 7, 11
In the next example, one of the cases is an assumption.
Example 1.6.7. (P ∨ Q), (P → Q) ` Q
1
2
3∗
1.
2.
3.
4.
1, 2 6.
(P ∨ Q)
(P → Q)
Q
(Q → Q)
Q
A
A (This is Case 1)
A∗ (Start of Case 2)
CP 3, 3 (End of Case 2)
∨E 1, 2, 4
Exercises 1.6
For each of the following, give a proof.
1. Q ` (P ∨ Q)
2. (P & Q) ` (P ∨ Q)
3. ((P → Q) & (Q → R)) ` ((P ∨ Q) → R)
4. (P → Q), (R → S) ` ((P ∨ R) → (Q ∨ S))
5. (P ∨ Q), (P → R), (¬S → ¬Q) ` (R ∨ S)
Extra Problem
Translate the following argument, letting J—justice in this life, N —after-life
needed, R—God is just, P —after-life provided
If there is justice in this life, no after-life is necessary. On the other
hand, if there is no justice in this life, then one has no reason to
believe God is just. However, if one has no reason to believe that,
what reason do we have to think He’ll provide us with an after-life?
So, either no after-life is needed or else we have no reason to think
God will provide one.
1.7 Completing the set of rules for propositional logic
So far, we have 9 rules of inference for propositional logic:
A, M P P, M T T, DN, CP, &I, &E, ∨I, and ∨ E .
There is one more rule. Like ∨E, this rule has an implication as a premise.
Since we’ll need to have a particular implication to apply this rule, our strategy is
to first prove this implication by making an extra assumption, and then getting
rid of it using CP. Like modus ponens and modus tollens, the rule has a Latin
name.
Rule 10: Reductio ad absurdum (RAA)
We use RAA to prove statements of the form ¬F . The rule says that if we have
(F → (G & ¬G)), then we may conclude ¬F . We write (F → (G & ¬G)) B ¬F.
In other words, RAA says that if F implies (G & ¬G), an absurd statement,
then we have ¬F .
For decorations, on the right, we give the rule RAA and then the step where
we have (F → (G & ¬G)). Here is the strategy for using RAA.
Strategy for using RAA to prove ¬F —proof by contradiction.
1. Assume F .
2. Prove an explicit contradiction (G & ¬G) (probably using F ).
3. Use CP to obtain (F → (G & ¬G)) (canceling the extra assumption F .)
4. Apply RAA to get ¬F .
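RAA is also truth-preserving: (G & ¬G) is false on every row of the truth table, so (F → (G & ¬G)) can only be true when F is false. A brief check, as our own Python sketch:

```python
# (G & not G) is always false, so if F implies it, F must be false.
from itertools import product

def implies(a, b):
    return (not a) or b

for F, G in product([True, False], repeat=2):
    assert not (G and (not G))            # the absurdity is never true
    if implies(F, G and (not G)):
        assert not F                      # RAA's conclusion holds
```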
This strategy of proof by contradiction is used in informal arguments in
English. Here is an argument of Lucretius, a Roman poet and philosopher in
the century before Christ, showing that the universe is infinite.
Example 1.7.1 (Informal Example of RAA).
Suppose the universe is finite. Go to the boundary. From there,
throw a dart straight out. Now, the dart is in the universe, but it
is also not in the universe. This is a contradiction. Therefore, the
universe must really be infinite.
When we apply RAA in a formal proof, we have something like the following.
7∗         7. F                  A∗
           ...
1, 7∗, 8   11. (G & ¬G)
1, 8       12. (F → (G & ¬G))    CP 7, 11
1, 8       13. ¬F                RAA 12
On the line where we apply RAA, we decorate on the right by giving RAA
and then the line where we got the implication (F → (G & ¬G)). On the left,
we give the assumptions behind (F → (G & ¬G)).
Example 1.7.2. Show P, ¬R ` ¬(P → R)
1         1. P                        A
2         2. ¬R                       A
3∗        3. (P → R)                  A∗ (Get absurd implication and use RAA.)
1, 3∗     4. R                        MPP 1, 3
1, 2, 3∗  5. (R & ¬R)                 &I 4, 2
1, 2      6. ((P → R) → (R & ¬R))     CP 3, 5
1, 2      7. ¬(P → R)                 RAA 6
It is possible to give a different proof, using M T T instead of RAA. Using
P , we show that ((P → R) → R). Combining this with ¬R, we get ¬(P → R).
Example 1.7.3. Show ¬(P ∨ Q) ` ¬P .
1      1. ¬(P ∨ Q)                      A
2∗     2. P                             A∗
2∗     3. (P ∨ Q)                       ∨I 2
1, 2∗  4. ((P ∨ Q) & ¬(P ∨ Q))          &I 3, 1
1      5. (P → ((P ∨ Q) & ¬(P ∨ Q)))    CP 2, 4
1      6. ¬P                            RAA 5
Again it is possible to give a proof using M T T instead of RAA. We show
that (P → (P ∨ Q)). Combining this with ¬(P ∨ Q), we get ¬P .
Example 1.7.4. Show ¬P ` (P → R)
1      1. ¬P                 A
2∗     2. P                  A∗ (Prove R and use CP.)
3∗     3. ¬R                 A∗ (Prove absurd implication to get ¬¬R.)
1, 2∗  4. (P & ¬P)           &I 2, 1
1, 2∗  5. (¬R → (P & ¬P))    CP 3, 4
1, 2∗  6. ¬¬R                RAA 5
1, 2∗  7. R                  DN 6
1      8. (P → R)            CP 2, 7
The next example is very short.
Example 1.7.5. ` ¬(P &¬P )
1∗  1. (P & ¬P)                   A∗
    2. ((P & ¬P) → (P & ¬P))      CP 1, 1
    3. ¬(P & ¬P)                  RAA 2
Exercises 1.7
Prove the following.
1. (¬P → P ) ` P
2. (P & Q) ` ¬(P → ¬Q)
3. ¬(P → ¬Q) ` Q
4. ¬(P → ¬Q) ` P
5. (P ∨ ¬Q) ` (Q → P )
Extra Problem. Look for an interesting example of an argument from one of
your other courses (philosophy, theology, political science), or from your reading (newspapers, books). Choose an argument that translates successfully into
propositional logic. Write the argument in English, and then give a translation,
saying what propositional variables you are using and what they stand for.
1.8 More Examples
We have 10 rules of inference: A, M P P , M T T , DN , CP , &I, &E, ∨I, ∨E, and
RAA. This set of rules is “sound”, in the sense that if we can prove a conclusion
C from assumptions A1 , . . . , An , then whenever the assumptions are all true,
the conclusion is also true. We are not proving things that we do not want to
prove. The set of rules is also “complete”, in the sense that if C is true whenever
the assumptions are true, then there is a proof using only the rules we have.
We do not need more rules to prove what we want to prove. Thus, for any valid
argument expressed in the language of propositional logic (one whose conclusion
is true whenever its assumptions are true), there is a proof. We will say more about truth soon. The
rule CP (used to prove implications) involves making an extra assumption and
discharging this assumption when we actually apply the rule. Before we apply
rules ∨E and RAA, we often must first prove certain implications using CP .
CP: We use CP to prove statements of the form (F → G). Recall the strategy.
1. Assume F .
2. Prove G, possibly using F .
3. Apply CP to get (F → G), discharging the extra assumption F .
∨E: We use ∨E when we want to use a statement of the form (F ∨ G) to prove
some H. Recall the strategy—proof by cases.
1. Get (F ∨ G) somehow.
2. Assume F . (Start of Case 1)
3. Prove H, possibly using F .
4. Obtain (F → H) using CP . (End of Case 1)
5. Assume G. (Start of Case 2)
6. Prove H, possibly using G.
7. Obtain (G → H) using CP . (End of Case 2)
8. Apply ∨E to get H.
RAA: We use RAA to prove statements of the form ¬F . Recall the strategy—
proof by contradiction.
1. Assume F .
2. Prove a statement of the form (G & ¬G), possibly using F .
3. Obtain (F → (G & ¬G)) using CP .
4. Apply RAA to get ¬F .
Remark: Whenever we make an extra assumption, we should have in mind a
strategy telling us why we are making this particular assumption, what our next goal is,
and how we will eventually discharge the extra assumption using CP .
Let us consider some more examples, using these rules.
Example 1.8.1. Show P ` (Q → P ).
1   1. P          A
2∗  2. Q          A∗
1   3. (Q → P)    CP 2, 1
To apply CP , we needed to assume Q, and we needed to get P somehow—we
are not required to actually use the extra assumption.
Example 1.8.2. Show P ` (¬P → ¬Q)
1      1. P                 A
2∗     2. ¬P                A∗ (Prove ¬Q and use CP to get the implication.)
3∗     3. Q                 A∗ (Use RAA to get ¬Q.)
1, 2∗  4. (P & ¬P)          &I 1, 2
1, 2∗  5. (Q → (P & ¬P))    CP 3, 4
1, 2∗  6. ¬Q                RAA 5
1      7. (¬P → ¬Q)         CP 2, 6
Off-label use of RAA. We think of using RAA when we want to prove a
negation. Occasionally, we want to prove a statement F that is not a negation,
and we manage to use RAA anyway. What we do is use RAA to prove ¬¬F ,
which is a negation, and then use DN to get F . We should think about using
RAA in this way when our other strategies do not apply, and we do not see how
to get anything from our hypotheses.
Example 1.8.3. Show (P ∨ Q), ¬P ` Q.
1      1. (P ∨ Q)            A
2      2. ¬P                 A
3∗     3. P                  A∗ (Start of Case 1)
4∗     4. ¬Q                 A∗ (We will use RAA to get ¬¬Q.)
2, 3∗  5. (P & ¬P)           &I 3, 2
2, 3∗  6. (¬Q → (P & ¬P))    CP 4, 5
2, 3∗  7. ¬¬Q                RAA 6
2, 3∗  8. Q                  DN 7
2      9. (P → Q)            CP 3, 8 (End of Case 1)
10∗    10. Q                 A∗ (Start of Case 2)
       11. (Q → Q)           CP 10, 10 (End of Case 2)
1, 2   12. Q                 ∨E 1, 9, 11
Example 1.8.4. Show ¬(P → Q) ` P
1       1. ¬(P → Q)                          A
2∗      2. ¬P                                A∗ (We will use RAA to get ¬¬P.)
3∗      3. P                                 A∗ (We prove (P → Q).)
4∗      4. ¬Q                                A∗ (We will use RAA to get ¬¬Q.)
2∗, 3∗  5. (P & ¬P)                          &I 3, 2
2∗, 3∗  6. (¬Q → (P & ¬P))                   CP 4, 5
2∗, 3∗  7. ¬¬Q                               RAA 6
2∗, 3∗  8. Q                                 DN 7
2∗      9. (P → Q)                           CP 3, 8
1, 2∗   10. ((P → Q) & ¬(P → Q))             &I 9, 1
1       11. (¬P → ((P → Q) & ¬(P → Q)))      CP 2, 10
1       12. ¬¬P                              RAA 11
1       13. P                                DN 12
Here are some partial proofs.
Example 1.8.5. Complete the proof of (¬P ∨ Q) ` (P → Q) by filling in the
decorations on the right.
1     1. (¬P ∨ Q)
2     2. P
3     3. ¬P
4     4. ¬Q
2, 3  5. (P & ¬P)
2, 3  6. (¬Q → (P & ¬P))
2, 3  7. ¬¬Q
2, 3  8. Q
2     9. (¬P → Q)
10    10. Q
      11. (Q → Q)
1, 2  12. Q
1     13. (P → Q)
Example 1.8.6. Complete the following proof of ` (P ∨ ¬P ) by filling in
decorations on the left.
1. ¬(P ∨ ¬P)                                  A∗
2. P                                          A∗
3. (P ∨ ¬P)                                   ∨I 2
4. ((P ∨ ¬P) & ¬(P ∨ ¬P))                     &I 3, 1
5. (P → ((P ∨ ¬P) & ¬(P ∨ ¬P)))               CP 2, 4
6. ¬P                                         RAA 5
7. (P ∨ ¬P)                                   ∨I 6
8. ((P ∨ ¬P) & ¬(P ∨ ¬P))                     &I 7, 1
9. (¬(P ∨ ¬P) → ((P ∨ ¬P) & ¬(P ∨ ¬P)))       CP 1, 8
10. ¬¬(P ∨ ¬P)                                RAA 9
11. (P ∨ ¬P)                                  DN 10
You need to make sure that you understand exactly how each rule works,
with its decorations. For problems like the two above, that is enough. In writing
complete proofs, there is the further difficulty of deciding which steps to take,
what strategies to apply, and when to make extra assumptions.
1.9 Summary of all rules
What have we done so far in propositional logic? What are you expected to
know?
Definition of wff. You should know what a well-formed formula of propositional logic is, and you should be able to show that something is a well-formed
formula by giving a formation sequence.
Translation. You should be able to translate single statements, or whole arguments from English into propositional logic, and you should also be able to
translate from propositional logic to English.
Example: Consider the following:
Tom embezzled money from ABAG, and if he is caught, then he will go to jail.
Let P stand for “Tom embezzled money from ABAG”, let Q stand for “He is
caught”, and let R stand for “He goes to jail”. Which is the correct translation?
(P & (Q → R)) or ((P & Q) → R)
What is the difference? The first says that P holds and, in addition, (Q → R)
holds. The second says that if P and Q both hold, then so does R. The first is
correct.
Could we translate (¬(P & Q) → ¬R)?
If it is not the case that both Tom embezzled money from ABAG and he is
caught, then he will not go to jail.
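The difference between the two candidate translations is not just stylistic: they disagree on every row of the truth table where P is false. A small check of our own:

```python
# Find the truth assignments on which the two translations disagree.
from itertools import product

def implies(a, b):
    return (not a) or b

differ = [(P, Q, R)
          for P, Q, R in product([True, False], repeat=3)
          if (P and implies(Q, R)) != implies(P and Q, R)]

# They disagree exactly when P is false: (P & (Q -> R)) is then false,
# while ((P & Q) -> R) is vacuously true.
assert len(differ) == 4
assert all(P is False for (P, Q, R) in differ)
```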
Proofs. You should have a good command of our proof system. You need to
know all of the rules, with their decorations. You should understand what each
rule is intended to do, and know the strategy associated with the rule. Know
what (if anything) must be done before you can apply the rule.
Strategies
1. To prove a conjunction, the usual strategy is to prove both parts and use
&I.
2. To prove a disjunction, the usual strategy is to prove one part and use ∨I.
3. To prove an implication, the usual strategy is to assume the part on the
left, prove the part on the right, and then apply CP to get the implication
and discharge the extra assumption.
4. To prove a negation ¬F , one strategy is proof by contradiction. You assume F , prove a contradiction of the form (G & ¬G), obtain (F → (G & ¬G))
using CP (discharging the extra assumption) and then apply RAA to get
¬F .
5. If you have an implication (F → G), M P P and M T T provide different
ways of using it. For M P P , you need F and you get G. For M T T , you
need ¬G, and you get ¬F .
6. If you have a conjunction (F & G), then &E lets you get the separate
conjuncts F , G.
7. If you have a disjunction, (F ∨ G), ∨E lets you use this statement to
prove some H—giving a proof by cases. The strategy is as follows. You
assume F , prove H, and get (F → H) using CP . Then you assume G
and prove H, and get (G → H) using CP . Finally, apply ∨E to get H.
It is important that you prove the same H in both cases.
Exercises 1.9
For each of the following, give a complete proof.
1. P, (P → Q), (Q → R) ` R.
2. (P → Q), (Q → R), ¬R ` ¬P
3. (P → Q), (Q → R) ` (P → R)
4. (P & Q), (R & S) ` (Q & R)
5. ` ((P & Q) → (R → Q))
6. (P → Q) ` (¬Q → ¬P )
7. (P → ¬Q) ` (Q → ¬P )
8. (P → ¬P ) ` ¬P
9. (P → S) ` ((P & Q) → S)
10. ` (P ∨ ¬P )
Extra Problem: Complete the proof of (P → (Q ∨ R)), P, ¬Q ` R, by filling
in the decorations on the left.
1. (P → (Q ∨ R))      A
2. P                  A
3. ¬Q                 A
4. (Q ∨ R)            MPP 2, 1
5. Q                  A∗
6. ¬R                 A∗
7. (Q & ¬Q)           &I 5, 3
8. (¬R → (Q & ¬Q))    CP 6, 7
9. ¬¬R                RAA 8
10. R                 DN 9
11. (Q → R)           CP 5, 10
12. R                 A∗
13. (R → R)           CP 12, 12
14. R                 ∨E 4, 11, 13
1.10 Bi-conditional, and the use of proofs in analyzing arguments

1.10.1 Bi-conditional
There are no special rules for the symbol ↔. We justify this by saying that ↔ is
really an abbreviation. We feel free to use it in writing formulas. However, when
we are doing proofs, we treat (F ↔ G) as identical to ((F → G) & (G → F )).
If we are given (F ↔ G), then we may apply &E just as we would given
(F → G) & (G → F ). If we are asked to prove (F ↔ G), then we prove
(F → G) & (G → F ) and stop.
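Treating (F ↔ G) as an abbreviation is harmless semantically: it is true exactly when F and G have the same truth value, which is also when ((F → G) & (G → F )) is true. A check of our own in Python:

```python
# (F <-> G) and ((F -> G) & (G -> F)) agree on all four truth assignments.
from itertools import product

def implies(a, b):
    return (not a) or b

for F, G in product([True, False], repeat=2):
    assert (F == G) == (implies(F, G) and implies(G, F))
```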
Example 1.10.1. P, (P → Q) ` (P ↔ Q) = ((P → Q) & (Q → P ))
1     1. (P → Q)                A
2∗    2. Q                      A∗
3     3. P                      A
3     4. (Q → P)                CP 2, 3
1, 3  5. ((P → Q) & (Q → P))    &I 1, 4
Example 1.10.2. (P ↔ Q), ¬P ` ¬Q
1     1. (P ↔ Q) = ((P → Q) & (Q → P))   A
2     2. ¬P                              A
1     3. (Q → P)                         &E 1
1, 2  4. ¬Q                              MTT 3, 2
1.10.2 Soundness and Completeness Properties
Our proof system has two important properties. Some notation makes these
properties easier to state.
Notation
1. F1 , . . . , Fn ` G means there is a proof of G from assumptions F1 , . . . , Fn .
2. F1 , . . . , Fn |= G means that whenever F1 , . . . , Fn are all true, G is also true.
Here are the two properties of our proof system for propositional logic.
I. Soundness Property: If F1 , . . . , Fn ` G, then F1 , . . . , Fn |= G.
Soundness says if we have a proof of a certain conclusion from some premises,
then whenever the premises are true, the conclusion is also true. (Our set of
rules is “sound” in that if we use true premises, then any conclusions we can
prove are also true.)
II. Completeness Property: If F1 , . . . , Fn |= G, then F1 , . . . , Fn ` G.
An argument is valid if whenever the premises are true, the conclusion must
also be true. Completeness says that if the argument we are considering is valid,
then there is a proof. (Our set of rules is “complete” in that it lets us prove
everything that we want to prove—we do not need to add more rules.)
If we have successfully translated an argument from English into propositional
logic, and we can find a proof of the conclusion from the premises, then
Soundness tells us that the argument is valid. Completeness says that if the
argument is valid, then we should be able to find a proof.
So far, we have only an intuitive notion of truth. Soon we will have a
precise definition. Then we will justify the claim that our proof system has the
properties of Soundness and Completeness. For now, you should learn what the
properties say, and which is which. Also, make sure you can explain the choice
of words “Soundness” and “Completeness”.
1.10.3 Analyzing arguments
The main goal in this course is to develop formal tools for analyzing arguments.
We have two tools so far: translation and proof. We may be able to translate an
argument into propositional logic. This makes the form transparent, and forces
us to resolve ambiguities. Having translated the argument, we may be able to
prove the conclusion from the premises. In analyzing arguments, we want to
know whether the argument is valid. This means that whenever the premises
are all true, the conclusion is also true. Here is a sample argument.
David will study either Italian or Mathematics. He will not study
both. If he studies Mathematics, he will get an A in that course.
Therefore, if he does not get an A in Mathematics, he must have
been studying Italian.
Letting I—studies Italian, M —studies Mathematics, A—gets A in Mathematics, we have the following translation.
Premises: (I ∨ M ), ¬(I & M ), (M → A)
Conclusion: (¬A → I)
Does the argument look valid? If you believe that it is valid, then there
should be a proof. The Completeness property tells us this. In fact, there is a
proof. We show (I ∨ M ), ¬(I & M ), (M → A) ` (¬A → I).
1            1. (I ∨ M)               A
2            2. ¬(I & M)              A
3            3. (M → A)               A
4∗           4. ¬A                    A∗
3, 4∗        5. ¬M                    MTT 3, 4
6∗           6. I                     A∗
             7. (I → I)               CP 6, 6
8∗           8. M                     A∗
9∗           9. ¬I                    A∗
3, 4∗, 8∗    10. (M & ¬M)             &I 8, 5
3, 4∗, 8∗    11. (¬I → (M & ¬M))      CP 9, 10
3, 4∗, 8∗    12. ¬¬I                  RAA 11
3, 4∗, 8∗    13. I                    DN 12
3, 4∗        14. (M → I)              CP 8, 13
1, 3, 4∗     15. I                    ∨E 1, 7, 14
1, 3         16. (¬A → I)             CP 4, 15
We have a proof—we did not need the second premise. Since there is a proof,
the argument should be valid. The Soundness Property tells us this.
Exercises 1.10
A. Prove the following. (Treat (F ↔ G) as an abbreviation for the formula
((F → G) & (G → F )).)
1.
2.
3.
4.
Q, (P ↔ Q) ` P
(P → Q), (Q → P ) ` (P ↔ Q)
(¬P ↔ ¬Q) ` (P ↔ Q)
((P ∨ Q) ↔ P ) ` (Q → P )
B. Consider the following argument.
Premises:
(1) If the river floods, then the wheat crop will be destroyed.
(2) If the wheat crop is destroyed, then our community will be bankrupt.
(3) The river will flood if there is an early thaw.
Conclusion: If there is an early thaw, then our community will be bankrupt.
1. Give a translation of this argument, letting P —river floods, Q—wheat crop
destroyed, R—community bankrupt, S—early thaw.
2. Give a proof of the conclusion from the premises—if your translation is
correct, this should be possible. (You may find that not all of the premises are
actually used.)
C. Extra problems. Here are some more challenging proof problems.
1. ` ¬(P & ¬P )
2. ¬(P & Q) ` (¬P ∨ ¬Q)
3. (¬P ∨ ¬Q) ` ¬(P & Q)
4. ¬(P ∨ Q) ` (¬P & ¬Q)
5. (¬P & ¬Q) ` ¬(P ∨ Q)
6. (P → Q) ` (¬P ∨ Q)
7. ¬(P → Q) ` (P & ¬Q)
8. (P & ¬Q) ` ¬(P → Q)
1.11 Truth
1.11.1 Definition of Truth
We are about to define truth for propositional logic. That is, we will give some
rules that allow us to calculate the truth-value for a formula, given truth values
for the basic propositional variables. These rules ignore all of the subtleties that
are present in English. They have the virtue of being precise.
Rules for computing truth-values
1. ¬G is true if and only if G is false.
2. (G & H) is true if and only if G and H are both true.
3. (G ∨ H) is true if and only if at least one of G, H is true.
4. (G → H) is false if G is true and H is false, and true otherwise.
We are treating ↔ as an abbreviation, not as a basic connective. Nonetheless, we could add a clause for ↔.
5. (G ↔ H) is true if and only if G and H have the same truth value; i.e.,
both are true or both are false.
Recall that a formation sequence for a formula G is a finite sequence of
wf f ’s, ending in G, such that each step is gotten by applying one of the clauses
in the definition of wf f . To calculate the truth value of a formula G, given
truth values of the propositional variables, we work our way along a formation
sequence, applying the rules above. Earlier, we wrote our formation sequences
vertically, with numbered steps, and we indicated exactly how each step was
obtained. Here, we omit the decorations, although we will make sure that our
formation sequences could be decorated properly. We will put the steps all on
one horizontal line, and we will start by giving all of the propositional variables.
Sample formation sequence: For G = (P ∨ (¬Q & ¬P )), we have the following formation sequence, written in the new way.
P    Q    ¬Q    ¬P    (¬Q & ¬P)    (P ∨ (¬Q & ¬P))
Let us calculate the truth value for (P ∨ (¬Q & ¬P )), in the case where P is
false and Q is true. We see that ¬Q is false and ¬P is true. Then (¬Q & ¬P )
is false. Finally, (P ∨ (¬Q & ¬P )) is false.
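The calculation just performed is completely mechanical, so a short program can carry it out. Here is a possible Python evaluator (our own sketch; the nested-tuple encoding of formulas is an assumption of ours, not notation from the text):

```python
def truth_value(formula, assignment):
    """Compute a formula's truth value by the rules above.

    Formulas are nested tuples, e.g. ('or', 'P', ('and', ('not', 'Q'),
    ('not', 'P'))) for (P v (~Q & ~P)); variables are strings, and
    `assignment` maps each variable to True or False.
    """
    if isinstance(formula, str):                      # a propositional variable
        return assignment[formula]
    op, *parts = formula
    values = [truth_value(p, assignment) for p in parts]
    if op == 'not':                                   # clause 1
        return not values[0]
    if op == 'and':                                   # clause 2
        return values[0] and values[1]
    if op == 'or':                                    # clause 3
        return values[0] or values[1]
    if op == 'imp':                                   # clause 4
        return (not values[0]) or values[1]
    if op == 'iff':                                   # clause 5
        return values[0] == values[1]
    raise ValueError(f"unknown connective: {op}")

G = ('or', 'P', ('and', ('not', 'Q'), ('not', 'P')))
print(truth_value(G, {'P': False, 'Q': True}))    # False, as computed above
```

The recursion follows a formation sequence for the formula: each connective's clause is applied only after the truth values of its immediate subformulas are known.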
1.11.2 Truth tables
We get a great deal of information by considering all possible assignments of
truth values for the propositional variables, and putting our calculations into a
truth table.
Example 1.11.1. For the formula (P ∨ (¬Q & ¬P )), here is a truth table.
P   Q   ¬Q   ¬P   (¬Q & ¬P)   (P ∨ (¬Q & ¬P))
F   F   T    T    T           T
F   T   F    T    F           F
T   F   T    F    F           T
T   T   F    F    F           T
Looking at the truth table, we see that the formula (P ∨ (¬Q & ¬P )) is true
except when P is false and Q is true. We classify formulas according to whether
they are always true, never true, or neither.
Definitions: Let F be a formula (wff) of propositional logic.
1. F is tautological if it is always true—true on all lines of its truth table.
2. F is inconsistent if it is never true—i.e., false on all lines of its truth table.
3. F is contingent if it is sometimes true and sometimes false—i.e., true on
at least one line and false on at least one line of its truth table.
The formula in Example 1.11.1 above is contingent. Let us consider further examples.
Example 1.11.2. Here is a truth table for the formula (P ∨ ¬P ).
P   ¬P   (P ∨ ¬P)
F   T    T
T   F    T
From the truth table, we see that this formula is tautological.
Example 1.11.3. Here is a truth table for the formula (P & ¬P ).
P   ¬P   (P & ¬P)
F   T    F
T   F    F
From the truth table, we see that this formula is inconsistent.
Example 1.11.4. Here is a truth table for the formula (P → (Q → R)).
P   Q   R   (Q → R)   (P → (Q → R))
F   F   F   T         T
F   F   T   T         T
F   T   F   F         T
F   T   T   T         T
T   F   F   T         T
T   F   T   T         T
T   T   F   F         F
T   T   T   T         T
From the truth table, we see that this formula is contingent.
You should make sure that you know how to organize a truth table. You
need to have complete formation sequences along the top. You need a systematic
method for assigning truth values to the basic propositional variables. For
formulas with just one propositional variable P , the truth table has two lines,
one where P is false and one where it is true. For each additional propositional
variable, the number of lines doubles—the new variable is false, while we run
through all possibilities for the earlier variables, and then the new variable is
true, again with all possibilities for the earlier variables. For a formula with
three propositional variables P , Q, and R, we would have 8 lines, with the
following truth assignments.
P   Q   R
F   F   F
F   F   T
F   T   F
F   T   T
T   F   F
T   F   T
T   T   F
T   T   T
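This doubling scheme is exactly a Cartesian product, so the lines of a truth table can be generated by machine. A possible Python sketch (ours, for illustration):

```python
from itertools import product

def assignments(variables):
    """All truth assignments for `variables`, in textbook order:
    2**n lines for n variables, starting from all-False, with the
    last-listed variable alternating fastest."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))]

lines = assignments(['P', 'Q', 'R'])
print(len(lines))    # 8, matching the table above
print(lines[0])      # {'P': False, 'Q': False, 'R': False}
```

Each additional variable doubles the number of lines, just as described above.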
Exercises 1.11
A. Truth tables
For each of the following formulas, give a truth table. Then say whether the
formula is tautological, inconsistent, or contingent.
1. (P → P )
2. ¬(P → P )
3. (¬P → (P → Q))
4. ((P & Q) & ¬(P ↔ Q))
5. ((P &Q) → R)) → ((P → R)&(Q → R)))
6. (((P → Q) & (R → S)) → ((P ∨ R) → (Q ∨ S)))
B. Soundness and Completeness
Explain in your own words what the Soundness and Completeness properties
say about our proof system for propositional logic. Make sure to separate the
two properties in your explanation. Say why we use the word “Soundness” for
the one property, and “Completeness” for the other.
1.12 Analysis of arguments using truth tables
1.12.1 Review of truth tables
Last time, we defined truth. That is, we gave rules for calculating the truth
value of formulas, based on the truth of the basic propositional variables. We
saw how, for a given formula, to form an organized table, a truth table, in which
we calculate the truth value for our formula for all possible assignments of truth
values to the basic propositional variables. It may be useful to write out truth
tables for the basic connectives, corresponding to the clauses in our definition
of truth.
1. For ¬P , we have the following.

P   ¬P
F   T
T   F

2. For (P & Q), we have the following.

P   Q   (P & Q)
F   F   F
F   T   F
T   F   F
T   T   T

3. For (P ∨ Q), we have the following.

P   Q   (P ∨ Q)
F   F   F
F   T   T
T   F   T
T   T   T

4. For (P → Q), we have the following.

P   Q   (P → Q)
F   F   T
F   T   T
T   F   F
T   T   T
Here is a more interesting example.
Example 1.12.1. For the formula (P ∨ (¬Q → ¬P )), give a truth table, and
then classify the formula as tautological, contingent, or inconsistent.
P   Q   ¬Q   ¬P   (¬Q → ¬P)   (P ∨ (¬Q → ¬P))
F   F   T    T    T           T
F   T   F    T    T           T
T   F   T    F    F           T
T   T   F    F    T           T
We would classify this formula as tautological. If the last column were all
F ’s, the formula would be inconsistent, and if there were at least one T , and at
least one F , the formula would be contingent.
1.12.2 Review of soundness and completeness
We say that an argument is valid if whenever the premises are all true, the
conclusion is also true. We have introduced some notation for this. If A1 , . . . , An
are the premises and C is the conclusion, we write A1 , . . . , An |= C to mean that
the argument is valid; i.e., every assignment of truth values to the propositional
variables that makes all of A1 , . . . , An true also makes C true. The notation
A1 , . . . , An ` C means that there is a proof of C from A1 , . . . , An . There
are differences in the definitions of ` and |= (one involves truth and the other
involves proof). However, our proof system is designed so that the two notions
are equivalent—A1 , . . . , An |= C if and only if A1 , . . . , An ` C. We separate the
two implications.
Soundness. If A1 , . . . , An ` C, then A1 , . . . , An |= C.
Completeness. If A1 , . . . , An |= C, then A1 , . . . , An ` C.
Consider the following argument.
If the river floods, then the wheat crop will be destroyed. If the
wheat crop is destroyed, then our community will be bankrupt. The
river will flood if there is an early thaw. In any case, there will be
heavy rains later in the summer. Therefore, if there is an early thaw,
then our community will be bankrupt unless there are heavy rains
later in the summer.
Letting P —river floods, Q—wheat crop destroyed, R—early thaw, S—heavy
rains later in summer, T —community bankrupt, we have the following translation.
Premises: (P → Q), (Q → T ), (R → P ), S
Conclusion: (R → (S ∨ T ))
We have a proof of (P → Q), (Q → T ), (R → P ), S ` (R → (S ∨ T )).
1     1. (P → Q)          A
2     2. (Q → T)          A
3     3. (R → P)          A
4     4. S                A
5∗    5. R                A∗
4     6. (S ∨ T)          ∨I 4
4     7. (R → (S ∨ T))    CP 5, 6
Since there is a proof, the argument should be valid. Which property of our
proof system, soundness or completeness, tells us this?
1.12.3 Truth analysis of arguments
We want a method for analyzing arguments stated in propositional logic. If we
can find a proof of the conclusion from the premises, then our argument is valid,
by Soundness. How could we tell if an argument is really not valid? The fact
that we have tried for an hour and haven’t found a proof may just mean that
we need to try harder.
Truth tables give us a fail-safe method that we can apply, in principle, to
any argument in propositional logic, to decide whether it is valid. Consider an
argument with premises A1 , . . . , An and conclusion C. Recall that the argument
is valid if whenever the assumptions A1 , . . . , An are all true, the conclusion C
is also true. We use the notation A1 , . . . , An |= C for this.
Method for determining whether A1 , . . . , An |= C
We write a combined truth-table for the assumptions A1 , . . . , An and the conclusion C. We check whether there is a line where A1 , . . . , An are all true,
and C is false. If the argument is not valid, there will be such a “bad line”,
or “counter-example”. If the argument is valid, there will be no bad line, no
counter-example.
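The method is simple enough to sketch in Python (an illustration of ours; formulas are encoded as nested tuples, which is our own convention): evaluate every line of the combined truth table and search for a bad line.

```python
from itertools import product

def truth_value(f, a):
    """Evaluate formula f (a nested tuple, e.g. ('imp', 'P', 'Q');
    variables are strings) under assignment a."""
    if isinstance(f, str):
        return a[f]
    op, *parts = f
    v = [truth_value(p, a) for p in parts]
    if op == 'not':
        return not v[0]
    if op == 'and':
        return v[0] and v[1]
    if op == 'or':
        return v[0] or v[1]
    if op == 'imp':
        return (not v[0]) or v[1]
    raise ValueError(f"unknown connective: {op}")

def is_valid(premises, conclusion, variables):
    """Decide A1, ..., An |= C by searching for a bad line: an
    assignment making every premise true and the conclusion false.
    Returns (True, None) if valid, else (False, counterexample)."""
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(truth_value(p, a) for p in premises) and not truth_value(conclusion, a):
            return (False, a)
    return (True, None)

# (P & Q) |= (P v Q): valid, so no counter-example is found
print(is_valid([('and', 'P', 'Q')], ('or', 'P', 'Q'), ['P', 'Q']))
```

For an invalid argument, the function returns the bad line itself, which serves as the counter-example our justifications require.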
Example 1.12.2. Use truth tables to determine whether (P & Q) |= (P ∨ Q).
Justify the answer by either finding an assignment of truth values that makes
the assumption true and the conclusion false, or showing that there is no such
assignment.
P   Q   (P & Q)   (P ∨ Q)
F   F   F         F
F   T   F         T
T   F   F         T
T   T   T         T
The assumption (P & Q) is only true on the last line, and there the conclusion
(P ∨ Q) is also true, so we have (P & Q) |= (P ∨ Q).
Example 1.12.3. Determine whether ¬R, (P → R), (Q → ¬R) |= ¬(P ∨ Q).
Justify the answer.
P   Q   R   (P → R)   ¬R   (Q → ¬R)   (P ∨ Q)   ¬(P ∨ Q)
F   F   F   T         T    T          F         T
F   F   T   T         F    T          F         T
F   T   F   T         T    T          T         F
F   T   T   T         F    F          T         F
T   F   F   F         T    T          T         F
T   F   T   T         F    T          T         F
T   T   F   F         T    T          T         F
T   T   T   T         F    F          T         F
The argument is not valid. If P and R are false and Q is true, then the
hypotheses are all true and the conclusion is false.
Example 1.12.4. Determine whether (P ∨ Q), (P → S), (S → P ) |= (R ∨ Q).
Justify the answer.
There are four propositional variables, so the truth table would have 16
lines. Let us see if we can think about the truth table and arrive at an answer,
without writing out the whole thing. We will record our reasoning, by way of
justification. Suppose there is a bad line on the truth table. On that line, the
conclusion is false, and the premises are true. Then R and Q must both be
false. Then if the first premise is true on the line, P must be true. If the second
premise is also true, then S must be true. Then the third premise is also true.
If R and Q are false and P and S are true, then the premises are true and the
conclusion is false, so this is a bad line. Therefore, the argument is not valid.
How much do we need to write? At a minimum, we should say that if R and
Q are false and P and S are true, then the premises are true and the conclusion
is false. Therefore, the argument is not valid. We could also write out the bad
line of the truth table:
P   Q   R   S   (P ∨ Q)   (P → S)   (S → P)   (R ∨ Q)
T   F   F   T   T         T         T         F
Exercises 1.12
A. For each of the arguments below, decide whether it is valid. Justify your
answer by either giving a specific truth assignment that makes the premises
all true and makes the conclusion false or proving that the argument is valid.
To show an argument is valid, you can write out the entire truth table or give
a formal proof of the argument. In the latter case, the argument is valid by
Soundness. (The arguments are stated using |=, as though they are valid, but
not all of them actually are valid.)
1. ((P & Q) → R) |= (P → R)
2. (P → (Q ∨ R)) |= (P → Q)
3. (P → Q), (P → R) |= (Q → R)
4. (P → R), (Q → R) |= (P → Q)
B. Problems on Soundness and Completeness.
1. Explain in your own words the difference between the Soundness and Completeness properties of our proof system.
2. Give a proof for (P & Q) ` ¬(P → ¬Q).
3. If you have a proof of ¬(P → ¬Q) from (P & Q), which property, Soundness
or Completeness, lets you conclude that whenever (P & Q) is true, ¬(P → ¬Q)
must also be true?
4. Suppose you have an argument that is invalid; i.e., there is some assignment
of truth values that makes all of the premises true and makes the conclusion
false. Which property, Soundness or Completeness, lets you conclude that there
is no proof of the conclusion from the premises?
1.13 Analyzing arguments in English
We now focus on analyzing whether actual arguments in English are valid. The
next example asks us to practice translating arguments into symbols and figure
out which arguments have the same form. Then, we will use the tools we have
developed to determine whether the argument is valid or not.
1.13.1 Sample LSAT problem
The following question comes from the LSAT sample test of June, 2007.
Suppose I have promised to keep a confidence and someone asks me
a question that I cannot answer truthfully without thereby breaking the promise. Obviously, I cannot both keep and break the same
promise. Therefore, one cannot be obliged both to answer all questions truthfully and to keep all promises.
Which one of the following arguments is most similar in its reasoning to the
argument above?
(A) If creditors have legitimate claims against a business and the business has
the resources to pay those debts, then the business is obliged to pay them. Also,
if a business has obligations to pay debts, then a court will force it to pay them.
But the courts did not force this business to pay its debts, so either the creditors
did not have legitimate claims or the business did not have sufficient resources.
(B) If we put a lot of effort into making this report look good, the client might
think we did so because we believed our proposal would not stand on its own
merits. On the other hand, if we do not try to make the report look good, the
client might think we are not serious about her business. So, whatever we do,
we risk her criticism.
(C) It is claimed that we have the unencumbered right to say whatever we want.
It is also claimed that we have the obligation to be civil to others. But civility
requires that we not always say what we want. So, it cannot be true both that
we have the unencumbered right to say whatever we want and that we have the
duty to be civil.
(D) If we extend our business hours, we will either have to hire new employees or
have existing employees work overtime. But both new employees and additional
overtime would dramatically increase our labor costs. We cannot afford to
increase labor costs, so we will have to keep our business hours as they stand.
We may translate the original argument into propositional logic, letting A—
answer truthfully, K—keep promise. We get (A → ¬K) ` ¬(A & K).
Let’s translate the choices A,B,C,D.
For argument A, assign L—have legitimate claims, R—has the resources,
P —obliged to pay debts, and F —forced to pay. We get
((L & R) → P ), (P → F ), ¬F ` (¬L ∨ ¬R) .
For argument B, assign E—put in effort, M —stand on merits, and S—taken
seriously. We get
(E → ¬M ), (¬E → ¬S) ` ((E ∨ ¬E) → (¬M ∨ ¬S)) .
For argument C, assign W —right to say whatever you want and C—duty
to be civil. We get
(C → ¬W ) ` ¬(W & C) .
We see that the argument in C is essentially of the same form as the original
argument. But to be extra careful, let’s also translate argument D.
For argument D, assign E—extend hours, H—hire new workers, O–add overtime, and I—increase costs. We get
(E → (H ∨ O)), ((H ∨ O) → I), ¬I ` ¬E .
Thus, argument C is the correct answer.
Does the original argument, as translated, look valid? If it is valid, then by
Completeness, we should be able to give a proof.
1        1. (A → ¬K)                A
2∗       2. (A & K)                 A∗
2∗       3. A                       &E 2
2∗       4. K                       &E 2
1, 2∗    5. ¬K                      MPP 3, 1
1, 2∗    6. (K & ¬K)                &I 4, 5
1        7. ((A & K) → (K & ¬K))    CP 2, 6
1        8. ¬(A & K)                RAA 7
Having found a proof, we know for sure that the argument is valid by Soundness.
We have translated an argument from English and used our proof system to
justify it. We can also use truth tables to analyze arguments in English.
You should also ask yourself whether the other arguments in the example are
valid.
1.13.2 Using truth tables to analyze English arguments
We have a powerful method for analyzing arguments. Let us apply it to some
arguments stated in English. Consider the following.
Example 1.13.1. Tom embezzled money from ABAG. If he is caught, then
he will go to jail and his assistant will be fired. Tom’s assistant is being fired.
Therefore, Tom is going to jail.
Let us translate, using the propositional variables E—Tom embezzled, C—Tom
is caught, J—Tom goes to jail, A—assistant fired.
Premises:
1. E
2. (C → (J&A))
3. A
Conclusion: J
Formal problem: Determine whether E, (C → (J & A)), A |= J.
The truth table would have 16 lines. We could write it all out, or we could try
to shorten our calculations, as in a previous example. We ask whether there is a
bad line in the truth table—an assignment of truth values to the propositional
variables that makes all of the premises true and makes the conclusion false.
We must either find such a “bad” truth assignment or prove the argument is
valid. To prove that an argument is valid, we can write out a full truth table
that demonstrates that the argument is valid, or we can simply give a formal
proof of the argument (and apply Soundness). If the conclusion is false, then J
must be false. If the premises are all true, then since E and A are two of the
premises, these must both be true. So, we consider truth assignments (lines in
the truth table) where J is false and E and A are both true. Since J is false,
in order to make the other premise, (C → (J&A)), true, we must have C false.
Suppose J and C are false, and E and A are true. For this assignment, all of
the premises are true, and the conclusion is false. Therefore, the argument is
not valid.
Let’s summarize the method of using truth tables to analyze arguments.
1.13.3 A strategy for analyzing arguments
First method—writing out the full truth table. We make a large truth
table, including all of the premises and the conclusion. Next, we check whether
there is a “bad” line, where the premises are true and the conclusion is false. If
there is no such line, the argument is valid. If there is a bad line, the argument
is not valid.
Second method—reasoning about the truth table. We reason about the
truth table, and ask directly whether there could be a “bad” line, where the
premises are all true and the conclusion is false. If we believe no such “bad”
line exists, we should argue convincingly that whenever the premises are true
the conclusion is true. The easiest way to confirm that an argument is valid
(without writing out the entire truth table) is to give a formal proof of the
argument and apply Soundness.
Example 1.13.2. Determine whether (P → Q), Q |= (Q → P ).
Let us try the first method.
P   Q   (P → Q)   (Q → P)
F   F   T         T
F   T   T         F
T   F   F         T
T   T   T         T
The premises are true on lines 2 and 4. The conclusion is false on line 2,
where P is false and Q is true. Therefore, the argument is not valid.
Example 1.13.3. Determine whether (P → Q), (R → P ), S |= (R → (S ∨ T )).
In principle, we could apply the first method, but the truth table is large—
32 lines. Instead, let us try the second method. Without writing out the
whole table, we reason about it, and we ask whether there is a bad line. If the
conclusion is false, then R is true and (S ∨ T ) false. This means that S and T
are both false. However, S is one of the premises. If this one premise is true,
then the conclusion cannot be false. There is no truth assignment that makes
the conclusion false and the premises all true. Therefore, the argument is valid.
We could also simply show (P → Q), (R → P ), S ` (R → (S ∨ T )) and then
apply Soundness to show the above argument is valid.
1.13.4 More examples
We want to apply our methods to arguments in English. We first try to translate
the argument into propositional logic. If we are successful, then our methods
let us determine whether it is valid.
Example 1.13.4. Translate the following argument, and determine whether it
is valid.
If the first appeal or second appeal is successful, then the original decision will
be reversed. Hence, if the original decision stands, there was not a successful
first appeal or a successful second appeal.
Letting R—decision reversed, F —first appeal succeeds, S—second appeal
succeeds, we get the following translation.
Premise: ((F ∨ S) → R)
Conclusion: (¬R → (¬F & ¬S)). (Alternatively, we could translate the
conclusion as (¬R → ¬(F ∨ S)).)
Do we have ((F ∨ S) → R) |= (¬R → (¬F & ¬S))? Could there be an assignment of truth values that makes the premise true and makes the conclusion
false? What would it mean for the conclusion to be false? ¬R must be true
and ¬F & ¬S must be false. Then R must be false, and at least one of ¬F ,
¬S must be false, which means that at least one of F , S must be true. Is there
an assignment of variables where the premises are true under these conditions?
Since R must be false, for the premise to be true, (F ∨ S) must be false as
well. But then both F and S would have to be false. For the conclusion to be
false, at least one of F , S must be true. Hence, there is no truth assignment
that makes the conclusion false and makes the premises true. Therefore, the
argument is valid. To more simply show that the argument is valid, we could
have given a proof of it.
Example 1.13.5. The following argument was discussed in a first-year philosophy course at Notre Dame.
If God can create a mountain that He cannot climb, then He is not
omnipotent. If God cannot create such a mountain, then He is not
omnipotent. Therefore, God is not omnipotent.
Letting C—can create mountain too high, O—omnipotent, we translate the
argument as follows.
Premises: (C → ¬O), (¬C → ¬O)
Conclusion: ¬O
Let us use our new methods to see if this argument is valid.
C   O   ¬C   ¬O   (C → ¬O)   (¬C → ¬O)
F   F   T    T    T          T
F   T   T    F    T          F
T   F   F    T    T          T
T   T   F    F    F          T
The premises are both true on lines 1 and 3. The conclusion is also true on both
of these lines. Therefore, the argument is valid.
We could also have reasoned as follows. On a bad line in the truth table,
the conclusion must be false, so O is true. Also on a bad line, the two premises
must be true. If the first premise is true, then C must be false. Then the second
premise cannot be true. There is no bad line. Therefore, the argument is valid.
Some arguments in English have tacit assumptions.
Example 1.13.6. If x ≥ 0, then x2 ≥ 0. Also, if x < 0, then x2 ≥ 0. So,
x2 ≥ 0.
Let us translate, using the propositional variables P —x ≥ 0, S—x2 ≥ 0,
N —x < 0.
Premises:
1. (P → S)
2. (N → S)
3. (P ∨ N ) (tacit)
Conclusion: S
We want to determine whether (P → S), (N → S), (P ∨ N ) |= S. The truth
table would have 8 lines.
P   S   N   (P → S)   (N → S)   (P ∨ N)
F   F   F
F   F   T
F   T   F
F   T   T
T   F   F
T   F   T
T   T   F
T   T   T
We could fill in the whole truth table—it is not too long. We could also try
reasoning about the truth table, trying to see if there is a line where all of the
premises are true and the conclusion is false.
If the conclusion is false, then S must be false. Assuming that S is false, if
the premises are true, then P must be false and N must also be false. However,
if P and N are both false, then (P ∨ N ) is also false, and this is one of the
premises. Hence, there is no line on which the premises are all true and the
conclusion fails. The argument is valid.
Alternatively, here we could show that the argument is valid by showing
(P → S), (N → S), (P ∨ N ) ` S
(giving a proof).
1          1. (P → S)    A
2          2. (N → S)    A
3          3. (P ∨ N)    A
1, 2, 3    4. S          ∨E 1, 2, 3
It is interesting to think about how much a machine can do. Can a machine
determine whether an argument is valid? Can a machine find proofs? In the
setting of propositional logic, we can apply truth tables in a mechanical way
to determine whether an argument is valid. So, a machine could do this, in
principle. If the argument is valid, then by Completeness, there should be a
proof. If we know that there is a proof, we could search until we find one.
Example 1.13.7. The following argument came from a seminar course at Notre
Dame. This type of argument, trying to determine causes, is important in
science, medicine, and law. We start with a simple version, and then add
complexity.
The driveway is wet. There are only three possible causes: sprinklers, a flood, or rain. So, one of the three must have happened.
Let D—wet driveway, S—sprinklers, F —flood, R—rain.
We have the translation
D, (D → (S ∨ F ∨ R)) ` (S ∨ F ∨ R) .
This is clearly valid. Our goal is to determine the cause of the wet driveway.
Maybe we can rule out some possibilities. Let us add a little more to the
argument.
The driveway is wet. There are only three possible causes: sprinklers, a flood, or rain. If there has been a flood, the basement would
also be wet, but it is not. If it has just rained, then the driveway
across the street would also be wet, but it is not. So, the cause of
our wet driveway must be the sprinklers.
Let B—basement wet, A—driveway across the street wet.
Now, we have the translation D, (D → (S ∨ F ∨ R)), (F → B), ¬B, (R → A), ¬A
for the premises, and S for the conclusion.
Is this valid? Do we have that D, (D → (S ∨ F ∨ R)), (F → B), ¬B, (R →
A), ¬A ` S?
Exercises 1.13
0. Is the argument D, (D → (S ∨ (F ∨ R))), (F → B), ¬B, (R → A), ¬A ` S
valid? Either give a proof or find a counterexample.
For each of the following, first give a translation, using the indicated propositional variables—some of the arguments will be familiar. Indicate clearly which
are the premises and which is the conclusion. Then determine whether the argument is valid, and justify your answer, either giving a truth assignment that
makes the conclusion false and the premises all true, or showing that there is no
such truth assignment (by carefully arguing this fact or giving a formal proof
of the argument).
1. On her 100th birthday, a sweet old lady is asked, “What is the secret of your
long life?” “Oh”, she answers, “I strictly follow my diet”, and she proceeds to
give the following argument.
If I don’t drink beer for dinner, then I always have fish. Any time I
have both beer and fish for dinner, then I do without ice cream. So,
if I have ice cream or don’t have beer, then I never eat fish.
In your translation, let B—beer, F —fish, I—ice cream.
2. The following argument is from Bertrand Russell, “An Inquiry into Meaning
and Truth”.
Naive realism leads to physics, and physics, if true, shows that naive
realism is false. Therefore, naive realism, if true, is false; therefore,
it is false.
In your translation, let R—naive realism, P —Physics.
3. The following is a mathematical argument. The analysis does not depend on
the meaning of such terms as f ′ (x).

The derivative of f is given by f ′ (x) = 3x(x + 2). Hence, f ′ (x) > 0
if both x > 0 and x + 2 > 0, or if both x < 0 and x + 2 < 0. Thus,
f ′ (x) > 0 if x > 0 or if x < −2.

In your translation, let P —f ′ (x) = 3x(x + 2), Q—f ′ (x) > 0, R1 —x > 0,
R2 —x < 0, S1 —x + 2 > 0, S2 —x + 2 < 0, T1 —x > −2, T2 —x < −2.
This argument has some tacit assumptions: (S1 ↔ T1 ), (S2 ↔ T2 ), (R1 → T1 ),
(T2 → R2 ).
4.
If Henry is to please his grandmother, he must get A in Logic. If
he is to get A in Logic, he will not have time for basketball. If he
has no time for basketball, he will be depressed. If he is depressed,
he will not please his grandmother. Therefore, he will not get A in
Logic.
In your translation, assign P —please grandmother, A—get A in Logic, B—time
for basketball, D—be depressed.
5.
If Cleon lies, the gods punish him and he suffers. If Cleon lies, men
reward him and he does not suffer. Therefore, Cleon does not lie.
In your translation, assign L—Cleon lies, P —gods punish him, S—Cleon suffers,
R—men reward him.
6.
If God knows today what I will do tomorrow, then what I do tomorrow is foreordained. And if it’s foreordained, then either I have
no freedom of choice, or else I will freely choose to do what’s foreordained. However, the latter is impossible. Therefore, I have no
freedom of choice unless God doesn’t know today what I will do
tomorrow.
In your translation, assign G—God knows the future, F —what I do tomorrow
is foreordained, C—I have freedom of choice, A—I freely choose to do what’s
foreordained.
7.
Unless we continue price support, we will lose the farm vote. Overproduction will continue if we continue supports, unless we institute
production controls. Without the farm vote, we cannot be reelected.
Therefore, if we are reelected and do not institute production controls, overproduction will continue.
In your translation, assign S—Continue price support, F —Get the farm vote,
O—Overproduce, R—Get reelected, C–Institute production controls.
1.14 Puzzles
One fun application of logic is puzzles. Let’s see how we can use the tools we
have developed to solve the following recreational puzzles.
Example 1.14.1. The people on the planet GP are either green or purple.
Green people always tell the truth and purple people never tell the truth.
1. We meet two people: Anna and Bill.
Anna says: If I am green then so is Bill.
What color is Anna? What color is Bill?
Let’s translate these statements into symbols. Assign A— Anna is green,
and B— Bill is green. Since every person is either green or purple, ¬A and
¬B mean Anna is purple and Bill is purple, respectively. Anna’s claim can
be translated as (A → B). Green people always tell the truth and purple
always lie, so the statement Anna is green holds if and only if the statement
(A → B) holds. The translation of this sentence is (A ↔ (A → B)). Given
the assumption (A ↔ (A → B)) we want to determine the truth values of
A   B   ¬A   ¬B   (A → B)   (A ↔ (A → B))
F   F   T    T    T         F
F   T   T    F    T         F
T   F   F    T    F         F
T   T   F    F    T         T
The assumption is true only in line 4 of the truth table. In this line, A
and B are true. In other words, Anna and Bill are both green. We have
shown (A ↔ (A → B)) |= A, B.
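Truth tables like this one are easy to check mechanically. The following Python sketch (our own encoding, with → rendered as "not A or B" in line with the truth definition) enumerates the four rows and keeps the ones satisfying the assumption:

```python
from itertools import product

def implies(p, q):
    # truth definition of (p -> q)
    return (not p) or q

# keep the rows of the truth table satisfying (A <-> (A -> B))
models = [(A, B) for A, B in product([False, True], repeat=2)
          if A == implies(A, B)]
print(models)   # [(True, True)]: Anna and Bill are both green
```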
2. We meet two people: Anna and Bill.
Anna says: We are both purple.
What color is Anna? What color is Bill?
We again use the same letters for our translation. Anna’s claim now
translates as (¬A & ¬B). Our assumption is now (A ↔ (¬A & ¬B)).
Rather than write out a truth table, let’s try to reason about the truth
table to solve the problem. Notice that if Anna was green, i.e., A was true,
then the assumption says in part that ¬A is true, i.e., Anna is purple. This
is absurd, so Anna must be purple. Since Anna is purple, we have that
¬(¬A & ¬B); in other words, not both Anna and Bill are purple. Since
Anna is purple, Bill must be green. Thus, we have that Anna is purple
and Bill is green. We can write this in symbols as:
(A ↔ (¬A & ¬B)) |= (¬A &B).
Since this argument is valid, we should be able to give a deduction showing
(A ↔ (¬A & ¬B)) ` (¬A &B) by Completeness.
1        1.  (A ↔ (¬A & ¬B))      A
1        2.  (A → (¬A & ¬B))      &E 1
3∗       3.  A                    A∗ (Use RAA to get ¬A)
1, 3∗    4.  (¬A & ¬B)            MPP 3, 2
1, 3∗    5.  ¬A                   &E 4
1, 3∗    6.  (A & ¬A)             &I 3, 5
1        7.  (A → (A & ¬A))       CP 3, 6
1        8.  ¬A                   RAA 7
1        9.  ((¬A & ¬B) → A)      &E 1 (Work on getting B)
1        10. ¬(¬A & ¬B)           MTT 9, 8
11∗      11. ¬B                   A∗
1, 11∗   12. (¬A & ¬B)            &I 8, 11
1        13. (¬B → (¬A & ¬B))     CP 11, 12
1        14. ¬¬B                  MTT 13, 10
1        15. B                    DN 14
1        16. (¬A & B)             &I 8, 15
Notice that this deduction is a formalization of our earlier informal argument.
3. (Looking for Waldo). We meet two people: Anna and Bill.
Anna says: If we are both green, then Waldo is on the planet
Bill says: That is true
Is Waldo on the planet?
Let A and B be as before, and let W stand for “Waldo is on the planet”. We
can translate Anna’s claim as ((A & B) → W ). Bill claims that ((A & B) → W )
is true. We again get an assumption based on whether Anna is green or not,
(A ↔ ((A & B) → W )), and we get another assumption, (B ↔ ((A & B) → W ))
based on whether Bill is green or not.
Notice that if Anna and Bill are green, i.e., (A & B) is true, then W holds
by the first assumption, so Waldo is on the planet. We should check, however,
that in general
(A ↔ ((A & B) → W )), (B ↔ ((A & B) → W )) |= W .
Let’s argue informally. You should be able to turn this argument into a
formal deduction. Notice that if A is false, then (A & B) is false, so the
implication ((A & B) → W ) is true regardless of whether W is. The first
biconditional then tells us that A must be true, which is absurd, since we
assumed A was false. Thus, A must be true. If we assume that B is false, the
second biconditional gives the same contradiction, so B must be true. Then
(A & B) is true, and we saw that if (A & B) is true, then W is true, i.e.,
Waldo is on the planet.
Example 1.14.2. Brown, Jones and Smith are suspected in a crime. They
testify:
Brown: Jones is guilty and Smith is innocent.
Jones: If Brown is guilty, then so is Smith.
Smith: I am innocent, but at least one of the others is guilty.
Assuming everyone told the truth, who is innocent and who is guilty?
Let B stand for “Brown is innocent”, J stand for “Jones is innocent”,
and S stand for “Smith is innocent.” Since everyone is telling the truth, we
can translate their statements as assumptions: (¬J & S), (¬B → ¬S), and
(S & (¬B ∨ ¬J)).
We show Brown and Smith are innocent and Jones is guilty, in other words,
(¬J & S), (¬B → ¬S), (S & (¬B ∨ ¬J)) |= ((B & S) & ¬J)
by providing a proof and applying Soundness.
1      1. (¬J & S)          A
1      2. S                 &E 1
3      3. (¬B → ¬S)         A
1      4. ¬¬S               DN 2
1, 3   5. ¬¬B               MTT 3, 4
1, 3   6. B                 DN 5
1, 3   7. (B & S)           &I 6, 2
1      8. ¬J                &E 1
1, 3   9. ((B & S) & ¬J)    &I 7, 8
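Since validity for propositional arguments is a finite truth-table search, we can also confirm this entailment by brute force. The Python sketch below uses a hypothetical helper `entails` that checks every row; the Booleans B, J, S stand for innocence, as above:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Brute-force entailment: premises |= conclusion holds iff no row of
# the truth table makes every premise true and the conclusion false.
def entails(premises, conclusion, n_vars):
    return all(conclusion(*row)
               for row in product([False, True], repeat=n_vars)
               if all(p(*row) for p in premises))

# B, J, S say that Brown, Jones, Smith (respectively) are innocent.
premises = [
    lambda B, J, S: (not J) and S,                  # (¬J & S)
    lambda B, J, S: implies(not B, not S),          # (¬B → ¬S)
    lambda B, J, S: S and ((not B) or (not J)),     # (S & (¬B ∨ ¬J))
]
conclusion = lambda B, J, S: (B and S) and (not J)  # ((B & S) & ¬J)

print(entails(premises, conclusion, 3))   # True
```

The single row surviving the premises has B and S true and J false, matching the deduction above.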
Exercises 1.14
Solve the following puzzles, and justify your answers by giving a truth table or
deduction.
1. The people on the planet GP are either green or purple. Green people
always tell the truth and purple people never tell the truth.
(a) We meet two people: Anna and Bill.
Anna says: At least one of us is purple.
What color is Anna? What color is Bill?
(b) We meet two people: Anna and Bill.
Anna says: If Bill is green then I am purple.
What color is Anna? What color is Bill?
(c) We meet two people: Anna and Bill.
Anna says: We are of the same kind.
What color is Anna? What color is Bill?
(d) We meet two people: Anna and Bill.
Anna says: If either of us is green, then Waldo is on the planet
Bill says: That is true
Is Waldo on the planet?
2. Brown, Jones and Smith are suspected in a crime. They testify:
Brown: Jones is guilty and Smith is innocent.
Jones: If Brown is guilty, then so is Smith.
Smith: I am innocent, but at least one of the others is guilty.
(a) Assuming everybody is innocent, who committed the perjury?
(b) Assuming that the innocent told the truth and the guilty told lies,
who is innocent and who is guilty?
1.15 Soundness and Completeness
1.15.1 Terminology
Recall that we are using the terms “soundness” and “completeness” to describe
the most important properties of our proof system.
Soundness Property: If A1 , . . . , An ` C, then A1 , . . . , An |= C; i.e., if there
is a proof of C from assumptions A1 , . . . , An , then whenever A1 , . . . , An are all
true, C is also true.
Completeness Property: If A1 , . . . , An |= C, then A1 , . . . , An ` C; i.e., if
C is true whenever A1 , . . . , An are all true, then there is a proof of C from
A1 , . . . , An .
There is some additional terminology used in describing arguments. We use
the term “valid” to describe an argument whose premises “logically imply” the
conclusion; i.e., whenever the premises are true, the conclusion is also true.
Philosophers sometimes use the term “sound” to describe an argument whose
premises are true. We do not focus so much on that question, so we will not
use the term “sound” in describing arguments.
1.15.2 Verifying soundness
Why should you believe Soundness? The idea is that if we have a proof showing
that A1 , . . . , An ` C, we could work our way through the steps in the proof,
from first to last, and we can show that if the assumptions behind the step are
all true, then the statement given on the step is also true. When we get to the
last step, the statement is C, and the only assumptions allowed are the real ones
A1 , . . . , An . Then whenever the assumptions are all true, the conclusion is also
true.
Example: Verify Soundness for the following proof of P, ¬Q ` ¬(P → Q).
1          1. P                        A
2          2. ¬Q                       A
3∗         3. (P → Q)                  A∗
1, 3∗      4. Q                        MPP 1, 3
1, 2, 3∗   5. (Q & ¬Q)                 &I 4, 2
1, 2       6. ((P → Q) → (Q & ¬Q))     CP 3, 5
1, 2       7. ¬(P → Q)                 RAA 6
We must explain why, for steps 1–7, if the assumptions behind the step are
true, then the statement on the step is also true. For the first three steps,
we apply Rule A.
1. If P is true, then P is true.
2. If ¬Q is true, then ¬Q is true.
3. If (P → Q) is true, then (P → Q) is true.
For step 4, we apply MPP. We will appeal to the definition of truth—the
clause for →.
4. Supposing that 1 and 3 are true, we must explain why 4 is true. By our
definition of truth, if P and (P → Q) are both true, then Q must be true.
On step 5, we apply &I. We will appeal to the definition of truth—the
clause for &.
5. Supposing that 1, 2, and 3 were all true, we must explain why 5 would be
true. On step 2, we saw that if 2 is true then ¬Q is true. On step 4, we saw
that if 1 and 3 are both true, then Q is true. Therefore, by our definition of
truth, if 1,2, and 3 were all true, (Q & ¬Q) would also be true.
On step 6, we apply CP . The definition of truth for → tells us that line 6
is true if lines 1 and 2 are true.
6. Suppose 1 and 2 are true. If 3 is false, i.e., (P → Q) is false, then by the
definition of truth for →, we have that ((P → Q) → (Q & ¬Q)) is true. If 3 is
true, i.e., (P → Q) is true, we saw in step 5 that (Q & ¬Q) would also be true.
Hence, ((P → Q) → (Q & ¬Q)) is true regardless of whether (P → Q) is.
On step 7, we apply RAA. Our definition of truth lets us see that a statement
of the form (G & ¬G) is inconsistent—it can never be true. Therefore, for line
6 to be true, (P → Q) must not be true. In addition, the truth definition for ¬
tells us that if F is not true, then ¬F is true. So, since (P → Q) is not true,
¬(P → Q) is true.
7. Supposing that 1 and 2 are true, we must show that 7 is true. On line 5, we
saw that if 1, 2 are true, and 3 is also true, then (Q & ¬Q) would be true. Of
course, this is impossible. Therefore, if 1 and 2 are both true, 3 cannot be true.
If 3 is not true, then its negation, ¬(P → Q), must be true.
We could do something like this for any proof—argue step by step that if
the assumptions behind the step are true, then the statement on the step must
be true. There is an extra problem on the homework, asking you to verify
Soundness in this way, for a particular proof. If you have seen one or two
examples like this, you should be convinced of Soundness.
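Soundness promises that the proof above is matched by the semantic fact P, ¬Q |= ¬(P → Q), and that fact can itself be confirmed by a direct truth-table search. A Python sketch (our own encoding, not part of the text):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# P, ¬Q |= ¬(P → Q): every row making both premises true must make
# the conclusion true.
ok = all(not implies(P, Q)                  # conclusion ¬(P → Q)
         for P, Q in product([False, True], repeat=2)
         if P and not Q)                    # premises P, ¬Q
print(ok)   # True, as Soundness guarantees
```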
For an implication F → G, the contrapositive is the statement ¬G → ¬F .
It is easy to see that if the implication holds, then so does the contrapositive.
The contrapositive of Soundness says that if an argument is invalid; i.e., there
is a truth assignment that makes the premises A1 , . . . , An true and makes the
conclusion C false, then there is no proof of C from A1 , . . . , An . Similarly,
the contrapositive of Completeness says that if there is no proof of C from
A1 , . . . , An , then there must be a truth assignment that makes A1 , . . . , An true
and makes C false.
1.15.3 Explanation of Completeness
Let us turn to Completeness. How do we know that our proof system has this
property? It is not some great mystery that must be accepted on faith. There
is an explanation in terms of things that you know—the rule ∨E, etc. The
explanation is somewhat long and complicated, but all of the ingredients are
things you know. The goal is just to see this. You should not feel that you need
to memorize the explanation.
Completeness: If A1 , . . . , An |= C, then A1 , . . . , An ` C.
The general statement follows from the special case where there are no assumptions on the left.
Special Case of Completeness: If |= G, then ` G; i.e., if G is always true,
then there is a proof of G from no assumptions. (We could have used the letter
C instead of G.)
Why does the Special Case imply full Completeness?
Reduction of Completeness to Special Case
Suppose A1 , . . . , An |= C. Using the definition of truth, we can see that
|= ((A1 & . . . & An ) → C). Assuming the Special Case of Completeness, we
have ` ((A1 & . . . & An ) → C). We need to show A1 , . . . , An ` C. For
simplicity, let us suppose n = 2. If we have a proof of ((A1 & A2 ) → C) from
no assumptions, then we can turn it into a proof of C from the assumptions
A1 , A2 . We start with the assumptions, and use &I to get (A1 & A2 ). Next,
we copy in the proof of ((A1 & A2 ) → C) from no assumptions. Then we apply
MPP to get C from the assumptions A1 , A2 . So, the general version of the
Completeness Property is implied by the special case.
We now partially explain the special case (If |= G, then ` G), by proving it in
a “very special case” and a “not-quite-so-special case”.
Very Special Case. Suppose G is built up from one propositional variable P .
We must show that if G is a tautology, then there is a proof of G from no
assumptions. There are three steps.
Step I. Show that for all well formed formulas G built up from P , we have
P ` G or P ` ¬G, and ¬P ` G or ¬P ` ¬G.
Step II. Show that ` (P ∨ ¬P ). (We have shown this before.)
Step III. Suppose that G is built up from P , and |= G. Whether P is true or
false, G is true. We show the following claim.
Claim.
(a) P ` G and
(b) ¬P ` G.
Why should we believe the Claim?
First, consider (a). By I, either P ` G or P ` ¬G. If P ` ¬G, then by
Soundness, P |= ¬G. This is impossible, since |= G. Therefore, we must have
P ` G.
Next, consider (b). Again by I, either ¬P ` G or ¬P ` ¬G. If ¬P ` ¬G, then
by Soundness, ¬P |= ¬G. This is impossible. Therefore, we must have ¬P ` G.
We have justified the Claim. We want to produce a proof showing that ` G. We
start with a proof of (P ∨ ¬P ), as in II. To prove G, we shall use ∨E. We give
a proof by cases. Since P ` G, we can use CP to first show Case 1, ` (P → G).
Since ¬P ` G, we can use CP to show Case 2, ` (¬P → G). If we can do both
of these, then we get G by ∨E.
Explanation of Step I: Here, we explain the ideas behind why it’s true that
if G is built up from P , then either P ` G or P ` ¬G. Showing that either
¬P ` G or ¬P ` ¬G is similar.
The basic idea is that we use a formation sequence for G to break G up into
its constituent components. If we know that P is true, then we know whether
or not each of the components of G is true, and from this we can determine
whether or not G is true. We only need to explain from there why every step of
our reasoning is a step that we can do in our proof system. We illustrate this
idea with an example.
Example: Let G be the statement ((P → P ) & (P ∨ (P & (P & P )))). We
seek to show that either P ` G or P ` ¬G.
We first look at G and tell ourselves “well, (P → P ) is definitely true. Also,
if P is true, then both P and (P & (P & P )) are true, so (P ∨ (P & (P & P )))
should also be true. The first one is going to be easier to prove though, so let’s
go for that one. Thus, if P is true, then I expect both parts of G will be true,
so I expect G will be true, so let’s try to show that P ` G.”
1   1. P                                   A
    2. (P → P)                             CP 1, 1
1   3. (P ∨ (P & (P & P)))                 ∨I 1
1   4. ((P → P) & (P ∨ (P & (P & P))))     &I 2, 3
We looked at the formula, figured out which parts were true and which
were false, figured out why we believed that, and then we wrote the proof that
followed our reasoning. All we need to do is explain why, for any formula, if we
can prove or disprove every formula that it’s made out of, then we can prove or
disprove the formula, and then we can take any G at all, break it down into its
pieces, and start proving or disproving each piece from the bottom up.
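The bottom-up procedure behind Step I can be mirrored computationally: encode a formula built from P as a nested structure and evaluate it recursively, settling each subformula before the formulas built from it. The tuple encoding below is our own invention, chosen only for the illustration:

```python
# Formulas built from the single variable "P", encoded as nested tuples:
# "P", ("not", F), ("and", F, G), ("or", F, G), ("imp", F, G).
def value(formula, p):
    """Truth value of the formula when P has truth value p."""
    if formula == "P":
        return p
    op = formula[0]
    if op == "not":
        return not value(formula[1], p)
    if op == "and":
        return value(formula[1], p) and value(formula[2], p)
    if op == "or":
        return value(formula[1], p) or value(formula[2], p)
    if op == "imp":
        return (not value(formula[1], p)) or value(formula[2], p)
    raise ValueError("not a formula")

# G = ((P -> P) & (P v (P & (P & P)))), the example above.
G = ("and", ("imp", "P", "P"), ("or", "P", ("and", "P", ("and", "P", "P"))))
print(value(G, True), value(G, False))   # True False
```

Since `value` always returns a definite answer, each truth value of P settles G one way or the other, which is the semantic counterpart of "P ` G or P ` ¬G".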
We now go on to discuss, in general, how one goes about proving or disproving larger formulas if one can prove or disprove their pieces.
(¬): We must show that if P can prove or disprove F , then P can prove or
disprove ¬F .
If P proves F , then P ` F , and so one extra double negation step allows us
to show that P ` ¬¬F , or in other words, that P disproves ¬F .
If P disproves F , then P ` ¬F , which is the same thing as proving ¬F .
Thus, in either case, we know that if P proves or disproves F , then P proves
or disproves ¬F .
(&): We now must show that if P can prove or disprove F , and if P can also
prove or disprove G, then P can prove or disprove (F & G).
If P proves both F and G, then P can prove (F & G) by first proving F ,
then proving G, then using & I:
1   1.  P          A
        ⋮
1   30. F          ???
        ⋮
1   54. G          ???
1   55. (F & G)    &I 30, 54
On the other hand, if P disproves either F or G, then P can disprove (F & G)
with an RAA argument. We do the version where P disproves F . The version
where P disproves G is basically the same.
1        1.  P                         A
             ⋮
1        30. ¬F                        ???
31∗      31. (F & G)                   A∗
31∗      32. F                         &E 31
1, 31∗   33. (F & ¬F)                  &I 32, 30
1        34. ((F & G) → (F & ¬F))      CP 31, 33
1        35. ¬(F & G)                  RAA 34
(∨): We would then show that if P can prove or disprove F , and if P can also
prove or disprove G, then P can prove or disprove (F ∨ G).
If P proves either F or G, then P ` (F ∨ G) by proving either F or G, and
then using ∨I.
If P disproves both F and G, then P disproves (F ∨G) by an RAA argument
that we will not go over here. The argument is very similar to the argument
that (¬P & ¬Q) ` ¬(P ∨ Q).
(→): Finally, we would need to show that if P can prove or disprove F , and if
P can also prove or disprove G, then P can prove or disprove (F → G).
If P ` G, then P ` (F → G) by assuming F , going through its standard
proof of G, and then using CP to conclude (F → G).
If P disproves F , then P ` (F → G) either with a fairly short RAA argument, or by first proving (¬G → ¬F ), and then assuming F , using MTT with
some double negations to conclude G, and then using CP .
Finally, if P ` F and P ` ¬G, then P ` ¬(F → G) by a fairly short and
straightforward RAA argument. We do not go into the details, but you are
welcome to check them if you want additional practice with RAA.
The case with more than one propositional variable has the same basic ideas,
but is just more work because there are more possibilities for the truth values
of the variables.
Not-Quite-So-Special Case. Suppose G is built up from two propositional
variables P and Q.
Again, we must show that if G is a tautology, then there is a proof of G from
no assumptions. The idea is similar to that for the Very Special Case. In Part
I, we would show that P, Q ` ±G, P, ¬Q ` ±G, ¬P, Q ` ±G and ¬P, ¬Q ` ±G.
For Part II, we would show that ` (P & (Q ∨ ¬Q)) ∨ (¬P & (Q ∨ ¬Q)). For
Part III, the proof takes more applications of ∨E, but the idea is the same.
Exercises 1.15
A. 1. Give a proof showing ¬(P ∨ Q) ` ¬P .
2. Verify Soundness for this particular proof. For each step in the proof, explain
why, if the assumptions behind the step are true, then the statement on the step
is also true.
B. Prove the following:
1. ¬P, (Q → P ) ` (¬P & ¬Q)
2. ` (P → (P ∨ Q))
3. P, (R → ¬P ) ` ¬R
4. (R → P ), (R → ¬P ) ` ¬R
C. Which of the following are versions of Soundness, and which are versions of
Completeness?
1. If A1 , . . . , An ` C, then A1 , . . . , An |= C.
2. If there is a proof of C from A1 , . . . , An , then whenever A1 , . . . , An are
true, C is also true.
3. If A1 , . . . , An |= C, then A1 , . . . , An ` C.
4. If C is true whenever A1 , . . . , An are true, then there is a proof of C from
A1 , . . . , An .
D. Justify each of the following by appealing to either Soundness or Completeness—
a one-word answer is sufficient.
1. If F is tautological, then ` F .
2. If F is contingent, then there is no proof of F from no assumptions.
3. If there is no proof of G from F , then there is an assignment of truth
values to the propositional variables in F and G that makes F true and
makes G false.
4. If there is an assignment of truth values to the propositional variables in
F1 , F2 , G that makes F1 and F2 both true and makes G false, then there
is no proof of G from F1 and F2 .
Chapter 2
Predicate logic
2.1 Beginning predicate logic
2.1.1 Shortcomings of propositional logic
We have quite a lot of power to analyze arguments. In fact, our methods
let us analyze any argument, provided that we can successfully translate it
into propositional logic. There are some arguments that do not translate into
propositional logic. Consider the following.
Example 1 (famous):
Premises:
1. All men are mortal.
2. Socrates is a man.
Conclusion: Socrates is mortal.
Example 2 (not famous, but of the same form):
Premises:
1. All Notre Dame women are smart.
2. Natalia is a Notre Dame woman.
Conclusion: Natalia is smart.
Arguments of this form are called “syllogisms”. The arguments seem correct.
However, if we try to translate into propositional logic, we have trouble.
In propositional logic, complicated statements are built up from basic ones
using the logical connectives “and”, “or” “implies”, and “not”. Here we have
none of these logical connectives. Thus, we cannot take any of the statements
apart. So, when we attempt to translate Example 1 into propositional logic, we
have to use a different propositional variable for each statement. Say A—all men
are mortal, S—Socrates is a man, M —Socrates is mortal. (Example 2 looks the
same.) In trying to translate into propositional logic, we lose everything that
is important in the syllogism. Propositional logic does not have a way to talk
about properties like being a man or a smart person. It doesn’t have names
for special objects, like Socrates or Natalia. Finally, there is no way to say in
propositional logic that something holds “for all” objects of a certain kind.
2.1.2 Predicate Logic
In predicate logic, we have symbols for properties, names for distinguished objects, and quantifiers “for all” and “there exists”. We also have equality, to
indicate that two things are identical.
Let us see what predicate logic does with the syllogism in Example 1. We use
a predicate symbol M for the property of being a man and another predicate
symbol D for the property of being mortal. We add a constant symbol s to
name Socrates. To say Socrates is a man, we write M s. The rules of grammar
(syntax) for predicate languages require us to use a variable x, y, . . . whenever
we want to say something about “all”. We write M x for “x is a man”, Dx for
“x is mortal”. We write (M x → Dx) for “if x is a man, then x is mortal.” Now,
(∀x) (M x → Dx) says literally “for all x, if x is a man, then x is mortal”. This
is the way we translate “All men are mortal” into predicate logic.
Here is the translation of the whole argument in Example 1.
Premises:
1. All men are mortal—(∀x) (M x → Dx).
2. Socrates is a man—M s.
Conclusion: Socrates is mortal—Ds.
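Although we have not yet defined truth for predicate logic, the intended semantics can be previewed with a brute-force search over small interpretations. In the Python sketch below (our own encoding), M and D are interpreted as subsets of a two-element domain and s as an element of it; no interpretation makes the premises true and the conclusion false:

```python
from itertools import combinations

# Check the syllogism over every interpretation with the two-element
# domain {0, 1}: the extensions of M ("man") and D ("mortal") range
# over all subsets, and the constant s over the domain.  A finite
# search is of course only a sanity check, not a proof.
domain = [0, 1]

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1)
                   for c in combinations(xs, r)]

ok = all(s in D                                  # conclusion: Ds ...
         for M in subsets(domain)
         for D in subsets(domain)
         for s in domain
         if all(x in D for x in M) and s in M)   # ... whenever premises hold
print(ok)   # True
```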
2.2 Predicate Logic
We saw that there are arguments, such as the famous syllogism, that cannot be
translated into propositional logic. Predicate logic has the expressive power that
we need to express such arguments. For propositional logic, we described the
symbols, gave an inductive definition of well-formed formula, or wf f , did some
translation, developed a system of proof, defined truth, and stated Completeness
and Soundness Theorems. We will do the same for predicate logic.
2.2.1 Symbols of predicate logic
Predicate languages have the following symbols—some are the same as for
propositional languages; others are new.
logical connectives: ¬, &, ∨, →
quantifiers:
(∀ . . .)—for all . . ., and (∃ . . .)—there exists . . .,
equality: =—used to say that two things are identical,
individual variables: x, y, z, x0 , y1 , y2 , . . .. (We use small letters at the end of
the alphabet, sometimes with decorations.) These variables stand for objects.
parentheses: ( and ),
Predicate symbols: M , C, B, L etc. (We use capital letters for predicate
symbols.)
A predicate symbol denotes a property or relation. For each predicate symbol,
we specify a number of places, or “arity”. We use 1-place predicate symbols
for properties such as being mortal, being a cat, which apply to single objects.
We use 2-place predicate symbols to stand for relations that may hold between
two objects; e.g., the ordering relation on numbers, the parenting relation on
people. We use 3-place predicate symbols for relations that hold among three
objects; e.g., the relation of being located between—Portage is between South
Bend and Chicago.
individual constants: s, n, a, b, etc. (We use small letters, different from the
ones we use for variables.)
Individual constants stand for specific objects like Socrates or Caitlin.
2.2.2 Definition of well-formed formula
We need some preliminary definitions before we can say what is a well-formed
formula.
2.2.1. A term is a variable or a constant.
2.2.2. An atomic formula is a string of symbols of one of the following two
forms:
1. t = t′ , where t and t′ are terms
2. P t1 , . . . , tn , where P is an n-place predicate symbol and t1 , . . . , tn are
terms.
Here are some sample atomic formulas.
Example 2.2.1.
1. x = y,
2. x = s,
3. M x,
4. M s,
5. Lxy,
6. Babc
Now, we define the class of well-formed formulas (wf f ’s) for predicate logic as
follows:
2.2.3.
1. An atomic formula is a wf f .
2. If F is a wf f , then so is ¬F .
3. If F and G are wf f ’s, then so are (F & G), (F ∨ G), and (F → G).
4. If F is a wf f and v is a variable, then (∃v) F and (∀v) F are wf f ’s.
5. Nothing is a wf f unless it can be obtained by finitely many applications
of 1–4.
As in propositional logic, to show that F is a wf f , we give a formation
sequence, building it up from the simplest parts (the atomic formulas), and
adding connectives and quantifiers one at a time.
Example 2.2.2. Give a formation sequence for (∃y)(∀x) (M x → Lyx).
1. M x is a wf f , by clause 1 (atomic formula).
2. Lyx is a wf f , by clause 1 (atomic formula).
3. (M x → Lyx) is a wf f , by clause 3 applied to 1, 2.
4. (∀x) (M x → Lyx) is a wf f , by clause 4 applied to 3.
5. (∃y) (∀x) (M x → Lyx) is a wf f , by clause 4 applied to 4.
We first defined the atomic formulas. For example, s = x and Rxs are
atomic formulas using the constant s and the 2-place predicate symbol R. It is
important to note that atomic formulas cannot be broken into simpler formulas.
In particular, s, and s = are not well-formed formulas.
Example 2.2.3. Give a formation sequence showing that (¬x = s & (∀x) ¬Rxs)
is a wf f . (Assume that s is a constant and R is a 2-place predicate symbol.)
1. x = s is a wf f , by clause 1 (atomic).
2. ¬x = s is a wf f , by clause 2, appl. to 1.
3. Rxs is a wf f , by clause 1 (atomic).
4. ¬Rxs is a wf f , by clause 2, appl. to 3.
5. (∀x) ¬Rxs is a wf f , by clause 4, appl. to 4.
6. (¬x = s & (∀x) ¬Rxs) is a wf f , by clause 3, appl. to 2,5.
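The inductive definition of wff translates directly into a recursive test. The Python sketch below uses an ad hoc encoding of our own (terms as strings, compound formulas as tagged tuples); it is meant only to mirror clauses 1–4:

```python
VARIABLES = {"x", "y", "z"}
CONSTANTS = {"s", "f"}
PREDICATES = {"M": 1, "D": 1, "R": 2}   # name -> number of places

def is_term(t):
    return t in VARIABLES or t in CONSTANTS

def is_wff(F):
    # Clause 1: atomic formulas
    if isinstance(F, tuple) and len(F) == 3 and F[0] == "=":
        return is_term(F[1]) and is_term(F[2])
    if isinstance(F, tuple) and F[0] in PREDICATES:
        args = F[1:]
        return len(args) == PREDICATES[F[0]] and all(map(is_term, args))
    # Clause 2: negation
    if isinstance(F, tuple) and len(F) == 2 and F[0] == "not":
        return is_wff(F[1])
    # Clause 3: binary connectives
    if isinstance(F, tuple) and len(F) == 3 and F[0] in {"and", "or", "imp"}:
        return is_wff(F[1]) and is_wff(F[2])
    # Clause 4: quantifiers
    if isinstance(F, tuple) and len(F) == 3 and F[0] in {"all", "exists"}:
        return F[1] in VARIABLES and is_wff(F[2])
    return False

# (¬x = s & (∀x) ¬Rxs), as in Example 2.2.3:
F = ("and", ("not", ("=", "x", "s")), ("all", "x", ("not", ("R", "x", "s"))))
print(is_wff(F))   # True
```

As the text notes, fragments like s or s = are not wffs, and the checker agrees: they match no clause and fall through to False.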
2.2.3 Translation
Example 2.2.4. Recall the famous syllogism.
All men are mortal. Socrates is a man. Therefore, Socrates is mortal.
Here is a translation, using two 1-place predicates M —man, D—mortal, and
a constant s—Socrates.
Premises: (∀x) (M x → Dx) and M s.
Conclusion: Ds
Example 2.2.5. Here is another famous argument, due to Descartes.
I think. Therefore, I am.
Using predicates T —thinking, B—being, and constant i—I, we have the
following.
Premise: T i
Conclusion: Bi.
As translated, the premise and the conclusion are not related. One common interpretation of the argument is that Descartes intends us to believe the
following tacit premise.
(∀x) (T x → Bx).
We will look at some simpler examples, and try to get used to the grammar
of predicate logic.
Example 2.2.6. Translate the following, letting D—dog, F —has fleas, f —Fido.
If Fido has fleas, then all dogs do.
Whenever we use a quantifier, we have to introduce a variable. We write,
“If Fido has fleas, then for all x, if x is a dog, then x has fleas”.
(F f → (∀x)(Dx → F x))
So far, we have not used equality.
Example 2.2.7. Translate the following.
1. Fido is not the only dog (i.e., there is another dog different from Fido).
(∃x) (Dx & ¬x = f )
2. There are at least two dogs.
(∃x) (∃y) ((Dx & Dy) & ¬x = y).
Exercises 2.2
A. Show, by giving a formation sequence, that each of the following is a well-formed formula of predicate logic.
1. (∀x) (∃y) (Sx → Lxy)
2. (∃x) (M x & ¬x = s)
3. (Lxz & (∀y)(Lxy ∨ Lyb))
B. Translate each of the following statements from English into predicate logic,
using the one-place predicate symbols C and S for “is a cat” and “is Siamese”,
the constants s and g for Sam and Gauss, and for the last few problems, a
two-place predicate symbol L, where Lxy means that x likes y.
1. Sam is a cat.
2. Gauss is not a cat.
3. Some cats are Siamese.
4. All cats are Siamese.
5. No cats are Siamese.
6. Some cats are not Siamese.
7. Sam is the only cat.
8. Gauss likes Sam.
9. Sam likes Gauss.
10. Gauss likes everyone.
11. Sam likes someone.
12. Sam does not like anyone.
13. Gauss likes everyone who likes him.
14. Sam likes only Gauss.
15. Everyone likes someone.
2.3 Learning the idioms for predicate logic
Last time, we defined formulas for predicate logic, and we did a little translation,
but we should do more. It takes quite a lot of thought and effort to learn how
to say what you mean in predicate logic.
Translate the following, letting D—dog, f —Fido, and letting L be a two-place
predicate, where Lxy means x likes y.
1. Fido likes everyone.
Syntax requires us to put the quantifier first, and introduce a variable. We must
write something like, “For all x, Fido likes x”.
(∀x) Lf x.
2. Someone likes Fido.
We write, “There exists x such that x likes Fido”.
(∃x) Lxf .
3. Fido likes everyone who likes him.
Putting the quantifier first, we say “For all x, if x likes Fido, then Fido likes x”.
(∀x) (Lxf → Lf x).
4. To say “all dogs have fleas”, we write (∀x) (Dx → F x).
5. To say “some dogs have fleas”, we write (∃x) (Dx & F x).
Let us add 1-place predicates G—girl, B—boy, S—sport.
6. To say “every girl likes some sport”, we may write
(∀x) (Gx → (∃y) (Sy & Lxy)), or (∀x) (∃y) (Gx → (Sy & Lxy)).
7. To say “some girl likes nothing but sports”, we write
(∃x) (Gx & (∀y) (Lxy → Sy)), or (∃x) (∀y) (Gx & (Lxy → Sy)).
8. To say “some girl likes Fido”, we write (∃x) (Gx & Lxf ).
9. To say “every sport is liked by some boy”, we may write
(∀x) (Sx → (∃y) (By & Lyx)), or (∀x) (∃y) (Sx → (By & Lyx)).
To say that everything with one property also has a second property, we
write (∀x) if x has the first property, then x has the second property.
Example 2.3.1. To translate “All men are mortal”, we wrote (∀x)(M x → T x).
Similarly, to translate “All girls like Fido”, we write (∀x) (Gx → Lxf ). To
translate “All girls like all dogs”, we write (∀x) (Gx → (∀y) (Dy → Lxy)).
To say that something with one property also has a second property, we
write (∃x) (such that) x has the first property and x has the second property.
Example 2.3.2. To translate “Some dogs have fleas”, we wrote (∃x) (Dx & F x).
Similarly, to translate “Some girls like Fido”, we write (∃x) (Gx & Lxf ). To
translate “Some girls like all dogs”, we write (∃x) (Gx & (∀y) (Dy → Lxy)).
2.3.1 More translation
The order of quantifiers matters. Think about the difference between the following statements.
1. Everyone likes someone.
2. Someone likes everyone.
Using the 2-place predicate symbol L—the first likes the second—we translate as follows.
1. (∀x) (∃y) Lxy
2. (∃x) (∀y) Lxy
We could add more variants.
3. Everyone is liked by someone—(∀x) (∃y) Lyx
4. Someone is liked by everyone—(∃x) (∀y) Lyx.
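A small finite example makes the difference vivid. In the two-person world sketched below in Python (the names and the likes relation are invented for the illustration), statement 1 is true while statement 2 is false:

```python
# A two-person world where "everyone likes someone" holds but
# "someone likes everyone" fails.  L is the set of (liker, liked) pairs.
people = ["ann", "bob"]
L = {("ann", "bob"), ("bob", "ann")}   # each likes the other, not self

forall_exists = all(any((x, y) in L for y in people) for x in people)
exists_forall = any(all((x, y) in L for y in people) for x in people)
print(forall_exists, exists_forall)   # True False
```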
We can use equality to say that two things are the same, or different, or to
indicate the number of things with some property.
Examples. Let H—property of being here, and let the other symbols be as
above.
1. To say that Fido is the only dog here, we want to say that Fido is a dog who
is here, and also that any dog who is here must be Fido:
(∀x) ((Dx & Hx) ↔ x = f ) .
2. To say that at least two girls are here, we may write
(∃x) (∃y) (((Gx & Hx) & (Gy & Hy)) & ¬x = y) .
The ¬x = y is necessary to ensure that the two girls are distinct.
We could also have written
(∃x) (∃y) (¬x = y & (∀z) ((z = x ∨ z = y) → (Gz & Hz))) .
This second phrasing will be helpful to us for the next example.
3. To say that exactly two girls are here, we may write
(∃x) (∃y) (¬x = y & (∀z) ((Gz & Hz) ↔ (z = x ∨ z = y))) .
Without the trick from the second phrasing of the previous question, we
could have written something like
(∃x) (∃y) ((¬x = y & ((Gx & Hx) & (Gy & Hy))) & (∀z) (¬(z = x ∨ z = y) → (¬Gz ∨ ¬Hz))) .
In this version, we explicitly state: “First of all, x and y are different things.
Also, both x and y are girls that are here. Finally, for any z if z is something
other than x or y, then either z is not a girl, or z is not here.”
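Sentences like these can be tested mechanically over a small finite domain. The following sketch is our own illustration (the domain and the sets G and H are invented); it brute-forces the "exactly two girls are here" sentence from above.

```python
from itertools import product

# Brute-force check of (∃x)(∃y)(¬x=y & (∀z)((Gz & Hz) ↔ (z=x ∨ z=y)))
# over a small invented domain.
domain = {"ann", "bea", "cal", "rex"}
G = {"ann", "bea", "cal"}   # girls
H = {"ann", "bea", "rex"}   # things that are here

def exactly_two_girls_here():
    return any(
        x != y and all((z in G and z in H) == (z == x or z == y)
                       for z in domain)
        for x, y in product(domain, repeat=2)
    )

print(exactly_two_girls_here())  # True: exactly ann and bea are girls here
```

Exactly two members of the domain (ann and bea) are girls who are here, so the sentence comes out true; removing one of them from H would make it false.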
2.4 Free variables and sentences
Some wff's say something about one or more variables. Other wff's just make
an assertion about the properties, relations, and special individuals named by
our predicates and constants. Think of variables as ranging over the natural
numbers 0, 1, 2, . . .. Consider the 2-place predicate symbol Lxy, for x < y, i.e.,
x is less than y.
Example 2.4.1. (Sample formulas to consider)
1. Lxy
This formula says something about x and y. It is true if we plug in 3 for x
and 7 for y. It is not true if we plug in 4 for x and 2 for y.
2. (∃x) Lxy
This formula asserts that there exists something that is less than y. It is
true of 3, and not true of 0.
3. (∀x) (∃y) Lxy
This formula says that there is no greatest number; i.e., for any number,
there is a larger one. It is a statement about the number system as a whole,
and it is true of the natural numbers.
4. (∀y) (∃x) Lxy
This formula says that for each number, there is a smaller one. It is also a
statement about the number system as a whole, and it is false in the natural
numbers. On the other hand, if our variables were ranging over the integers (as
opposed to the natural numbers) then the formula would be true.
2.4.1 Scope and free variables
We want to be able to describe which variables a formula talks about. We also
want a word to distinguish between formulas that say something about one or
more variables, and those that assert something about the whole system—a
number system or something else.
2.4.1. The scope of a quantifier (∀v) or (∃v) (really, of a particular occurrence
of the quantifier) is the smallest complete wff occurring immediately to the
right of the quantifier. Here v can be any variable.
Example 2.4.2. Let G be a one-place predicate, and let M be a two-place
predicate.
1. In the formula (∀x)(Gx → (∃y)M xy), the scope of (∀x) is (Gx → (∃y)M xy),
and the scope of the quantifier (∃y) is M xy.
2. In the formula (M xz & (∀z)(M zy ∨ M yz)), the scope of the quantifier
(∀z) is (M zy ∨ M yz).
3. In the formula ((∃y)(Gy & M xy) ∨ Gx), the scope of the quantifier (∃y)
is (Gy & M xy).
4. In the formula (∀x)(Gx → ((∃y)Gy & M xz)), the scope of the quantifier
(∀x) is (Gx → ((∃y)Gy & M xz)), and the scope of (∃y) is Gy.
5. In the formula ((∃x)Gx & (∃x)¬Gx), the scope of the first quantifier (∃x)
is Gx, and the scope of the second (∃x) is ¬Gx.
2.4.2.
1. An occurrence of a variable v is said to be bound in the formula F if it is
in the scope of a quantifier (∀v) or (∃v).
2. An occurrence of a variable v that is not bound is said to be free.
3. A sentence is a wff in which no variable occurs freely.
Example 2.4.3. For each of the wff's in Example 2.4.1, say whether it is a
sentence, and if not, give the variables that occur freely.
1. Lxy is not a sentence—x and y occur freely.
2. (∃x) Lxy is not a sentence—y occurs freely. If Lxy is as above, this says that
there is some number less than y. Note that (∃z) Lzy says the same thing about
y; the particular choice of variable being quantified doesn’t matter, so long as
it is different from y.
3. (∀x) (∃y) Lxy is a sentence. If Lxy is as above, this says that there is no
greatest number. Note that (∀y) (∃z) Lyz says the same thing.
4. (∀y) (∃x) Lxy is a sentence. If Lxy is as above, it says there is no smallest
number. Again, we could say the same thing using different variables.
Example 2.4.4. For each of the following, say whether it is a sentence, and if
not, give the free variables.
1. (Gx & (∃x) P xy) is not a sentence - y is a free variable, as is the first x.
2. (Ga & (∀x) P xb) is a sentence.
3. (∀y)((∃x)Gx → P xy) is not a sentence - the last x is a free variable.
4. (∃x)(P xb ∨ (∀y)(Gy & P xy)) is a sentence.
In general, a wff with free variables says something about the variables. A
sentence makes an assertion about a whole system. We will give meaning to a
formula by thinking of the variables as ranging over a certain class of objects,
letting the predicates stand for certain relations, and letting the constants name
specific objects. A wff with free variables may be “satisfied” by assignments
of certain objects for the free variables, and not others. A sentence will just be
“true” or “false”. We will make this precise later.
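Although satisfaction will only be made precise later, the definition of free variables is already precise enough to compute. The sketch below is our own illustration, not from the text: it encodes wff's as nested tuples (the encoding, the convention that the letters u through z are variables, and names like free_vars are our choices) and collects the free variables exactly as the definition prescribes.

```python
# Compute the free variables of a formula, following the definition:
# an occurrence of v is bound if it lies in the scope of (∀v) or (∃v),
# and free otherwise.  Illustrative encoding:
#   ("atom", "L", "x", "y")  -- Lxy; arguments among u..z are variables,
#                               anything else is a constant
#   ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)
#   ("all", "x", f), ("exists", "x", f)

VARIABLES = set("uvwxyz")  # our convention for variable letters

def free_vars(formula):
    tag = formula[0]
    if tag == "atom":
        return {t for t in formula[2:] if t in VARIABLES}
    if tag == "not":
        return free_vars(formula[1])
    if tag in ("and", "or", "imp"):
        return free_vars(formula[1]) | free_vars(formula[2])
    if tag in ("all", "exists"):
        return free_vars(formula[2]) - {formula[1]}
    raise ValueError(f"bad formula: {formula!r}")

def is_sentence(formula):
    return not free_vars(formula)

# Example 2.4.1(2): (∃x) Lxy has y free, so it is not a sentence.
f = ("exists", "x", ("atom", "L", "x", "y"))
print(free_vars(f))    # {'y'}
print(is_sentence(f))  # False
```

Running it on Example 2.4.4(1), (Gx & (∃x)Pxy), returns {x, y}, matching the analysis above: the first x and the y are free, while the x inside the quantifier's scope is bound.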
2.4.2 Changing variables
Variables which are introduced by quantifiers may be changed without changing
the meaning of the statement.
Example 2.4.5. We translate the statement “Tom likes some sport” as follows.
(∃x) (Sx & Ltx)
We could change the variable x to y throughout and say the same thing.
(∃y) (Sy & Lty)
Example 2.4.6. We translate the statement “x likes some sport” as follows.
(∃y) (Sy & Lxy)
We could change the variable y to z and say the same thing.
(∃z) (Sz & Lxz)
We cannot change y to x and say the same thing.
Example 2.4.7. Express the relation x is a grandparent of y, using just a
2-place predicate P —the first is a parent of the second.
(∃z) (P xz & P zy)
Exercises 2.4
A. Show, by giving a formation sequence, that each of the following is a well-formed formula of predicate logic.
1. (∃x) (T x & Bsxt)
2. (¬M x → ¬x = s)
B. Translate the following, using 1-place predicates Gx for “x is a girl”, Bx for
“x is a boy”, and Sx for “x is a sport”, a 2-place predicate Lxy for “x likes y”,
and constants m and t for Maria and Tom.
1. Tom is a boy.
2. Tom likes Maria.
3. Maria likes some sport.
4. Every boy likes some sport.
5. Some girl likes all sports.
C. For each of the following formulas of predicate logic, say whether it is a
sentence, and if not, give the free variables. [Note that G is a 1-place predicate,
F is a 2-place predicate, and m and n are constants.]
1. (∀x) (∃y) (Gx & ¬F xy)
2. ((Gx & (∃y) F xy) → Gx)
3. (Gm & Gn)
4. (F xy → (∃z) (F xz & F zy))
2.5 Toward proofs in predicate logic

2.5.1 More on free variables and sentences
Recall that a variable v occurs freely in a predicate formula if it occurs without
being introduced by a quantifier (∀v) or (∃v)—outside the scope. A sentence is
a formula with no free variables.
Examples
1. The formula (∃x) Lxy is not a sentence, since y occurs freely.
2. The formula (∃x) Lxc is a sentence - note that c is a constant, not a
variable.
3. The formula (Lxy & (∀x) P x) is not a sentence, since x and y both occur
freely.
A formula with free variables says something about those variables. When
we translate into English, the variables are still there. We could change the other
variables without changing the meaning. A sentence asserts something that does
not depend on the choice of variables. Normally, we translate a sentence into
English without mentioning variables.
Example 2.5.1. Let P be a two-place predicate saying that the object in the
first place is a parent of the one in the second place.
1. P xy says that x is a parent of y - x and y are both free here.
2. (∃x) P xy says that y has a parent. Here y is a free variable. The other
variable, x, is not mentioned when we translate into English. We could
write (∃z) P zy and say the same thing.
3. (∃y) P xy says that x is a parent. Here x is a free variable. The other
variable, y, is not mentioned in the translation. We would translate
(∃z) P xz in the same way.
4. (∀y) (∃x) P xy says that everyone has a parent. This formula is a sentence.
The English translation does not mention any variables. We could write
(∀x) (∃y) P yx and say the same thing.
5. (∀x) (∃y) P xy says that everyone is a parent. This formula is also a sentence.
2.5.2 Substituting constants for free variables
We are about to start proofs in predicate logic. In our proofs, we will use
only sentences, not formulas with free variables. We will obtain some of these
sentences by “substituting” constants for free variables. So, let us focus on that
process.
Notation: We write A(x) to indicate that A is a formula with at most x
occurring freely. Then A(c) is the sentence that results from substituting the
constant c for the free occurrences of x in the formula A(x). Similarly, we may
write B(x, y) to indicate that B is a formula with at most x and y occurring
freely. Then B(c, y) is the formula that results from substituting the constant
c for the free occurrences of x. (We do the same thing with other variables and
constants.)
Example 2.5.2. Let A(x) be the formula (F x → Gx). Then A(m) is the
sentence (F m → Gm).
Example 2.5.3. If B(x, y) is the formula (M y & (∃y)Lxy), then B(a, y) is
(M y & (∃y)Lay), and B(a, c) is the sentence (M c & (∃y)Lay).
Note that x and y both occur freely in B(x, y), y occurs free in B(a, y), and
B(a, c) is a sentence.
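The substitution A(x) to A(c) is also mechanical: replace only the free occurrences, stopping whenever the variable is captured by a quantifier. Here is a sketch of that process, again using an invented tuple encoding (the encoding and the name subst are our own, not from the text).

```python
# Substitute a constant for the *free* occurrences of a variable,
# mirroring the A(x) -> A(c) notation.  Illustrative encoding:
#   ("atom", name, arg, ...), ("not", f), ("and", f, g), ("or", f, g),
#   ("imp", f, g), ("all", v, f), ("exists", v, f)

def subst(formula, var, const):
    tag = formula[0]
    if tag == "atom":
        return (tag, formula[1]) + tuple(const if t == var else t
                                         for t in formula[2:])
    if tag == "not":
        return (tag, subst(formula[1], var, const))
    if tag in ("and", "or", "imp"):
        return (tag, subst(formula[1], var, const),
                     subst(formula[2], var, const))
    if tag in ("all", "exists"):
        if formula[1] == var:   # var is bound from here on: stop
            return formula
        return (tag, formula[1], subst(formula[2], var, const))
    raise ValueError(f"bad formula: {formula!r}")

# Example 2.5.3: B(x, y) = (My & (∃y)Lxy), so B(a, y) = (My & (∃y)Lay).
B = ("and", ("atom", "M", "y"), ("exists", "y", ("atom", "L", "x", "y")))
print(subst(B, "x", "a"))  # the free x becomes a; the bound y is untouched
```

Substituting c for y in the result then gives (Mc & (∃y)Lay), matching B(a, c) in the example: only the free occurrence of y in My is replaced.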
2.5.3 Proofs in predicate logic
As in propositional logic, we will use the notation A1 , . . . , An ` B to mean that
there is a proof of the sentence B from the sentences A1 , . . . , An .
Our proof system will include all the rules from propositional logic:
A, MPP, MTT, DN, CP, &I, &E, ∨I, ∨E, RAA.
The only difference is that now we apply the rules to sentences in predicate logic,
rather than to wffs of propositional logic. You need to remember exactly how
the old rules work, with their decorations.
Example 2.5.4. Show that M a, (M a → P a) ` P a.
1      1. Ma             A
2      2. (Ma → Pa)      A
1, 2   3. Pa             MPP 1, 2
Note that every line of this proof is a sentence.
Example 2.5.5. The following is Descartes’ divisibility argument for substance
dualism.
If the mind and body are one and the same substance, they have all
of the same features. The body is divisible. The mind is indivisible.
Therefore, the mind and body must be different substances.
Let S be a two-place predicate—same substance, let D be a one-place predicate—
divisible, and let m and b be constants—for mind and body, respectively. In
translating the first premise, let us focus just on the property of being divisible.
We get an argument for which we can give a proof. Show that
Smb → (Db ↔ Dm), Db, ¬Dm ` ¬Smb .
1             1. Smb → (Db ↔ Dm)        A
2             2. Db                     A
3             3. ¬Dm                    A
4∗            4. Smb                    A∗
1, 4∗         5. (Db ↔ Dm)              MPP 4, 1
1, 4∗         6. (Db → Dm)              &E 5
1, 2, 4∗      7. Dm                     MPP 2, 6
1, 2, 3, 4∗   8. (Dm & ¬Dm)             &I 7, 3
1, 2, 3       9. (Smb → (Dm & ¬Dm))     CP 4, 8
1, 2, 3       10. ¬Smb                  RAA 9
In addition to the ten old rules, for the propositional connectives, we need
new rules, for quantifiers and equality. Here is the first new rule.
Universal Elimination (UE): From a sentence of the form (∀x) A(x), we may
conclude A(c) for any constant c. (We could use any variable, not just x in the
assumption.) In other words, for any constant c we have (∀x) A(x) B A(c).
Decorations for U E. On the right, we give the line where we have (∀x) A(x).
On the left, we give the assumptions behind (∀x) A(x).
Example 2.5.6. Recall the famous syllogism.
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
We may translate the premises as (∀x) (M x → Dx) and M s, and we may
translate the conclusion as Ds. Show that (∀x) (M x → Dx), M s ` Ds.
1      1. (∀x) (Mx → Dx)    A
2      2. Ms                A
1      3. (Ms → Ds)         UE 1
1, 2   4. Ds                MPP 2, 3
Here are some more examples.
Example 2.5.7. Show that (∀x) (∀y) (F xy → F yx), F mn ` F nm.
1      1. (∀x) (∀y) (Fxy → Fyx)    A
2      2. Fmn                      A
1      3. (∀y) (Fmy → Fym)         UE 1
1      4. (Fmn → Fnm)              UE 3
1, 2   5. Fnm                      MPP 2, 4
Example 2.5.8. Show that ¬Gm ` ¬(∀x) Gx.
1       1. ¬Gm                        A
2∗      2. (∀x) Gx                    A∗ (use RAA)
2∗      3. Gm                         UE 2
1, 2∗   4. (Gm & ¬Gm)                 &I 3, 1
1       5. ((∀x) Gx → (Gm & ¬Gm))     CP 2, 4
1       6. ¬(∀x) Gx                   RAA 5
Exercises 2.5
Part A. Translate each of the following into idiomatic English, letting A be a
one-place predicate for “is American”, and letting L be a two-place predicate
such that Lxy means “x likes y”. (You may assume quantifiers range over all
people, so (∀x) means “for all people x”.)
1. (∃x)(Ax & (∀y)(¬Ay → Lxy))
2. (∀x)(∀y)((Ax & ¬Ay) → (¬Lxy & ¬Lyx))
3. (∀x)(Ax → (∀y)Lyx)
4. (∀x)(Ax → (∀y)Lxy)
Part B. Prove each of the following.
1. Dm, (∀x) (Dx → Bx) ` Bm
2. Bm, (∀x) (Dx → ¬Bx) ` ¬Dm
3. ¬Dm, (∀x) (Hx → Dx) ` ¬Hm
4. (∀x) ((M x & N x) → Sx), ¬Sn, M n ` ¬N n
5. (∀x) ((F x & ¬P x) → Kx), F m, ¬Km ` P m
2.6 Proving universal statements
Last time, we started on proofs in predicate logic. These look much like proofs
in propositional logic, except that the statement on each line is a sentence of
predicate logic, not a formula of propositional logic. We use the ten basic rules
from propositional logic, plus some new ones, for the quantifiers and equality.
We introduced one new rule.
Universal Elimination (UE). For a predicate formula A(x), with at most x
occurring free, if we have (∀x) A(x), we may conclude A(c)—similarly, for other
variables instead of x, and other constants instead of c.
For the decorations, on the right, we refer to the step where we have (∀x) A(x),
and on the left, we give the assumptions behind that step.
Example 2.6.1. (∀y) P y, (∀z) Qz ` (P s & Qs)
1      1. (∀y) Py      A
2      2. (∀z) Qz      A
1      3. Ps           UE 1
2      4. Qs           UE 2
1, 2   5. (Ps & Qs)    &I 3, 4
Example 2.6.2. (∀x) (P x ∨ Qx), (∀x) (P x → Qx) ` Qc.
1      1. (∀x) (Px ∨ Qx)    A
2      2. (∀x) (Px → Qx)    A
1      3. (Pc ∨ Qc)         UE 1
2      4. (Pc → Qc)         UE 2
5∗     5. Qc                A∗
       6. (Qc → Qc)         CP 5, 5
1, 2   7. Qc                ∨E 3, 4, 6
Here is the second new rule, a companion to Universal Elimination.
Universal Introduction (UI): To prove (∀x) A(x), it is enough to prove A(c),
where c is an “arbitrary” constant. A constant is arbitrary if it does not appear
in any of the assumptions that we use for A(c), or in (∀x) A(x). We can write
this rule as: if c is arbitrary, then A(c) B (∀x) A(x).
For the decorations, on the right, we refer to the step where we have A(c),
and on the left, we give the assumptions behind that step.
Question: Why is the rule U I sound? If we can prove A(c), where c is arbitrary,
we have not used anything special about c, and our proof would work for any c.
The rule U I is used in proving statements of the form (∀x) A(x). Like CP ,
RAA, or ∨E, it comes with a strategy. To prove (∀x) A(x), we do the following.
1. Choose an arbitrary constant c (a new one, not in our assumptions, not in
the statement (∀x)A(x) that we are trying to prove, and not used anywhere else
so far).
2. Prove A(c).
3. Apply U I to get (∀x) A(x).
The rule U I should be familiar from arguments given in English. When
we are talking about people, “Jane Doe” is a common choice for the arbitrary
constant.
Example 2.6.3. Show that (∀x) F x, (∀x) (F x → Gx) ` (∀x)Gx.
1      1. (∀x) Fx           A
2      2. (∀x) (Fx → Gx)    A
1      3. Fc                UE 1 (c is an arbitrary constant)
2      4. (Fc → Gc)         UE 2
1, 2   5. Gc                MPP 3, 4
1, 2   6. (∀x) Gx           UI 5
Example 2.6.4. Show that (∀x) (F x → Gx) ` ((∀x) F x → (∀x) Gx).
1       1. (∀x) (Fx → Gx)         A
2∗      2. (∀x) Fx                A∗ (prove (∀x) Gx, use CP; prove Gc, use UI)
1       3. (Fc → Gc)              UE 1
2∗      4. Fc                     UE 2
1, 2∗   5. Gc                     MPP 3, 4
1, 2∗   6. (∀x) Gx                UI 5
1       7. ((∀x) Fx → (∀x) Gx)    CP 2, 6
Example 2.6.5. Show that (∀x) (F x → Gx) ` ((∀x) ¬Gx → (∀x) ¬F x).
1       1. (∀x) (Fx → Gx)           A
2∗      2. (∀x) ¬Gx                 A∗
1       3. (Fc → Gc)                UE 1
2∗      4. ¬Gc                      UE 2
1, 2∗   5. ¬Fc                      MTT 3, 4
1, 2∗   6. (∀x) ¬Fx                 UI 5
1       7. ((∀x) ¬Gx → (∀x) ¬Fx)    CP 2, 6
We often use the two rules U E and U I together in a proof. We strip off
universal quantifiers using U E, then apply propositional rules to re-combine the
parts, and then put the universal quantifiers back using U I.
Example 2.6.6. Show that (∀x) (∀y) Gxy ` (∀x) Gxx.
1   1. (∀x) (∀y) Gxy    A
1   2. (∀y) Gcy         UE 1
1   3. Gcc              UE 2
1   4. (∀x) Gxx         UI 3
Example 2.6.7. Show that (∀x) (∀y) Gxy ` (∀y) (∀x) Gxy.
1   1. (∀x) (∀y) Gxy    A
1   2. (∀y) Gcy         UE 1
1   3. Gcd              UE 2
1   4. (∀x) Gxd         UI 3
1   5. (∀y) (∀x) Gxy    UI 4
Example 2.6.8. Show that (∀x) (F x ∨ Gx), (∀x) ¬F x ` (∀x) Gx
1       1. (∀x) (Fx ∨ Gx)        A
2       2. (∀x) ¬Fx              A
1       3. (Fc ∨ Gc)             UE 1
4∗      4. Fc                    A∗
2       5. ¬Fc                   UE 2
6∗      6. ¬Gc                   A∗
2, 4∗   7. (Fc & ¬Fc)            &I 4, 5
2, 4∗   8. (¬Gc → (Fc & ¬Fc))    CP 6, 7
2, 4∗   9. ¬¬Gc                  RAA 8
2, 4∗   10. Gc                   DN 9
2       11. (Fc → Gc)            CP 4, 10
12∗     12. Gc                   A∗
        13. (Gc → Gc)            CP 12, 12
1, 2    14. Gc                   ∨E 3, 11, 13
1, 2    15. (∀x) Gx              UI 14
Exercises 2.6
Prove each of the following.
1. (∀x) (F x → Gx), (∀x) (Gx → ¬Hx) ` (∀x) (F x → ¬Hx)
2. (∀x) (F x → ¬Gx), (∀x) (Hx → Gx) ` (∀x) (F x → ¬Hx)
3. (∀x) (Gx → ¬F x), (∀x) (Hx → Gx) ` (∀x) (F x → ¬Hx)
4. (∀x) (F x → Gx) ` ((∀x) F x → (∀x) Gx)
5. (∀x) ((F x ∨ Gx) → Hx), (∀x)¬Hx ` (∀x) ¬F x
2.7 More on proofs in predicate logic
Recall that the proof system for predicate logic includes the ten rules that
we used in propositional logic. There are new rules, for the quantifiers, and
for equality. So far, we have two new rules, both dealing with the universal
quantifier ∀.
U E says that from (∀x) A(x), we get A(c). There are no restrictions on the
constant c. This rule lets us use a piece of information that starts with a
universal quantifier.
U I says that to prove (∀x) A(x), it is enough to prove A(c), where c is an
“arbitrary” constant. The constant c counts as arbitrary if it does not appear in
the assumptions behind A(c), and it does not appear in the statement (∀x) A(x).
This rule lets us prove statements of the form (∀x) A(x). To prove such a
statement, we use the following strategy.
1. Choose a new constant c.
2. Prove A(c).
3. Apply U I to get (∀x) A(x).
Example 2.7.1. Show that ((∀x) F x ∨ (∀x) Gx) ` (∀x) (F x ∨ Gx). To prove
(∀x) (F x ∨ Gx), we choose a new constant c and prove (F c ∨ Gc).
1     1. ((∀x) Fx ∨ (∀x) Gx)       A
2∗    2. (∀x) Fx                   A∗
2∗    3. Fc                        UE 2
2∗    4. (Fc ∨ Gc)                 ∨I 3
      5. ((∀x) Fx → (Fc ∨ Gc))     CP 2, 4
6∗    6. (∀x) Gx                   A∗
6∗    7. Gc                        UE 6
6∗    8. (Fc ∨ Gc)                 ∨I 7
      9. ((∀x) Gx → (Fc ∨ Gc))     CP 6, 8
1     10. (Fc ∨ Gc)                ∨E 1, 5, 9
1     11. (∀x) (Fx ∨ Gx)           UI 10
Example 2.7.2. Show that ` ((∀x) (∀y) Rxy → (∀x) Rxc)
1∗   1. (∀x) (∀y) Rxy                 A∗ (Choose a new constant a, prove Rac.)
1∗   2. (∀y) Ray                      UE 1
1∗   3. Rac                           UE 2
1∗   4. (∀x) Rxc                      UI 3
     5. ((∀x) (∀y) Rxy → (∀x) Rxc)    CP 1, 4
Example 2.7.3. Is the following a good proof for Em ` (∀x) Ex?
1   1. Em          A
1   2. (∀x) Ex     UI 1
Line 1 is an assumption in which m appears, so m is not an arbitrary constant.
We are not justified in applying U I on line 2.
2.7.1 Rules for existential quantifiers
We turn to the rules for existential quantifiers. The first is for proving statements
that start with an existential quantifier.
Existential Introduction (EI)
If we have A(c), we can conclude (∃x) A(x), where A(c) is the result of substituting c for the free occurrences of x in A(x). We can write this rule
A(c) B (∃x) A(x), for any c. There are no restrictions on the constant. We
allow the possibility that c occurs in A(x). When we are looking at the formula
A(c), we may take A(x) to be any formula that yields the appropriate A(c)
when we substitute c for the free occurrences of x.
The idea behind the rule is clear. If we believe that A(c) holds for some constant
c, then we believe (∃x) A(x).
Decorations
On the right, we give the line of A(c), and on the left, we put the assumptions
behind A(c).
Example 2.7.4. Show that (∀x) F x ` (∃x) F x
1   1. (∀x) Fx    A
1   2. Fc         UE 1
1   3. (∃x) Fx    EI 2
Example 2.7.5. Show (∀x) P xc ` (∀x) (∃y) P xy.
1   1. (∀x) Pxc         A
1   2. Pac              UE 1
1   3. (∃y) Pay         EI 2
1   4. (∀x) (∃y) Pxy    UI 3
Example 2.7.6. Ecc ` (∃x) Ecx
1   1. Ecc         A
1   2. (∃x) Ecx    EI 1
Example 2.7.7. (∀x) Ecx ` Ecc
1   1. (∀x) Ecx    A
1   2. Ecc         UE 1
The next new rule lets us use information that starts with an existential
quantifier.
Existential Elimination (EE)
If we have (∃x) A(x), and we can prove (A(c) → B), where c is “arbitrary”,
then we may conclude B. The constant c is arbitrary if it does not appear
in (∃x) A(x) or B or any of the premises used for (∃x) A(x) or (A(c) → B).
Specifically, if c is arbitrary, we have the rule (∃x) A(x), (A(c) → B) B B.
The idea behind EE is natural. If we have (∃x) A(x), we let c be an arbitrary
witness satisfying A(x). If B follows from A(c), then it follows from (∃x) A(x).
Strategy for EE (Proof using an arbitrary witness)
Suppose we have (∃x) A(x), and we want to prove B. We proceed as follows.
1. Assume A(c), where c is a new constant, not in B or (∃x) A(x), or in any
of our assumptions.
2. Prove B, probably using A(c).
3. Obtain (A(c) → B) using CP.
4. Apply EE to conclude B.
Decorations
On the right, we give the line where we have (∃x) A(x), and the line where we
have (A(c) → B).
On the left, we give the assumptions behind (∃x) A(x) and (A(c) → B).
Example 2.7.8. Show that (∃x) F x, (∀x) (F x → Gx) ` (∃x) Gx.
1       1. (∃x) Fx           A
2       2. (∀x) (Fx → Gx)    A
3∗      3. Fc                A∗ (use EE)
2       4. (Fc → Gc)         UE 2
2, 3∗   5. Gc                MPP 3, 4
2, 3∗   6. (∃x) Gx           EI 5
2       7. (Fc → (∃x) Gx)    CP 3, 6
1, 2    8. (∃x) Gx           EE 1, 7
Exercises 2.7
Give proofs of the following.
1. (∀x) (F x → Gx), (∃x) ¬Gx ` (∃x) ¬F x
2. (∀x) (F x → (Gx & Hx)), (∃x) F x ` (∃x)Hx
3. (∀x) ((F x ∨ Gx) → Hx), (∃x)¬Hx ` (∃x)¬F x
4. (∀x)(Hx → ¬Gx), (∃x) (F x & Gx) ` (∃x) (F x & ¬Hx)
5. (∃x) (Gx & Hx), (∀x) (Gx → F x) ` (∃x) (F x & Hx)
6. ¬(∃x) (P x & ¬W x), P a ` W a
7. ((∃x) Bx → Be) ` ((∃x) Bx ↔ Be)
8. ((∃x) F x → (∀x) (F x → Gxa)), ((∃x) Hx → (∀x) (Hx → Jxa))
` (∃x) (F x & Hx) → (∃x)(∃y)(Gxy & Jxy)
2.8 More examples
We have two rules for universal quantifiers, and two rules for existential quantifiers.
Universal Elimination (U E). Given (∀x) A(x), we get A(c). There is no
restriction on c. It may appear in (∀x) A(x), or in assumptions.
Universal Introduction (U I). To prove (∀x) A(x), it is enough to show A(c),
where c is “arbitrary”. This means that c is not in (∀x) A(x) or in any of the
assumptions behind A(c).
Existential Introduction (EI). Given A(c), we get (∃x) A(x). There is no
restriction on c. It may appear in (∃x) A(x), or in assumptions.
Existential Elimination (EE)—proof with an arbitrary witness. If we have
(∃x) A(x), and we can prove (A(c) → B), where c is “arbitrary”, then we may
conclude B. Here c is “arbitrary” if it does not appear in (∃x) A(x), B, or any
of the assumptions used for (∃x) A(x) or (A(c) → B).
Example 2.8.1. Show that (∃x) Sx, (∀x) (¬T x → ¬Sx) ` (∃x) T x.
1       1. (∃x) Sx            A
2       2. (∀x) (¬Tx → ¬Sx)   A
3∗      3. Sc                 A∗
2       4. (¬Tc → ¬Sc)        UE 2
3∗      5. ¬¬Sc               DN 3
2, 3∗   6. ¬¬Tc               MTT 4, 5
2, 3∗   7. Tc                 DN 6
2, 3∗   8. (∃x) Tx            EI 7
2       9. (Sc → (∃x) Tx)     CP 3, 8
1, 2    10. (∃x) Tx           EE 1, 9
Example 2.8.2. Show that (∀x) (∃y) Lxy ` (∃x) Lcx
1    1. (∀x) (∃y) Lxy        A
1    2. (∃y) Lcy             UE 1
3∗   3. Lca                  A∗
3∗   4. (∃x) Lcx             EI 3
     5. (Lca → (∃x) Lcx)     CP 3, 4
1    6. (∃x) Lcx             EE 2, 5
Example 2.8.3. (∃x) (∀y) Lxy ` (∃x) Lxc
1    1. (∃x) (∀y) Lxy            A
2∗   2. (∀y) Lay                 A∗ (use EE)
2∗   3. Lac                      UE 2
2∗   4. (∃x) Lxc                 EI 3
     5. ((∀y) Lay → (∃x) Lxc)    CP 2, 4
1    6. (∃x) Lxc                 EE 1, 5
Example 2.8.4. (∃x) (∀y) Lxy ` (∀y) (∃x) Lxy
1    1. (∃x) (∀y) Lxy                  A
2∗   2. (∀y) Lay                       A∗ (use EE)
2∗   3. Lac                            UE 2
2∗   4. (∃x) Lxc                       EI 3
2∗   5. (∀y) (∃x) Lxy                  UI 4
     6. ((∀y) Lay → (∀y) (∃x) Lxy)     CP 2, 5
1    7. (∀y) (∃x) Lxy                  EE 1, 6
Example 2.8.5. What is wrong with the following proof of
(∀y) (∃x) Lxy ` (∃x) (∀y) Lxy?
1    1. (∀y) (∃x) Lxy             A
1    2. (∃x) Lxc                  UE 1
3∗   3. Lac                       A∗
3∗   4. (∀y) Lay                  UI 3
3∗   5. (∃x) (∀y) Lxy             EI 4
     6. (Lac → (∃x) (∀y) Lxy)     CP 3, 5
1    7. (∃x) (∀y) Lxy             EE 2, 6
Line 4 is not justified. The constant c is not arbitrary. It occurs in 3, which is
an assumption used to obtain line 4.
One way to think of this is that when we originally chose the constant c, it
was arbitrary. But once we assume Lac at line 3, the constant c is tied to that
assumption: a was chosen as a specific witness that makes Lac true, and we
have no reason to believe that this same a would work for other values of y. So
at line 4, which depends on line 3, c is no longer arbitrary, and we cannot
generalize on it.
Example 2.8.6. What is wrong with the following proof of
M s, (∃x) W x ` (∃x) (M x & W x)?
1       1. Ms                         A
2       2. (∃x) Wx                    A
3∗      3. Ws                         A∗
1, 3∗   4. (Ms & Ws)                  &I 1, 3
1, 3∗   5. (∃x) (Mx & Wx)             EI 4
1       6. (Ws → (∃x) (Mx & Wx))      CP 3, 5
1, 2    7. (∃x) (Mx & Wx)             EE 2, 6
Line 7 is not justified. The constant s is not arbitrary. It occurs in 1, which is an
assumption used to obtain line 6. In particular, we assumed M s and (∃x) W x,
but we do not know that W s necessarily holds.
2.8.1 A complex argument
Here is an example of a rather complex argument.
The trinity is the Father, Son, and Holy Spirit. The trinity is not
the Father, the Son, or the Holy Spirit. The trinity is complete if
and only if the Father, Son, and Holy Spirit are united. The trinity
is incomplete if the three persons are not united.
Translating this is quite a challenge. It is difficult to see what to do with
“is”. First, let’s translate it into propositional logic. We might think of the first
premise as saying that if we have the trinity, then we must have the Father, the
Son, and the Holy Spirit. We might think of the second premise as saying that
if we have the Father, and we do not also have the Son and the Holy Spirit, or
if we have the Son, and we do not have the Father and the Holy Spirit, or if we
have the Holy Spirit, and we do not also have the Father and the Son, then we
do not have the trinity. If we think this way, the first conclusion is that we have
the trinity if and only if we have the Father, the Son, and the Holy Spirit. The
second conclusion is that if it is not the case that we have the Father, the Son,
and the Holy Spirit, then we do not have the complete trinity. Let T —we have
the complete trinity, F —we have the Father, S—we have the Son, H—we have
the Holy Spirit. We write the following.
Premises:
T → (F & S & H)
((F & ¬(S & H)) ∨ (S & ¬(F & H)) ∨ (H & ¬(F & S))) → ¬T

Conclusions:
T ↔ (F & S & H)
¬(F & S & H) → ¬T
This translation into propositional logic captures the idea that we have the
trinity just in case we have all three parts. It does not attempt to say what
the trinity is or is not. Let us try using predicate logic. We think of the first
premise as saying that the trinity has the properties of being the Father, the
Son, and the Holy Spirit. The second premise may say that anything with one
of the properties and not both of the others is not the trinity. If we think this
way, the first conclusion says that anything is the trinity if and only if it has all
three properties. The second conclusion says that anything that does not have
all three properties is not the trinity. Let t be a constant for the trinity and
predicates F , S, and H for the properties of being the Father, the Son, and the
Holy Spirit. We write the following.
Premises:
(Ft & St & Ht)
(∀x) (((Fx & ¬(Sx & Hx)) ∨ (Sx & ¬(Fx & Hx)) ∨ (Hx & ¬(Fx & Sx))) → ¬x = t)

Conclusions:
(∀x) (x = t ↔ (Fx & Sx & Hx))
(∀x) ((¬Fx ∨ ¬Sx ∨ ¬Hx) → ¬x = t)
This translation into predicate logic captures the idea that the trinity has all
three aspects. It does not say that these are united. We might use constants
f, s, and h, in addition to t, together with a predicate U (x, x′, x′′, y) saying
that y is the result of uniting x, x′, and x′′. Try it, if you are interested.
2.9 What comes next?
Our goal in the course is to develop some formal tools for analyzing arguments.
In propositional logic, we said what was a propositional formula, we translated,
described a proof system, and defined truth, with the properties of Soundness
and Completeness. Together, these properties say that an argument is valid
(in the truth sense) just in case there is a proof of the conclusion from the
premises. Soundness says that if we can find a proof, then our argument is
valid. Completeness says that if our argument is valid, then there is a proof. In
propositional logic, we can use truth tables or an abbreviated truth analysis to
effectively decide whether an argument is valid or not.
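The truth-table decision procedure for propositional logic can be sketched mechanically: an argument is valid just in case no row of the table makes every premise true and the conclusion false. The following is our own illustration (the helper valid and the encoding of formulas as Python functions on a row are invented for this sketch).

```python
from itertools import product

# Brute-force validity check: enumerate every truth assignment to the
# propositional variables, and look for a row with all premises true
# and the conclusion false.
def valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # counterexample row found: invalid
    return True

# Modus ponens: P, (P → Q) ⊢ Q is valid.
print(valid([lambda r: r["P"], lambda r: (not r["P"]) or r["Q"]],
            lambda r: r["Q"], ["P", "Q"]))   # True
# Affirming the consequent: (P → Q), Q ⊢ P is not.
print(valid([lambda r: (not r["P"]) or r["Q"], lambda r: r["Q"]],
            lambda r: r["P"], ["P", "Q"]))   # False
```

For predicate logic no such exhaustive check is possible in general, since structures can be infinite; this is part of why the proof system matters.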
In predicate logic, we have said what was a predicate formula, we have done
some translation, and we have a good part of the proof system, with all of the
rules we need for formulas without equality. The set of rules so far has the
Soundness Property. It would have the Completeness Property if we restricted
our attention to sentences that do not involve equality. We will add rules for
equality later.
2.9.1 Structures
We are about to define truth for predicate logic. In propositional logic, we
used different propositional languages, with different propositional variables,
depending on what we were trying to express. Sometimes, we used just P and
Q. Sometimes, we used F , C, R, and D. In propositional logic, when we defined
truth, we started with a truth assignment, saying which of the propositional
variables are true and which are false. The definition of truth gave rules that
let us calculate the truth value for any formula in our language, given a truth
assignment.
In predicate logic, we use different predicate languages, with different predicate symbols and constants, standing for the properties, relations, and special
individuals we want to describe. We define the notion of a “structure”—roughly
corresponding to the basic truth assignments in propositional logic.
Definition. For simplicity, we start with a very simple predicate language, with
just one 1-place predicate symbol, P . A structure for this language is a pair
A = (A, P ∗ ), where A is a non-empty set, the universe, and P ∗ is a subset of
A, the interpretation of P . Note that we will often use Greek letters to denote
elements of the universe of a structure.
Example 2.9.1. Let A = (A, P ∗ ), where A is the set of all students in this
class, and P ∗ is the set of those wearing glasses.
Example 2.9.2. Consider the structure A = (N, P ∗ ), where N is the set of
natural numbers, and P ∗ is the set of even numbers.
Example 2.9.3. Consider the structure A = (A, P ∗ ), where A = {α, β}, and
P ∗ = {α}.
Suppose that our language had more symbols. A structure would have to
provide interpretations for all of them. For the language with two 1-place predicate symbols, P and Q, a structure has the form (A, P ∗ , Q∗ ), where P ∗ and Q∗
are both subsets of A. For example, A might be the set of all people, P ∗ might
be the set of men, and Q∗ might be the set of mortals. For languages with a
2-place predicate symbol R, a structure includes an interpretation R∗ , where
R∗ is a set of ordered pairs (α, β) of elements of A. We might let A be the set of
all people, and let R∗ be the set of ordered pairs (α, β) such that α is a parent
of β. If our language includes constant symbols c and s, then a structure would
include interpretations c∗ and s∗ , which would be elements of the universe.
2.9.2 Satisfaction and truth
You already have a good intuitive understanding of truth. For a sentence, such
as (∀x) P x or (∃x) P x, we ask whether it is true or false in a given structure.
For the structure in Example 2.9.1, the sentence (∃x) P x is true—there are
students who wear glasses. The sentence (∀x) P x is false in this structure—
there are students who don’t wear glasses. For a formula with free variables,
such as (P x & ¬P y), we will ask whether it is satisfied in a given structure
when we substitute particular elements of the universe for the free variables x
and y.
For the structure in Example 2.9.2, the formula (P x & ¬P y) is satisfied when
we substitute 2 for x, and 3 for y. It is not satisfied when we substitute 3 for x,
and 2 for y. For the structure in Example 2.9.3, the formula P x is satisfied when
we substitute α for x, and not when we substitute β. The sentence (∃x) P x is
true in the structure, while the sentence (∀x) P x is false in the structure.
For simplicity, we again consider the predicate language with just one 1-place
predicate symbol P . For this language, a structure A has the form (A, P ∗ ),
where A, the universe of the structure, is a non-empty set and P ∗ , the interpretation of the symbol P , is a subset of A.
Notation: For a structure A = (A, P ∗ ) and formula F (x1 , . . . , xn ) with free
variables among x1 , . . . , xn , and elements α1 , . . . , αn of A, we write A |= F (α1 , . . . , αn )
to mean that the formula is satisfied when we substitute αi for xi .
Set notation.
1. We write α ∈ A to mean that α is an element of A.
2. We write ∅ for the set with no elements.
3. For sets X and Y write X ⊆ Y to mean that every element of X is an
element of Y .
4. We write A = {α, β, γ} to mean that A is the set whose elements are α, β,
and γ (and nothing else).
Definition 2.9.1. Fix a structure A = (A, P ∗ ). We define satisfaction of formulas in A as
follows.
1. A |= P α if α ∈ P ∗ ,
2. A |= α = β if α = β,
3. A |= ¬G(α1 , . . . , αn ) if it is not the case that A |= G(α1 , . . . , αn ),
4. A |= (G & H)(α1 , . . . , αn ) if A |= G(α1 , . . . , αn ) and A |= H(α1 , . . . , αn ),
5. A |= (G ∨ H)(α1 , . . . , αn ) if either A |= G(α1 , . . . , αn ) or A |= H(α1 , . . . , αn ),
6. A |= (G → H)(α1 , . . . , αn ) if A |= G(α1 , . . . , αn ) implies A |= H(α1 , . . . , αn ),
7. A |= (∀x) G(α1 , . . . , αn , x) if for all β in A, A |= G(α1 , . . . , αn , β),
8. A |= (∃x) G(α1 , . . . , αn , x) if for some β in A, A |= G(α1 , . . . , αn , β).
Definition 2.9.2. Suppose F is a sentence (a formula with no free variables). Then F is
true in A if it is satisfied in A. We write A |= F .
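The eight clauses of Definition 2.9.1 can be read as a recursive evaluation procedure for finite structures. As an illustration of ours (Python is not part of the text, and all names here are invented), here is a minimal sketch for the language with one 1-place predicate symbol P, with formulas written as nested tuples:

```python
# Hypothetical illustration: evaluating satisfaction in a finite structure
# (A, P_star) for the language with one 1-place predicate symbol P.
# Formulas are nested tuples, e.g. ("forall", "x", ("P", "x")).

def satisfies(A, P_star, formula, env=None):
    """True if (A, P_star) satisfies `formula` under the assignment `env`."""
    env = env or {}
    op = formula[0]
    if op == "P":                                   # clause 1: P x
        return env[formula[1]] in P_star
    if op == "=":                                   # clause 2: x = y
        return env[formula[1]] == env[formula[2]]
    if op == "not":                                 # clause 3
        return not satisfies(A, P_star, formula[1], env)
    if op == "and":                                 # clause 4
        return (satisfies(A, P_star, formula[1], env)
                and satisfies(A, P_star, formula[2], env))
    if op == "or":                                  # clause 5
        return (satisfies(A, P_star, formula[1], env)
                or satisfies(A, P_star, formula[2], env))
    if op == "implies":                             # clause 6
        return (not satisfies(A, P_star, formula[1], env)
                or satisfies(A, P_star, formula[2], env))
    if op == "forall":                              # clause 7: every element of A
        return all(satisfies(A, P_star, formula[2], {**env, formula[1]: b})
                   for b in A)
    if op == "exists":                              # clause 8: some element of A
        return any(satisfies(A, P_star, formula[2], {**env, formula[1]: b})
                   for b in A)
    raise ValueError(f"unknown connective {op!r}")

# In the structure with universe {1, 2, 3} and P* = {1}:
print(satisfies({1, 2, 3}, {1}, ("exists", "x", ("P", "x"))))  # True
print(satisfies({1, 2, 3}, {1}, ("forall", "x", ("P", "x"))))  # False
```

Clause 7 becomes `all(...)` over the universe and clause 8 becomes `any(...)`, exactly mirroring the definition.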
We have been considering the language with just one 1-place predicate symbol
P . For other languages, we would modify the definitions in a straightforward
way. The changes affect only the first two clauses—for the atomic formulas.
Suppose, for example, that our language has a 2-place predicate symbol L, and
a constant symbol s. Let A = (A, L∗ , s∗ ) be a structure for this language—
L∗ is a 2-place relation on A, and s∗ is an element of A. If F (x1 , . . . , xn ) is
an atomic formula of the form s = xi , and α1 , . . . , αn are elements of A, then
A |= F (α1 , . . . , αn ) if s∗ = αi . If F (x1 , . . . , xn ) is an atomic formula of the form
Lxi s, then A |= F (α1 , . . . , αn ) if the pair (αi , s∗ ) is in L∗ .
2.9.3 Examples of satisfaction and truth
Consider the structure A = (A, P ∗ , Q∗ ), where A = {α, β}, P ∗ = {α}, Q∗ = {β}.
Determine whether the following sentences are true in A.
1. Is (∀x) (P x ∨ Qx) true in A?
We need to check whether each element of A satisfies (P x ∨ Qx). For α, we
have A |= P α, so A |= (P α ∨ Qα). For β, we have A |= Qβ, so A |= (P β ∨ Qβ).
We have checked all of the elements, so (∀x) (P x ∨ Qx) is true in A.
2. Is (∀x) P x ∨ (∀x) Qx true in A?
We need to see if at least one of the disjuncts is true. We start with (∀x) P x.
Now, A |= P α, but A ⊭ P β. Therefore, A ⊭ (∀x) P x. Similarly, A ⊭ (∀x) Qx.
Therefore, neither disjunct is true, and the sentence (∀x) P x ∨ (∀x) Qx is false
in A.
3. Is (∃x) (P x & Qx) true in A?
We need to see if some element satisfies (P x & Qx). Now, α fails, since A ⊭ Qα,
and β fails, since A ⊭ P β. Therefore, the sentence is false in A.
4. Is ((∃x) P x & (∃x) Qx) true in A?
We need to check whether both conjuncts are true. We find that α witnesses
the first, and β witnesses the second. Therefore, the sentence is true in A.
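For readers who like to check such computations mechanically, the four answers above can be reproduced with a few lines of Python (our own illustration, not part of the text; α and β are encoded as strings):

```python
# The structure of this subsection: A = {"α", "β"}, P* = {"α"}, Q* = {"β"}.
A = {"α", "β"}
P = {"α"}
Q = {"β"}

s1 = all(x in P or x in Q for x in A)                    # (∀x)(Px ∨ Qx)
s2 = all(x in P for x in A) or all(x in Q for x in A)    # (∀x)Px ∨ (∀x)Qx
s3 = any(x in P and x in Q for x in A)                   # (∃x)(Px & Qx)
s4 = any(x in P for x in A) and any(x in Q for x in A)   # (∃x)Px & (∃x)Qx

print(s1, s2, s3, s4)   # True False False True
```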
Exercises 2.9
A. Prove the following.
1. (∀x) (F x → Gx) ⊢ ((∀x)¬Gx → (∀x)¬F x)
2. (∀x) (F x → Gx) ⊢ ((∃x) ¬Gx → (∃x) ¬F x)
3. (∃x) ¬F x ⊢ ¬(∀x) F x
(Hint: assume (∀x) F x, and try for a contradiction.)
4. ¬(∀x) F x ⊢ (∃x) ¬F x
(Hint: start by assuming ¬(∃x)¬F x, and use that to show (∀x) F x. To
do that, you’ll need to get a contradiction.)
5. (∃x) F x, (∃x) Gx, (∀y)(∀z) ((F y & Gz) → Lyz) ⊢ (∃x)(∃y) Lxy
(Remember, you’ll need to use EE twice to finish this problem.)
B. Let A = (A, P ∗ , Q∗ ), where A = {α, β, γ}, P ∗ = {α, β}, and Q∗ = {β, γ}.
For each of the following sentences, determine whether it is true in A.
1. (∃x) (P x & Qx)
2. (∃x) (P x & ¬Qx)
3. (∀x) (P x → Qx)
4. (∀x) (P x ∨ Qx)
5. ((∀x) P x ∨ (∀x) Qx)
6. ((∀x) P x → (∀x) Qx)
C. For each of the following sentences, give a structure A = (A, P ∗ , Q∗ ) in which
the sentence is true. Take the universe of the structures to be A = {α, β, γ}.
1. (∀x) (P x → Qx)
2. (∀x) (Qx → P x)
3. ((∀x) P x → (∀x) Qx)
4. (∃x) (¬P x & Qx)
5. (∃x) (¬P x & ¬Qx)
6. ((∃x) (¬P x & Qx) & (∃x) (¬P x & ¬Qx))
7. ((∀x)¬P x ∨ (∀x)¬Qx)
8. ¬(∃x) Qx
2.10 Toward the analysis of arguments in predicate logic
Last time, we defined satisfaction and truth. We want to analyze arguments
in predicate logic. We discuss some facts about negations that will be useful
in a technical way. We say a little about Soundness and Completeness, which
provide the foundation of our methods.
2.10.1 Logically valid sentences
In propositional logic, we described a formula that is always true as tautological.
In predicate logic, we consider the logically valid sentences.
Definition 2.10.1. For a sentence F , we say that F is logically valid, and we write |= F , if
F is true in all structures for the given language.
Example 2.10.1. The following sentences are logically valid. This follows
easily from the definition of satisfaction.
1. (∀x) x = x,
2. (∀x) (P x ∨ ¬P x),
3. (∀x) (∀y) (x = y → (P x → P y)).
Example 2.10.2. The sentence (∃x) (P x & Qx) is not logically valid. To see
this, we give an example of a structure in which the sentence is false. Let
A = {α, β}, P ∗ = {α} and let Q∗ = {β}.
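One way to see that a sentence is not logically valid is to search small structures mechanically. The sketch below (our own illustration, not part of the text) enumerates every structure with universe {0, 1, 2}: the sentence (∀x) (P x ∨ ¬P x) of Example 2.10.1 holds in all of them, while (∃x) (P x & Qx) fails in some, including one with P ∗ = {0} and Q∗ = {1}, analogous to the structure just described:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as sets."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A = {0, 1, 2}

# (∀x)(Px ∨ ¬Px) is true in every structure with this universe
# (as it is in every structure whatsoever).
assert all(all(x in P or x not in P for x in A) for P in subsets(A))

# (∃x)(Px & Qx) is false in some structure, e.g. P* = {0}, Q* = {1},
# so it is not logically valid.
falsified = [(P, Q) for P in subsets(A) for Q in subsets(A)
             if not any(x in P and x in Q for x in A)]
print(({0}, {1}) in falsified)   # True
```

Checking one fixed universe is only evidence, of course; logical validity quantifies over all structures of all sizes.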
2.10.2 Negations
Definition: Formulas F (x1 , . . . , xn ) and G(x1 , . . . , xn ) (possibly with free variables) are logically equivalent if they are satisfied by the same tuples in all
structures; i.e., for all structures A and all α1 , . . . , αn in A, A |= F (α1 , . . . , αn )
if and only if A |= G(α1 , . . . , αn ). We write F (x1 , . . . , xn ) =||= G(x1 , . . . , xn )
for this. If F and G are sentences, then F and G are logically equivalent if and
only if each logically implies the other.
The facts below hold for formulas that may have free variables—not just for
sentences.
Useful facts about negation:
1. ¬(∃x) F (x) =||= (∀x) ¬F (x)
2. ¬(∀x) F (x) =||= (∃x) ¬F (x)
3. ¬(A ∨ B) =||= (¬A & ¬B)
4. ¬(A & B) =||= (¬A ∨ ¬B)
5. ¬(A → B) =||= (A & ¬B)
Let us see how to use these facts. For each of the following formulas, find a
logically equivalent one with the negations brought inside, next to the atomic
parts.
Example 2.10.3. ¬(∃x) (P x → Qx)
We get (∀x) ¬(P x → Qx) first, and then (∀x) (P x & ¬Qx).
Example 2.10.4. ¬(∀x) (P x ∨ Qx)
We get (∃x) ¬(P x ∨ Qx) first, and then (∃x) (¬P x & ¬Qx).
Example 2.10.5. ¬(∀x) (∃y) (∀z) Bxyz
We get (∃x) (∀y) (∃z) ¬Bxyz (there are intermediate steps, but once you get the
idea, you can probably do these in your head).
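These equivalences can be spot-checked by brute force on small universes. The following sketch (ours, not the text’s) verifies fact 1, instantiated with F (x) = (P x → Qx) as in Example 2.10.3, in every structure whose universe has at most three elements; this is evidence rather than a proof, since logical equivalence quantifies over all structures:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Check ¬(∃x)(Px → Qx) =||= (∀x)(Px & ¬Qx) on every structure with |A| ≤ 3.
for n in range(1, 4):
    A = set(range(n))
    for P in subsets(A):
        for Q in subsets(A):
            lhs = not any(x not in P or x in Q for x in A)   # ¬(∃x)(Px → Qx)
            rhs = all(x in P and x not in Q for x in A)      # (∀x)(Px & ¬Qx)
            assert lhs == rhs
print("the two sides agree on all structures with universe of size at most 3")
```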
2.10.3 Soundness and Completeness in predicate logic
We write A1 , . . . , An |= C to indicate that whenever the premises A1 , . . . , An
are true, the conclusion C is also true—the notation says that the argument
with these premises and this conclusion is valid. We have most of the rules
of proof for predicate logic—all except the rules for equality. We have defined
satisfaction and truth in predicate logic. Our proof system was designed to have
the properties of Soundness and Completeness, with this definition of truth.
Let F1 , . . . , Fn , G be sentences of predicate logic.
Soundness: If F1 , . . . , Fn ⊢ G, then F1 , . . . , Fn |= G; i.e., if there is a proof of G
from the assumptions F1 , . . . , Fn , then for every structure A in which F1 , . . . , Fn
are all true, G is also true.
Completeness: If F1 , . . . , Fn |= G, then F1 , . . . , Fn ⊢ G; i.e., if G is true in
every structure in which F1 , . . . , Fn are all true, then there is a proof of G from
F1 , . . . , Fn .
In propositional logic, we gave some explanation of Soundness and Completeness. For Soundness, we argued that for each line in a proof, if the assumptions
behind the line are true, then the statement on the line is also true. In predicate logic, we could do the same. We could look at a given proof and check
that for any structure (with interpretations for the appropriate symbols), if the
assumptions behind a given line are true, then the statement on the line is also
true. For Completeness, in propositional logic, we reduced the statement to
a special case, showing that for a single formula F , if F is a tautology, then
there is a proof of F from no assumptions. We then indicated how to get the
proof, at least in the case where F is built up just from P . In predicate
logic, we could do something similar—reduce to a special case where there are no
assumptions, and then show that if F is logically valid, then it is provable from
no assumptions, or, if there is no proof of ¬F , then we can build a structure in
which F is true. This is all we will do to explain Soundness and Completeness
in predicate logic. We will use these properties of the proof system in analyzing
arguments.
2.10.4 Two approaches
In propositional logic, we had an effective procedure for deciding whether an
argument is valid. The procedure was based on truth tables, or reasoning about
truth tables. In predicate logic, we do not have anything like truth tables.
Still, we can apply our logical tools, together with our human ingenuity, to
successfully analyze many arguments. To determine whether an argument is
valid, we use two approaches simultaneously.
Approach I. If we think that F1 , . . . , Fn |= G, then we look for a proof showing
F1 , . . . , Fn ⊢ G.
Approach II. If we think that F1 , . . . , Fn ⊭ G, then we look for a counterexample. This is a structure A in which the premises F1 , . . . , Fn are all true and
the conclusion G is false.
For any argument, exactly one of the approaches should succeed, in principle.
We justify this in terms of Soundness and Completeness. First, let us see why
at least one approach succeeds. If F1 , . . . , Fn |= G, then by Completeness, there
is a proof. If F1 , . . . , Fn ⊭ G, then there is a counterexample, just by definition.
Could we have both a proof and a counterexample? If F1 , . . . , Fn ⊢ G, then by
Soundness, F1 , . . . , Fn |= G, and by definition, there is no counterexample.
When we try to analyze a particular argument, we know that one of the two
approaches will work, but we don’t know which one. We have to be prepared to
switch from one to the other. We start by looking for a proof (or a counterexample), and if we don’t find it, then we switch and look for a counterexample
(or a proof). If we still have trouble, we switch again. We need to be creative,
and we need to learn from our failures.
Exercises 2.10
A. For each of the sentences 1–5, find a logically equivalent sentence among
(a)–(e), with the negations brought inside.
1. ¬(∀x) (∃y) P xy
2. ¬(∃x) (∀y) P xy
3. ¬(∀x) (M x → Rx)
4. ¬(∃x) (M x ∨ Rx)
5. ¬(∃x) (M x & Rx)
(a) (∀x) (¬M x ∨ ¬Rx)
(b) (∃x) (M x & ¬Rx)
(c) (∀x) (∃y) ¬P xy
(d) (∀x) (¬M x & ¬Rx)
(e) (∃x) (∀y) ¬P xy
B. For each of the following, show that the argument is not valid by finding a
counterexample, that is, a structure A in which the premises are true and the
conclusion is false.
1. (∀x)(P x ∨ Qx) |= (∀x)P x
2. (∀x)((P x & Qx) → Rx), (∃x)P x, (∃x)Qx |= (∃x)Rx
3. (∀x)P x → (∀x)Qx |= (∀x)(P x → Qx)
4. (∀x)(P x → (Qx ∨ Rx)), (∀x)P x |= (∀x)Qx ∨ (∀x)Rx
5. (∀x)((P x & Qx) → Rx) |= (∀x)(P x → Rx) ∨ (∀x)(Qx → Rx)
2.11 Applying our techniques to analyze arguments
We will analyze some arguments in predicate logic. To show that an argument
is valid, we give a proof. To show that an argument is invalid, we give a
“counterexample”, where this is a structure in which the premises are all true
and the conclusion is false.
2.11.1 Proof or Counterexample
Example 2.11.1. Determine whether (∀x) (P x ∨ Qx) |= ((∀x) P x ∨ (∀x) Qx).
If we think this looks valid, we might look for a proof. We would have
difficulty. So, we might look for a counterexample, a structure A = (A, P ∗ , Q∗ )
such that
(a) A |= (∀x) (P x ∨ Qx) and
(b) A ⊭ ((∀x) P x ∨ (∀x) Qx); i.e., A |= ¬((∀x) P x ∨ (∀x) Qx).
For the sentence ¬((∀x) P x ∨ (∀x) Qx), when we bring the negation inside, we
have ¬((∀x) P x ∨ (∀x) Qx) =||= ¬(∀x) P x & ¬(∀x) Qx =||= (∃x) ¬P x & (∃x) ¬Qx.
We consider what (a) and (b) say about A, P ∗ , and Q∗ . To make (a) hold, we
must make sure that everything we put into A goes into either P ∗ or Q∗ . To
make (b) hold, we must put into A something that we keep out of P ∗ and we
must put into A something that we keep out of Q∗ .
Let A = {α, β}, P ∗ = {α}, and Q∗ = {β}. We can see that this is the desired
counterexample. Therefore, the argument is invalid; we do not have that
(∀x) (P x ∨ Qx) |= (∀x) P x ∨ (∀x) Qx.
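On finite universes, Approach II can even be mechanized: enumerate small structures until one falsifies the argument. A hypothetical sketch for the argument just analyzed (the function name and encoding are our own):

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def find_counterexample(max_size=3):
    """Search structures (A, P*, Q*) with |A| <= max_size in which the premise
    (∀x)(Px ∨ Qx) is true but the conclusion (∀x)Px ∨ (∀x)Qx is false."""
    for n in range(1, max_size + 1):
        A = set(range(n))
        for P in subsets(A):
            for Q in subsets(A):
                premise = all(x in P or x in Q for x in A)
                conclusion = all(x in P for x in A) or all(x in Q for x in A)
                if premise and not conclusion:
                    return A, P, Q
    return None

A, P, Q = find_counterexample()
print(len(A), P, Q)
```

No one-element structure works, so the search first succeeds on a two-element universe, mirroring the counterexample in the text.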
Example 2.11.2. Determine whether (∀x) (P x → Qx) |= ((∀x) P x → (∀x) Qx).
We can give a proof.
1         1. (∀x) (P x → Qx)             A
2∗        2. (∀x) P x                    A∗
2∗        3. P c                         UE 2
1         4. (P c → Qc)                  UE 1
1, 2∗     5. Qc                          MPP 3, 4
1, 2∗     6. (∀x) Qx                     UI 5
1         7. ((∀x) P x → (∀x) Qx)        CP 2, 6
Therefore, the argument is valid.
Example 2.11.3. Determine whether (∀x) (¬F x → Gx), (∃x) F x |= (∃x)¬Gx.
We look for a counterexample A = (A, F ∗ , G∗ ).
We want the following:
(a) A |= (∀x) (¬F x → Gx)
(b) A |= (∃x) F x
(c) A ⊭ (∃x) ¬Gx; i.e., A |= ¬(∃x) ¬Gx,
keeping in mind that ¬(∃x) ¬Gx is logically equivalent to (∀x) Gx.
For (a), each element of A that is not in F ∗ must be in G∗ .
For (b), some element of A must be in F ∗ .
For (c), every element of A must be in G∗ .
We let A = {α}, F ∗ = {α}, and G∗ = {α}.
This is a counterexample. Therefore, the argument is not valid.
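The check that this one-element structure is indeed a counterexample can be written out mechanically (our own illustration; note that ¬F x → Gx is satisfied exactly when F x ∨ Gx is):

```python
# A = {0}, F* = {0}, G* = {0}  (0 plays the role of α).
A, F, G = {0}, {0}, {0}

premise1 = all(x in F or x in G for x in A)   # (∀x)(¬Fx → Gx)
premise2 = any(x in F for x in A)             # (∃x)Fx
conclusion = any(x not in G for x in A)       # (∃x)¬Gx

print(premise1, premise2, conclusion)   # True True False
```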
Example 2.11.4. Determine whether (∀x) P x |= (∀x) (P x ∨ Qx)
Here we can give a proof.
1    1. (∀x) P x             A
1    2. P a                  UE 1
1    3. (P a ∨ Qa)           ∨I 2
1    4. (∀x) (P x ∨ Qx)      UI 3
Therefore, the argument is valid.
Example 2.11.5. Determine whether
(∀x) (P x ∨ Qx), (∃x) ¬P x, (∃x) ¬Qx |= (∀x) (P x → ¬Qx).
Here we can give a counterexample. We want a structure A = (A, P ∗ , Q∗ ) with
the following properties.
(a) A |= (∀x) (P x ∨ Qx), A |= (∃x) ¬P x, and A |= (∃x) ¬Qx
(b) A ⊭ (∀x) (P x → ¬Qx); i.e., A |= ¬(∀x) (P x → ¬Qx). We have
¬(∀x) (P x → ¬Qx) =||= (∃x) (P x & Qx).
For (a), we make sure that everything in A is in P ∗ or Q∗ , something in A is not
in P ∗ , and something in A is not in Q∗ . For (b), we make sure that something
in A is in both P ∗ and Q∗ .
We let A = {α, β, γ}, P ∗ = {α, β}, Q∗ = {α, γ}. Then A = (A, P ∗ , Q∗ ) is the
desired counterexample. The argument is not valid.
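Again the verification is mechanical (our own sketch, with α, β, γ encoded as 0, 1, 2):

```python
# A = {0, 1, 2}, P* = {0, 1}, Q* = {0, 2}.
A, P, Q = {0, 1, 2}, {0, 1}, {0, 2}

p1 = all(x in P or x in Q for x in A)                    # (∀x)(Px ∨ Qx)
p2 = any(x not in P for x in A)                          # (∃x)¬Px
p3 = any(x not in Q for x in A)                          # (∃x)¬Qx
conclusion = all(x not in P or x not in Q for x in A)    # (∀x)(Px → ¬Qx)

print(p1, p2, p3, conclusion)   # True True True False
```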
2.11.2 Arguments in English
Let us consider some arguments given in English.
Example 2.11.6. No politicians are fair. Some judges are fair. Therefore,
some judges are not politicians.
Translation
Premises: (∀x) (P x → ¬F x), (∃x) (Jx & F x)
Conclusion: (∃x) (Jx & ¬P x)
Note that we may translate “No P’s are F’s” as (∀x) (P x → ¬F x), or ¬(∃x) (P x & F x).
Above, we chose the first option.
For this argument, we can give a proof of the conclusion from the premises.
1         1. (∀x) (P x → ¬F x)                       A
2         2. (∃x) (Jx & F x)                         A
3∗        3. (Jc & F c)                              A∗ (use EE)
1         4. (P c → ¬F c)                            UE 1
3∗        5. Jc                                      &E 3
3∗        6. F c                                     &E 3
3∗        7. ¬¬F c                                   DN 6
1, 3∗     8. ¬P c                                    MTT 4, 7
1, 3∗     9. (Jc & ¬P c)                             &I 5, 8
1, 3∗     10. (∃x) (Jx & ¬P x)                       EI 9
1         11. ((Jc & F c) → (∃x) (Jx & ¬P x))        CP 3, 10
1, 2      12. (∃x) (Jx & ¬P x)                       EE 2, 11
It follows that the argument is valid.
Example 2.11.7. Some judges are fair; all judges are politicians; therefore
some politicians are fair.
Translation: (∃x) (Jx & F x), (∀x) (Jx → P x) ⊢ (∃x) (P x & F x)
Again, we can give a proof.
1         1. (∃x) (Jx & F x)                         A
2         2. (∀x) (Jx → P x)                         A
3∗        3. (Ja & F a)                              A∗ (use EE)
2         4. (Ja → P a)                              UE 2
3∗        5. Ja                                      &E 3
2, 3∗     6. P a                                     MPP 5, 4
3∗        7. F a                                     &E 3
2, 3∗     8. (P a & F a)                             &I 6, 7
2, 3∗     9. (∃x) (P x & F x)                        EI 8
2         10. ((Ja & F a) → (∃x) (P x & F x))        CP 3, 9
1, 2      11. (∃x) (P x & F x)                       EE 1, 10
The argument is valid.
Example 2.11.8. No basketball players play chess. Some mathematicians play
chess and some do not. Therefore, some mathematicians play neither chess nor
basketball.
Translation: (∀x)(Cx → ¬Bx), ((∃x)(M x & Cx) & (∃y)(M y & ¬Cy)) ⊢
(∃x)((M x & ¬Cx) & ¬Bx)
Not valid. A counterexample: A = {α, β}, C ∗ = {α}, B ∗ = {β}, M ∗ = {α, β}.
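A candidate counterexample on a two-element universe can be machine-checked as follows (our own sketch; we take C ∗ = {α}, B ∗ = {β}, M ∗ = {α, β}, encoding α and β as 0 and 1):

```python
# A = {0, 1}, C* = {0}, B* = {1}, M* = {0, 1}.
A, C, B, M = {0, 1}, {0}, {1}, {0, 1}

premise1 = all(x not in C or x not in B for x in A)        # (∀x)(Cx → ¬Bx)
premise2 = (any(x in M and x in C for x in A)
            and any(y in M and y not in C for y in A))     # (∃x)(Mx & Cx) & (∃y)(My & ¬Cy)
conclusion = any(x in M and x not in C and x not in B
                 for x in A)                               # (∃x)((Mx & ¬Cx) & ¬Bx)

print(premise1, premise2, conclusion)   # True True False
```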
Exercises 2.11
A. For each of the following, determine whether the argument is valid. Justify
your answer by giving either a proof or a counterexample.
1. (∀x) (F x & Gx) |= (∀x) F x & (∀x) Gx
2. (∀x) (F x → ¬Gx), (∃x) ¬F x |= (∃x) Gx
3. (∃x) (P x & Qx), (∃x) (Qx & Rx) |= (∃x) (P x & Rx)
4. (∀x) (P x → Qx), (∀x) (¬Rx → ¬Qx) |= (∀x) (P x → Rx)
B. Translate the following arguments into predicate logic, and determine whether
the argument is valid. Justify your answer by giving either a proof or a counterexample.
1. All basketball players play chess. Some mathematicians play chess and
some do not. Therefore, some mathematicians play neither chess nor
basketball.
2. All lions are fierce creatures. Some lions do not drink coffee. Therefore,
some fierce creatures do not drink coffee.
3. Everyone will vote for Henry, or Alice will be disappointed. Therefore,
Joe will vote for Henry, or Alice will be disappointed.
4. No birds, except peacocks, are proud of their tails. Some birds, that are
proud of their tails, cannot sing. Therefore, some peacocks cannot sing.
2.12 Rules of proof for equality
So far, in our proof system for predicate logic, we have rules for propositional
connectives and quantifiers. These rules are complete for arguments that do not
mention equality. Now, we add rules for equality.
Identity Introduction (= I): For any constant c, we get c = c from no
assumptions. Succinctly, (from no assumptions) ▷ c = c.
For decorations, on the right, there are no steps referred to, and on the left,
there are no assumptions.
This rule says, for example, “Barack Obama is Barack Obama”.
Identity Elimination (= E): For any constants c and d, if F is a sentence
and F ′ is the result of replacing one or more occurrences of c in F by d, then
from F and c = d, we get F ′ . In shorthand form, the rule states F, c = d ▷ F ′ .
For decorations, on the right, we give first the line where we have F and
then the one where we have c = d. On the left, we give the assumptions behind
these two lines.
For example, from “Mark Twain is the author of Huckleberry Finn”, and “Mark
Twain is Samuel Clemens”, we may conclude that “Samuel Clemens is the
author of Huckleberry Finn”.
Caution: As always, we need a perfect match. It is not good enough to have
F and d = c, when we want to replace occurrences of c in F by d.
Example 2.12.1. a = b ⊢ b = a
1    1. a = b     A
     2. a = a     =I
1    3. b = a     =E 2, 1 (we replace the first occurrence of a in 2 by b)
In line 3, where we apply = E, F is a = a. The fact that we have a = b gives
us the right to substitute b for a wherever we want in F . Our F ′ is b = a.
Example 2.12.2. a = b, b = c ⊢ a = c
1       1. a = b     A
2       2. b = c     A
1, 2    3. a = c     =E 1, 2
On line 3, where we use = E, our F is a = b, which mentions b. The fact that
we have b = c gives us the right to substitute c for b in F . Our F ′ is a = c.
Example 2.12.3. ⊢ (∀x) (∀y) ((F x & x = y) → F y)
1    1. (F a & a = b)                        A∗ (prove F b and use CP)
1    2. F a                                  &E 1
1    3. a = b                                &E 1
1    4. F b                                  =E 2, 3
     5. ((F a & a = b) → F b)                CP 1, 4
     6. (∀y) ((F a & a = y) → F y)           UI 5
     7. (∀x) (∀y) ((F x & x = y) → F y)      UI 6
Example 2.12.4. Prove ⊢ (∀x) x = x.
     1. a = a          =I
     2. (∀x) x = x     UI 1
Example 2.12.5. F a, b = a ⊢ F b
1       1. F a       A
2       2. b = a     A
        3. b = b     =I
2       4. a = b     =E 3, 2
1, 2    5. F b       =E 1, 4
2.12.1 Arguments in English with equality
Example 2.12.6. Translate and then decide whether the argument is valid.
No riverboat pilots are writers. Samuel Clemens was a riverboat
pilot. Mark Twain was a writer. Therefore, Samuel Clemens was
not Mark Twain.
We translate as follows, using 1-place predicate symbols P and W , and
constants c and t.
Premises: ¬(∃x) (P x & W x), P c, W t
Conclusion: ¬c = t
This argument is valid; we can give a proof.
1              1. ¬(∃x) (P x & W x)                                    A
2              2. P c                                                  A
3              3. W t                                                  A
4∗             4. c = t                                                A∗
2, 4∗          5. P t                                                  =E 2, 4
2, 3, 4∗       6. (P t & W t)                                          &I 5, 3
2, 3, 4∗       7. (∃x) (P x & W x)                                     EI 6
1, 2, 3, 4∗    8. ((∃x) (P x & W x) & ¬(∃x) (P x & W x))               &I 7, 1
1, 2, 3        9. (c = t → ((∃x) (P x & W x) & ¬(∃x) (P x & W x)))     CP 4, 8
1, 2, 3        10. ¬c = t                                              RAA 9
Example 2.12.7. Translate and then decide whether the argument is valid.
Matt and Jane are the only ones studying at Reckers; Matt and Jane
are both studying Italian; therefore, everyone studying at Reckers is
working on Italian.
We translate as follows, using the 1-place predicate symbols R and I and
constants m and j.
Premises: (∀x) (Rx ↔ (x = m ∨ x = j)), (Im & Ij)
Conclusion: (∀x) (Rx → Ix).
We can give a proof.
(∀x) (Rx ↔ (x = m ∨ x = j)), (Im & Ij) ⊢ (∀x) (Rx → Ix).
1            1. (∀x) (Rx ↔ (x = m ∨ x = j))      A
2            2. (Im & Ij)                        A
3∗           3. Rc                               A∗
1            4. (Rc ↔ (c = m ∨ c = j))           UE 1
1            5. (Rc → (c = m ∨ c = j))           &E 4
1, 3∗        6. (c = m ∨ c = j)                  MPP 3, 5
7∗           7. c = m                            A∗
2            8. Im                               &E 2
             9. c = c                            =I
7∗           10. m = c                           =E 9, 7
2, 7∗        11. Ic                              =E 8, 10
2            12. (c = m → Ic)                    CP 7, 11
2            13. Ij                              &E 2
14∗          14. c = j                           A∗
14∗          15. j = c                           =E 9, 14
2, 14∗       16. Ic                              =E 13, 15
2            17. (c = j → Ic)                    CP 14, 16
1, 2, 3∗     18. Ic                              ∨E 6, 12, 17
1, 2         19. (Rc → Ic)                       CP 3, 18
1, 2         20. (∀x) (Rx → Ix)                  UI 19
Exercises 2.12
A. For each of the following, give a proof, using the rules of proof for predicate
logic with equality.
1. F a ⊢ (∀x) (x = a → F x)
2. (∀x) (x = a → F x) ⊢ F a
3. b = a, c = a ⊢ b = c
4. (∀x) (M x → Ix), M j, j = h ⊢ Ih
5. ¬(∃x) (M x & Sx), M j, j = h ⊢ ¬Sh
B. Translate the following arguments. Prove or give a counterexample for each.
1. All frogs are unpoetical. Some ducks are poetical. Therefore, some frogs
are not ducks.
2. No misers are unselfish. None but misers save egg-shells. Therefore, no
one who is unselfish saves egg-shells.
3. Riverboat pilots are not writers; i.e., all riverboat pilots are not writers.
Samuel Clemens was a riverboat pilot. Samuel Clemens was Mark Twain.
Therefore, Mark Twain was not a writer. (In your translation, use 1-place
predicate symbols P and W , and constants c and t.)
4. Samuel Clemens was a humorist but Samuel Morse was not. Samuel
Clemens is identical with Mark Twain. Therefore Samuel Morse is not
identical with Mark Twain.
C. Translate the following sentences into predicate logic with equality.
1. There are at most two things with property F .
2. There are exactly three things with property F .
2.13 More on satisfaction and analysis of arguments
We will examine a more complicated structure and do some more comprehensive
analysis of arguments. We will start with an argument in English. We will then
translate and use our formal methods to analyze the argument.
2.13.1 A complicated structure
Consider the following structure A in the language with 1-placed predicates F
and G, 2-placed predicates R and S, and a constant e. Let the universe of A
be A = {α, β, γ, δ}, and let
F ∗ = {α, β, γ}
G∗ = {β, γ, δ}
R∗ = {(α, α), (β, β), (γ, γ), (δ, δ)}
S ∗ = {(α, β), (β, γ), (γ, δ)}
e∗ = β.
We determine whether the following formulas are satisfied in this model:
1. (∃x)Sxe
Answer. A |= (∃x)Sxe because e∗ = β and (α, β) is an element of S ∗ .
2. (∀x)(Rxx → (∃z)Sxz)
Answer. A ⊭ (∀x)(Rxx → (∃z)Sxz) because A ⊭ (Rxx → (∃z)Sxz)
when x is substituted with δ. Specifically, A |= Rδδ, but there is no
element z in A so that A |= Sδz.
3. (∀x)((∃y)Sxy → (∃z)Rxz)
Answer. A |= (∀x)((∃y)Sxy → (∃z)Rxz). Note that if x equals α, β, or
γ, then A |= (∃y)Sxy, but A ⊭ (∃y)Sδy. So, we only have to check that
A |= (∃z)Rxz when x equals α, β, or γ for the sentence to be satisfied. In
this case, letting z equal x works to show A |= (∃z)Rxz.
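Since the universe is finite, each of these answers can be checked mechanically. A sketch of ours (not part of the text), encoding α, β, γ, δ as 0, 1, 2, 3:

```python
# The structure of this subsection, with α, β, γ, δ written as 0, 1, 2, 3.
A = {0, 1, 2, 3}
F = {0, 1, 2}
G = {1, 2, 3}
R = {(0, 0), (1, 1), (2, 2), (3, 3)}
S = {(0, 1), (1, 2), (2, 3)}
e = 1                                  # e* = β

f1 = any((x, e) in S for x in A)                                   # (∃x)Sxe
f2 = all((x, x) not in R or any((x, z) in S for z in A)
         for x in A)                                               # (∀x)(Rxx → (∃z)Sxz)
f3 = all(not any((x, y) in S for y in A) or any((x, z) in R for z in A)
         for x in A)                                               # (∀x)((∃y)Sxy → (∃z)Rxz)

print(f1, f2, f3)   # True False True
```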
2.13.2 Arguments in English
Example 2.13.1. This is very loosely based on an opinion piece by Milton
Friedman in the New York Times.
English statement of argument
No government can bring order to Afghanistan without a huge investment of money. The Russians invested a great deal, and, even so,
they were unsuccessful. If we invest much more money in Afghanistan,
then we will be unable to do anything with health care. Therefore,
if we want to do anything with health care, we will not be able to
bring order to Afghanistan.
Translation. We can translate the argument into predicate logic, using 1-place
predicate symbols I, S, and H to describe governments that invest a great deal
of money in Afghanistan, bring order to Afghanistan, and improve health care,
and constants r and a, for the Russian government and U.S. government.
Premises: (∀x) (Sx → Ix), (Ir & ¬Sr), (Ia → ¬Ha)
Conclusion: (Ha → ¬Sa)
Formal analysis. Here is a proof.
1               1. (∀x) (Sx → Ix)         A
2               2. (Ia → ¬Ha)             A
1               3. (Sa → Ia)              UE 1
4∗              4. Ha                     A∗
5∗              5. Sa                     A∗
1, 5∗           6. Ia                     MPP 5, 3
1, 2, 5∗        7. ¬Ha                    MPP 6, 2
1, 2, 4∗, 5∗    8. (Ha & ¬Ha)             &I 4, 7
1, 2, 4∗        9. (Sa → (Ha & ¬Ha))      CP 5, 8
1, 2, 4∗        10. ¬Sa                   RAA 9
1, 2            11. (Ha → ¬Sa)            CP 4, 10
Example 2.13.2. We have seen arguments whose form is the same as the one
below.
English statement of argument
Everyone had exams either Tuesday or Thursday. Therefore, either
everyone had exams Tuesday, or everyone had exams Thursday.
Translation. We can translate into predicate logic, using the 1-place predicate
symbols T and H for having exams Tuesday, and Thursday, respectively.
Premise: (∀x) (T x ∨ Hx)
Conclusion: (∀x) T x ∨ (∀x) Hx
Formal analysis. We can find a counterexample. This is a structure A = (A, T ∗ , H ∗ )
in which
(a) (∀x) (T x ∨ Hx) is true, and
(b) ((∀x) T x ∨ (∀x) Hx) is false.
For (a), we make sure that everything in A is in either T ∗ or H ∗ . For (b), we
note that ((∀x) T x ∨ (∀x) Hx) is false just in case ¬((∀x) T x ∨ (∀x) Hx) is true,
and this is equivalent to ((∃x) ¬T x & (∃x) ¬Hx). We make sure that something
in A is not in T ∗ , and something in A is not in H ∗ .
Let A = {a, b}, T ∗ = {a}, H ∗ = {b}. Then A |= (∀x) (T x ∨ Hx) and
A |= ((∃x) ¬T x & (∃x) ¬Hx).
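The counterexample can be verified mechanically (our sketch, with a and b encoded as 0 and 1):

```python
# A = {0, 1}, T* = {0}, H* = {1}.
A, T, H = {0, 1}, {0}, {1}

premise = all(x in T or x in H for x in A)                      # (∀x)(Tx ∨ Hx)
conclusion = all(x in T for x in A) or all(x in H for x in A)   # (∀x)Tx ∨ (∀x)Hx

print(premise, conclusion)   # True False
```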
Example 2.13.3. Here is another example, stated first in English.
Everyone who is highly successful must be both bright and hard-working.
All Notre Dame students are bright. Some Notre Dame students are
highly successful. Therefore, some Notre Dame students are hard-working.
Translation into predicate logic. Using 1-place predicate symbols S, B, W ,
and N for the properties of being highly successful, bright, hard-working, and
being from Notre Dame, we get the following translation.
Premises: (∀x) (Sx → (Bx & W x)), (∀x) (N x → Bx), (∃x) (N x & Sx)
Conclusion: (∃x) (N x & W x).
Formal Analysis. We can give a proof.
1        1. (∀x) (Sx → (Bx & W x))                A
2        2. (∃x) (N x & Sx)                       A
3∗       3. (N c & Sc)                            A∗
1        4. (Sc → (Bc & W c))                     UE 1
3∗       5. N c                                   &E 3
3∗       6. Sc                                    &E 3
1, 3∗    7. (Bc & W c)                            MPP 6, 4
1, 3∗    8. W c                                   &E 7
1, 3∗    9. (N c & W c)                           &I 5, 8
1, 3∗    10. (∃x) (N x & W x)                     EI 9
1        11. ((N c & Sc) → (∃x) (N x & W x))      CP 3, 10
1, 2     12. (∃x) (N x & W x)                     EE 2, 11
Note that we do not need the assumption that all Notre Dame students are
bright.
Exercises 2.13
A. Prove the following, using the rules for predicate logic with equality.
1. (∃x) (F x & (∀y) (F y → x = y)) ⊢ ((∃x) F x & (∀x) (∀y) ((F x & F y) → x = y))
2. ((∃x) F x & (∀x) (∀y) ((F x & F y) → x = y)) ⊢ (∃x) (F x & (∀y) (F y → x = y))
B. We again consider the structure A in the language with 1-placed predicates
F and G, 2-placed predicates R and S, and a constant e. The universe of A is
A = {α, β, γ, δ}, and
F ∗ = {α, β, γ}
G∗ = {β, γ, δ}
R∗ = {(α, α), (β, β), (γ, γ), (δ, δ)}
S ∗ = {(α, β), (β, γ), (γ, δ)}
e∗ = β.
Determine whether the following formulas are satisfied in A:
1. (∀x)(F x → Gx)
2. (∃x)((F x ∨ Gx) & (Rxx & ¬(∃y)Sxy))
3. (∃x)((∃y)Syx & (∃z)Sxz)
4. (∀x)(∀y)(∀z)((Rxy & Ryz) → Rxz)
5. (∀x)(¬(∃y)Ryx → ¬F x)
2.14 Advanced topics
2.14.1 More on Soundness and Completeness
As it does in propositional logic, in predicate logic Soundness says that if
A1 , . . . , An ⊢ C, then A1 , . . . , An |= C; i.e., if there is a proof of C from
A1 , . . . , An , then whenever A1 , . . . , An are all true, C is also true. Completeness
says that if A1 , . . . , An |= C, then A1 , . . . , An ⊢ C; i.e., if C is true whenever
A1 , . . . , An are true, then there is a proof of C from A1 , . . . , An . To make these
statements precise, we need to define truth. The history of the definition of
truth is interesting, with twists and turns that I don’t yet fully understand.
2.14.2 History of truth
Gödel proved the Completeness Theorem for predicate logic in his Ph.D. thesis.
He must have had a definition of truth, but in the published version of the
thesis, he didn’t give it. The following is from the biography of Gödel by John
Dawson.
In a crossed-out passage of an unsent reply to a graduate student’s
later inquiry, Gödel asserted that at the time of his Completeness paper, “a concept of objective mathematical truth . . . was viewed with
[the] greatest suspicion and [was] widely rejected as meaningless” .
Tarski was the one who finally published a definition of truth for predicate
logic. But before that, when Tarski had proved some important results related
to the notion, and Carnap asked him to talk about it at a philosophical meeting,
Carnap says, “Tarski was very skeptical. He thought that most philosophers,
even those working in modern logic, would be not only indifferent, but hostile
to the explication of the concept of truth.” And, Carnap continues, describing
what happened after the talk, “in fact, Tarski’s skeptical predictions had been
right. . . . there was vehement opposition . . . .”
I could imagine some reasons for objections. When you see the definition,
it is very straightforward, like the one for propositional logic. There is a clause
saying (F & G) is true if F and G are both true. People might object if someone
tried to take credit for a definition that we were all born knowing. Or maybe
the word “truth” sounds so grand, mere humans shouldn’t claim they could give
a definition. Of course, the definition we will give is for the limited setting of
predicate logic—that makes it not so grand.
I have asked various colleagues in the Philosophy Department at Notre
Dame—experts in philosophy of mathematics—to explain. Michael Detlefsen
and Patricia Blanchette thought the objection might result from the fact that
people working in foundations of mathematics at the time were committed to a
program of reducing everything to proofs. I definitely do not understand this.
You need truth to understand Soundness and Completeness, and until you know
that your proof system has these properties, how can you trust it to serve as
a foundation for anything else? Tim Bays offered a further explanation, which
makes more sense to me. One of the things people wanted to do in foundations
of mathematics was to describe the world of sets, and giving a precise definition
of truth in this setting seemed to lead to circularity, or to paradoxes.
Bays questioned whether Gödel actually had a clear understanding of truth
when he proved Completeness. Bays also mentioned some weird things that
Tarski did in the paper where he first wrote about truth. If any of you come
across further information about why the definition was so controversial, and
why it took so long to write down properly, I would be interested.
2.14.3 Consequences of Gödel’s Incompleteness Theorem
In propositional logic, we had an effective procedure for deciding whether
an argument is valid. The procedure was based on truth tables, or abbreviated
truth tables. In predicate logic, we do not have anything like truth tables. You
will hear more about Gödel’s famous “Incompleteness Theorem” later. One of
the consequences of this result is the following.
Theorem 2.14.1. For a predicate language with at least one predicate symbol
with at least two places, there is no effective method for determining which
sentences are logically valid.
This does not mean that we can never have a complete understanding of any
structure. Consider the structure (R, +R , ·R , <R , 0, 1), consisting of the real numbers,
with the usual addition, multiplication, and ordering, and the special elements 0
and 1. Tarski showed the following.
Theorem 2.14.2 (Tarski). There is an effective procedure for deciding which
sentences are true of the real numbers.
Tarski gave a nice set of axioms for the real numbers. These axioms are all
true, and we can effectively recognize what is an axiom. Moreover, the set of
axioms is complete in the sense that it suffices to prove any true sentence. Then
to decide whether a sentence F is true in the real numbers, we may look for a
proof of F or a proof of ¬F . We will eventually find one or the other, and then,
by Soundness, we know whether F is true.
For the structure (Z, +Z , ·Z , <Z , 0, 1), consisting of the integers, with the usual
addition, multiplication, and ordering, and the special elements 0 and 1, the
picture is quite different. A special case of Gödel’s Incompleteness Theorem
says the following.
Theorem 2.14.3 (Gödel). Let A be a proposed set of axioms for the integers,
with the following two features:
1. we can effectively recognize the sentences that are elements of A, and
2. all of the sentences in A are true in the integers.
Then we can find a sentence that is true in the integers but cannot be proved
from A.
It follows that there is no effective procedure for deciding which
sentences are true in the integers. We do have a nice set of axioms, the Peano
axioms, that suffice to prove many interesting statements about the integers, and
mathematicians keep trying to determine the truth of further statements.
2.15
Final words
Logic is my favorite subject, and I have enjoyed talking about it with you. You
are a bright group. You are willing to attempt some hard problems, and to
risk making mistakes. This takes mental courage, and it is the only way hard
problems ever get solved.
You have all learned to write proofs—not an easy thing. You have had
experience translating arguments from English into formal languages, carrying
out formal analysis (using truth tables, proofs, counterexamples), and then
saying what your analysis means.
In your papers, you gave clear accounts of challenging material that was not
covered in the lectures. I hope that you found this interesting. I know that
employers need people who can write well about varied material.
You may or may not ever use a formal proof system after our final. Even
so, I hope that some of the ideas from the course will stick with you and prove
useful—in particular, the idea of taking complicated statements and arguments
(which are collections of statements), breaking them into manageable parts, and
then trying to understand how the parts are connected.
Our communities and our business, religious, and political organizations
need leaders who can reason and analyze arguments, and who, in presenting
their own position on an issue, make it a practice to state their assumptions
clearly and give a careful account of the reasoning that led to the position. I
believe that you are going to be those leaders.