Contents
Preliminaries
1- Propositional Logic: The Language and Syntactic Proofs
Semantics for Propositional Logic
2- Propositional Logic: Semantic Proofs
3- Propositional Logic: (In)consistency, Tautologies, Contingency, and Entailment via Semantic Proofs
4- Propositional Logic: Truth Tables
5- Propositional Logic: (In)consistency, Tautologies, Contingency, and Entailment via Truth Tables
Proof Theory for Propositional Logic
Minimal Logic
6- Propositional Minimal Logic Part 1: Rules without Discharge (Reflexivity, ˄ Introduction and Elimination, ˅ Introduction, → Elimination, ¬ Elimination)
7- Propositional Minimal Logic Part 2: → Introduction
8- Propositional Minimal Logic Part 3: ¬ Introduction
9- Propositional Minimal Logic Part 4: ˅ Elimination
10- Intuitionist Logic
11- Classical Logic
Semantics for Predicate Logic
11- Translating First Order Logic into Natural Language
12- A Fregean functional semantics, part 1: functions, names, and predicates
13- A Fregean functional semantics, part 2: propositional logic
14- A Fregean functional semantics, part 3: quantifiers
15- Entailment
Proof Theory for Predicate Logic
16- The Universal Quantifier
17- The Existential Quantifier
Identity
Modal Logic
Fitch style proper natural deduction formulation of propositional modal logic K
Fitch style proper natural deduction formulation of propositional modal logics T, S4, and S5
Predicate Modal Logic
Actuality
Higher-Order Modal Logic
Montague’s Intensional Logic
Limitations
Appendices
Rules for First-Order Classical Predicate Logic
Test Helps
Test 1 Help
Test 2 Help
Test 3 Help
Preliminaries
1- Propositional Logic: The Language and Syntactic Proofs
In order to come up with a precise notion of proof, we shall have to construct a simple language,
one which wears its logical properties on its sleeves, so to speak. For such a language to be
successful it is absolutely essential that certain forms of ambiguity are prohibited. For example,
consider the following English language sentence.
I’m going to the store, and she’s buying smokes, or I’m watching T.V.
This sentence is ambiguous. If we use parentheses to disambiguate, on the one hand it could
mean “(I’m going to the store) and (she’s buying smokes, or I’m watching T.V).” From this
reading of the sentence we would know that I’m going to the store. If we parsed the sentence as
“(I’m going to the store and she’s buying smokes) or (I’m watching T.V),” we wouldn’t know
that I’m going to the store.
By formalizing our language, we prohibit all such ambiguities.
Vocabulary:
All capital English letters (A, B, C, . . . ,Z) are propositional
variables, as well as all numerically subscripted Z’s (Z1,Z2, Z3. . .).
A set of logical connectives: ¬, ˄, ˅, →.
Formation rules:
(1) All propositional variables are wffs (well formed formulas) of L.
(2) If φ is a wff, then ¬φ is a wff.
(3) If φ and ψ are wffs, then (φ ˄ ψ) is a wff.
(4) If φ and ψ are wffs, then (φ ˅ ψ) is a wff.
(5) If φ and ψ are wffs, then (φ → ψ) is a wff.
(6) All and only the wffs of L are generated by the above 5 rules.
Intuitively, the logical connectives mean (respectively) ‘not’, ‘and’, ‘or’, and ‘if... then.’ Thus, where ‘P’ = ‘I’m going to the store,’ ‘Q’ = ‘she’s buying smokes,’ and ‘R’ = ‘I’m watching T.V.,’ we can formalize both interpretations of the above sentence in this language. The first interpretation is equal to ‘(P ˄ (Q ˅ R))’, while the second interpretation is equal to ‘((P ˄ Q) ˅ R)’.
In the next two lectures we will start to worry about the interpretation of the language L. Here we
are just concerned with the syntax of L, given by the above six formation rules. We can think of
the rules as giving us a procedure by which we construct, from the bottom-up, sentences of our
language. This procedure can be rigorously formalized as a natural deduction proof system.
Each of the following rules corresponds to one of the clauses of our definition of wff-hood.
Natural deduction formulation of the first 6 rules-

Where φ is a propositional variable,
________
n) φ    rule 1

n) φ
   ¬φ    n rule 2

m) φ
n) ψ
   (φ ˄ ψ)    m,n rule 3

m) φ
n) ψ
   (φ ˅ ψ)    m,n rule 4

m) φ
n) ψ
   (φ → ψ)    m,n rule 5
A proof of well-formedness is then defined as a numbered sequence of sentences such that each sentence follows from previously numbered sentences by the above rules.
For example, if I want to show that (P ˅ Q) is a sentence I have to construct the following proof.
1. P    by rule 1
2. Q    by rule 1
3. (P ˅ Q)    1,2 by rule 4

The m’s and n’s don’t need to be distinct. To construct ‘(P ˅ P)’, we need only provide the following proof of well-formedness.

1. P    by rule 1
2. (P ˅ P)    1,1 by rule 4

Consider the more complicated wff, ¬((P ˄ Q) → (Q ˅ R)). The proof looks like this.

1. P    by rule 1
2. Q    by rule 1
3. R    by rule 1
4. (P ˄ Q)    1,2 by rule 3
5. (Q ˅ R)    2,3 by rule 4
6. ((P ˄ Q) → (Q ˅ R))    4,5 by rule 5
7. ¬((P ˄ Q) → (Q ˅ R))    6 by rule 2
Notice that parentheses must be added for applications of all rules except for rules 1 and 2.
Once you get the hang of doing such well-formedness proofs, you will see that they are very
easy. It is very important to be able to do them though, for what follows. We cannot know what a
sentence means, unless we know how a sentence is constructed. What holds for our formal
language holds for English as well. The meaning of a sentence is a function of the meanings of
the words in the sentence and the way those words are put together. If we didn’t have a tacit, or
unconscious, knowledge of English syntax, we would not understand any sentences. Part of the
goal of formal languages is to make explicit the procedures we follow subconsciously when we recognize that sentences are well-formed. While natural languages are far more complex than the formal language given here, it is likely that we follow a similar (albeit much more complicated) procedure every time we speak or understand a sentence.
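The bottom-up construction procedure can also be mechanized. Here is a minimal sketch in Python (my own illustration, not part of the notes; it handles single capital-letter variables only and omits the subscripted Z’s):

```python
# A recursive well-formedness checker for L, mirroring formation rules (1)-(6).
def is_wff(s: str) -> bool:
    s = s.replace(" ", "")
    # Rule 1: a single capital letter is a wff (subscripted Z's omitted here).
    if len(s) == 1:
        return s.isalpha() and s.isupper()
    # Rule 2: if φ is a wff, then ¬φ is a wff (no parentheses added).
    if s.startswith("¬"):
        return is_wff(s[1:])
    # Rules 3-5: (φ ˄ ψ), (φ ˅ ψ), (φ → ψ) -- outer parentheses required.
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i, c in enumerate(s):
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
            elif depth == 1 and c in "˄˅→":
                # Split at the main connective and check both sides.
                return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    # Rule 6: nothing else is a wff.
    return False

print(is_wff("(P ˅ Q)"))                  # True
print(is_wff("¬((P ˄ Q) → (Q ˅ R))"))     # True
print(is_wff("P ˄ Q"))                    # False
```

Note that, exactly as in the proofs above, a string like ‘P ˄ Q’ without outer parentheses is rejected, since rules 3-5 always add parentheses.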
Homework 1
Construct proofs of well-formedness for the following wffs.
1. (P ˅ Q)
2. (¬P → Q)
3. (P ˄ ¬Q)
4. ((P ˅ ¬Q) ˄ R)
5. (P → ¬(P ˅ R))
6. ¬¬¬¬P
7. ¬¬(P ˅ ¬P)
8. (Q ˄ ¬¬(P → (Q ˄ ¬R)))
9. (R → ¬((P ˅ ¬¬Q) → ¬¬R))
10. (P → ¬(P ˅ ¬¬¬P))
Semantics for Propositional Logic
One of the most interesting issues in the philosophy of language concerns the notion of compositionality. It starts with a puzzle raised by Descartes. Given that the overwhelming majority of sentences you hear and speak have never been spoken before and will never be spoken again, how do you understand what they mean? Some philosophers, like Donald Davidson, pose this issue in terms of human finitude. For any natural language there is no upper bound on the length of sentences. But that means that every natural language in some sense includes an infinite number of sentences. But how do finite beings like us grasp such an infinity?
The standard answer to both Descartes’ and Davidson’s questions is compositionality.
Languages contain a finite number of words and a finite set of principles for combining these
words. Some of these combinatory principles are merely syntactic, concerning whether
sentences are well formed or grammatical. For propositional logic we’ve already learned all of
the syntax. Some of the combinatory principles are semantic, concerning how the meaning of
larger units of language, such as sentences, is derived from the meaning of parts (there are many
other kinds of principles such as phonetic, morphological, and pragmatic; but in this class we
only concern ourselves with syntax and semantics).
2- Propositional Logic: Semantic Proofs
Weirdly, in logic there are two approaches to the semantics of formal languages, the model-theoretic approach and the proof-theoretic approach. Sometimes by “semantics” logicians just
mean the model-theoretic approach, because it is very clear just from looking at the model theory
that it is a model of how certain facets of the meaning of sentences are determined via a finite set
of principles from certain facets of the meaning of parts. The truth table semantics we are doing
today is the model theory for classical propositional logic, and we will learn the proof theory
later.
Before shortening all of this via truth-tables, we will first develop a system of semantic proofs
parasitic on our system of syntax proofs. For each rule of syntactic proof, there will be two to
four semantic rules.
The syntax proof system began with the following rule:

Where φ is a propositional variable,
________
n) φ    rule 1

We replace this with the following two rules:

Where φ is a propositional variable,
________
n. T “φ”    assignment

and,

________
n. F “φ”    assignment
These rules allow you to assign truth-values (T stands for truth and F for falsity) to the
propositional variables within a sentence. Then the other rules will tell you how to derive the
truth-values of more complicated sentences containing the propositional variables in question.
Now let’s look at our negation rules and do a couple of problems.
n. T “φ”
   F “¬φ”    n ¬

n. F “φ”
   T “¬φ”    n ¬
These rules just say that negation switches the truth-value of sentences.
So here is a sample problem:
Where P is true, what is the truth value of ¬¬P?
Proof:
1. T “P”    assignment
2. F “¬P”    1 ¬
3. T “¬¬P”    2 ¬
Or sometimes you will need to compute all possible assignments:
Do a semantic proof for all possible assignments of truth values to the propositional variables in ¬¬¬P.

Proof:
1. T “P”    assignment
2. F “¬P”    1 ¬
3. T “¬¬P”    2 ¬
4. F “¬¬¬P”    3 ¬

Proof:
1. F “P”    assignment
2. T “¬P”    1 ¬
3. F “¬¬P”    2 ¬
4. T “¬¬¬P”    3 ¬
These proofs tell us that if P is true, then ¬¬¬P is false, and if P is false, then ¬¬¬P is true.
For our other connectives there are four versions of each rule. Let us do ˄ first and give a couple
of examples.
m. T “φ”
n. T “ψ”
   T “(φ ˄ ψ)”    m,n ˄

m. T “φ”
n. F “ψ”
   F “(φ ˄ ψ)”    m,n ˄

m. F “φ”
n. T “ψ”
   F “(φ ˄ ψ)”    m,n ˄

m. F “φ”
n. F “ψ”
   F “(φ ˄ ψ)”    m,n ˄
The only difference between this and the negation rule is that we are dealing with two premise lines. Here are four sample problems.
Where P is true and Q is false, what is the truth-value of (P ˄ Q)?
Proof:
1. T “P”    assignment
2. F “Q”    assignment
3. F “(P ˄ Q)”    1,2 ˄
Where P is true, what is the truth-value of (P ˄ P)?
Proof:
1. T “P”    assignment
2. T “(P ˄ P)”    1,1 ˄
Where P is true, what is the truth-value of (P ˄ ¬P)?
Proof:
1. T “P”    assignment
2. F “¬P”    1 ¬
3. F “(P ˄ ¬P)”    1,2 ˄
Where P is true and Q is false, what is the truth value of ((P ˄ Q) ˄ ¬P)?
Proof:
1. T “P”    assignment
2. F “Q”    assignment
3. F “(P ˄ Q)”    1,2 ˄
4. F “¬P”    1 ¬
5. F “((P ˄ Q) ˄ ¬P)”    3,4 ˄
Proofs involving disjunction look much like the above, except the truth functions are different.
Here are our rules for disjunction:
m. T “φ”
n. T “ψ”
   T “(φ ˅ ψ)”    m,n ˅

m. T “φ”
n. F “ψ”
   T “(φ ˅ ψ)”    m,n ˅

m. F “φ”
n. T “ψ”
   T “(φ ˅ ψ)”    m,n ˅

m. F “φ”
n. F “ψ”
   F “(φ ˅ ψ)”    m,n ˅
Many people, when seeing this the first time, think that this is weird in the case where both
sentences are true. What we are defining here is called “inclusive disjunction” and when we get
to truth tables we will present the reasons why we use this disjunction. For now it’s good just to
get comfortable with the system. Here’s a problem using disjunction:
When P is false and Q is also false, what is the truth value of (¬(P ˄ ¬P) ˅ Q)?
Proof:
1. F “P”    assignment
2. F “Q”    assignment
3. T “¬P”    1 ¬
4. F “(P ˄ ¬P)”    1,3 ˄
5. T “¬(P ˄ ¬P)”    4 ¬
6. T “(¬(P ˄ ¬P) ˅ Q)”    2,5 ˅
Finally, here are our four versions of the conditional rule.
m. T “φ”
n. T “ψ”
   T “(φ → ψ)”    m,n →

m. T “φ”
n. F “ψ”
   F “(φ → ψ)”    m,n →

m. F “φ”
n. T “ψ”
   T “(φ → ψ)”    m,n →

m. F “φ”
n. F “ψ”
   T “(φ → ψ)”    m,n →
As with treating disjunction inclusively, many people find some aspects of this weird, in particular the fact that a conditional is counted as true whenever the antecedent (the first term, φ, above) is false. Again, let’s just get comfortable doing the proofs for now. When we do truth tables we will discuss why this is the case for propositional logic. In both cases, the problem reveals fundamental limitations of the logic, though more severe in the case of the conditional.

At this point it is very, very important to note one difference between the conditional and the other operators. Conjunction is commutative, which in this context means that (φ ˄ ψ) is true (false) in exactly the same situations as (ψ ˄ φ). Similarly with disjunction. But this is not the case with respect to the conditional. Where φ is true and ψ is false, then by the above rules (φ → ψ) will be false and (ψ → φ) will be true.

As a result of this, it is extremely important when doing these proofs with conditionals to make sure what line your φ is on and what line your ψ is on, and to use the correct version of the rule.
Otherwise, everything is the same as before.
When P is false, Q is true, and R is false, what is the truth value of (P → (¬Q ˅ R))?

1. F “P”    assignment
2. T “Q”    assignment
3. F “R”    assignment
4. F “¬Q”    2 ¬
5. F “(¬Q ˅ R)”    4,3 ˅
6. T “(P → (¬Q ˅ R))”    1,5 →
Homework 2
Let P be true, Q false, and R true. Construct semantic proofs for the following.
1. (P ˅ Q)
2. (¬P → Q)
3. (P ˄ ¬Q)
4. ((P ˅ ¬Q) ˄ R)
5. (P → ¬(P ˅ R))
6. ¬¬¬¬P
7. ¬¬(P ˅ ¬P)
8. (Q ˄ ¬¬(P → (Q ˄ ¬R)))
9. (R → ¬((P ˅ ¬¬Q) → ¬¬R))
10. (P → ¬(P ˅ ¬¬¬P))
3- Propositional Logic: (In)consistency, Tautologies, Contingency, and
Entailment via Semantic Proofs
With the syntax and semantics we have for our language we can make precise four semantic concepts.
Since determining these is much easier with truth-tables, we won’t spend too much time on this
now, just using the difficulty to motivate moving to truth-tables.
Tautology- A sentence φ is a tautology if and only if φ is true for all interpretations of the propositional variables in φ (i.e. if φ is true on every possible semantic proof for φ, where the possibilities are exhausted by the different ways to assign truth-values to the propositional variables in φ).
Claim: ‘(P ˅ ¬P)’ is a tautology.
Proof:
1. T “P”    assignment
2. F “¬P”    1 ¬
3. T “(P ˅ ¬P)”    1,2 ˅

1. F “P”    assignment
2. T “¬P”    1 ¬
3. T “(P ˅ ¬P)”    1,2 ˅
Since there are only two possibilities for P’s truth-value, we only need to consider two proofs. However, for sentences containing n distinct propositional variables, there are 2 raised to the nth power possibilities. This means that showing that (((P → Q) ˄ P) → Q) is a tautology will require four semantic proofs. Showing that (P ˅ (¬P ˅ (Q ˄ R))) is a tautology will require eight proofs! Very quickly this becomes unmanageable. Truth-table semantics will make this much easier, and then proof-theoretic semantics even easier with respect to some of the concepts here.
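The combinatorial blow-up is easy to see directly. Here is a quick sketch (my own illustration, using Python’s itertools) that enumerates the possible assignments for a given list of variables:

```python
# n propositional variables yield 2**n truth-value assignments,
# and hence 2**n semantic proofs to check.
from itertools import product

def assignments(variables):
    """All possible ways to assign True/False to the given variables."""
    return [dict(zip(variables, row))
            for row in product([True, False], repeat=len(variables))]

print(len(assignments(["P"])))            # 2 proofs needed
print(len(assignments(["P", "Q"])))       # 4 proofs needed
print(len(assignments(["P", "Q", "R"])))  # 8 proofs needed
```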
Contradiction- A sentence φ is a contradiction if and only if φ is false for all interpretations of the propositional variables in φ (i.e. if φ is false on every possible semantic proof for φ, where the possibilities are exhausted by the different ways to assign truth-values to the propositional variables in φ).
Claim: ‘(P ˄ ¬P)’ is a contradiction.
Proof:
1. T “P”    assignment
2. F “¬P”    1 ¬
3. F “(P ˄ ¬P)”    1,2 ˄

1. F “P”    assignment
2. T “¬P”    1 ¬
3. F “(P ˄ ¬P)”    1,2 ˄
Again, showing that (P ˄ (¬P ˄ (Q ˄ R))) is a contradiction would require eight proofs.
Contingent sentence- A sentence φ is a contingent sentence if and only if φ is neither a tautology nor a contradiction (i.e. if φ is true on one or more semantic proofs for φ, and φ is false on one or more semantic proofs for φ).
Claim: ‘P’ is a contingent sentence.
Proof:
1. T “P” assignment
1. F “P” assignment
The most important semantic relationship in logic is the following.
Entailment- A set of sentences Γ entails a sentence φ if and only if, for all interpretations of the propositional variables in Γ and φ, it is not the case that all of the sentences in Γ are true and φ is false. (We also say ‘Γ |= φ,’ or ‘φ’ is a logical consequence of ‘Γ,’ or the inference from the premisses Γ to the conclusion φ is valid.)
Demonstrating this would require so many different semantic proofs, that we will not even do
any examples until we have the truth-table method under our belt!
Homework 3
Do all of the relevant semantic proofs to determine whether the following are tautologies,
contradictions, or contingencies.
1. ¬¬P
2. (P → P)
3. ¬(P → P)
4. (¬P → Q)
4- Propositional Logic: Truth Tables
Here we will see how much easier it is to determine the semantic properties of sentences and sets
of sentences if we use the truth table method.
Truth tables capture all of the same information as our semantic proofs did, i.e.:
(1) ¬φ is true if and only if φ is false.
(2) (φ ˄ ψ) is true if and only if φ is true and ψ is true.
(3) (φ ˅ ψ) is true if and only if φ is true or ψ is true.
(4) (φ → ψ) is true if and only if φ is false or ψ is true.
With our syntax and semantics for L we have a procedure for determining the truth value of any
sentence of L from the truth value of the propositional variables in the sentence. The easiest way
to utilize this is via the method of truth tables. A truth table simply encodes the information
given in the semantics of L. Here are the truth tables for the logical connectives.
¬
T || F
F || T
|
T |
T |
F |
F |
(˄ 
T ||
T
F ||
F
T ||
F
F ||
F
|
T |
T |
F |
F |
(˅ 
T ||
T
F ||
T
T ||
T
F ||
F
|
T |
T |
F |
F |
(→ 
T ||
T
F ||
F
T ||
T
F ||
T
The Greek letters in the truth tables are schematic, standing for any wff of L. Thus, we use the
schematic letters in the above truth tables to construct bigger truth tables for any wff of L. For
example, take the sentence ‘¬(A ˄ B)’ of L. I can construct a truth table for this sentence in three
steps.
First I make the table itself.

A | B || ¬(A ˄ B)
T | T ||
T | F ||
F | T ||
F | F ||
Each row on the table corresponds to a certain assignment of truth-values to the propositional variables in our sentence. So I next carry the truth values of the variables over to the right hand side.

A | B || ¬(A ˄ B)
T | T ||   T   T
T | F ||   T   F
F | T ||   F   T
F | F ||   F   F
Then I utilize the above schematic truth tables to discern the truth value on each row for the logical operator that binds A and B, in this case ‘˄’. Thus I have:

A | B || ¬(A ˄ B)
T | T ||   T T T
T | F ||   T F F
F | T ||   F F T
F | F ||   F F F
Note that we fill these in in exactly the same order as we do semantic proofs.
Now I’ve determined for each assignment of truth-values to propositional variables what the truth-value of ‘(A ˄ B)’ is (the truth-value underneath the ‘˄’). Now I need to use the schematic truth table for ‘¬’ to determine the truth-value of the whole sentence on each line.

A | B || ¬(A ˄ B)
T | T || F T T T
T | F || T T F F
F | T || T F F T
F | F || T F F F
The truth-value under the negation sign corresponds to the truth-value of the whole sentence,
given the truth-values of propositional variables on the left (again, negation would be the last
operator added by a syntax proof and the last rule used in a semantic proof). For example, I’ve
now proven that when ‘A’ is true and ‘B’ is true, that ‘¬(A ˄ B)’ is false. We see how each row
of the truth-table represents all of the same information as one of our earlier semantic proofs.
Here’s an example of a completed truth table for the sentence ‘(¬P → (Q ˄ R))’.
P | Q | R || (¬P → (Q ˄ R))
T | T | T || F T   T   T T T
T | T | F || F T   T   T F F
T | F | T || F T   T   F F T
T | F | F || F T   T   F F F
F | T | T || T F   T   T T T
F | T | F || T F   F   T F F
F | F | T || T F   F   F F T
F | F | F || T F   F   F F F
Thus, we know that if ‘P’, ‘Q’, and ‘R’ are all true, then ‘(¬P → (Q ˄ R))’ will also be true.
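The whole table-filling procedure can be mechanized. Here is a sketch (my own illustration; the formula is hard-coded rather than parsed) that reproduces the eight rows for ‘(¬P → (Q ˄ R))’:

```python
# Mechanizing the truth-table method for (¬P → (Q ˄ R)).
from itertools import product

def implies(a, b):
    # (φ → ψ) is true if and only if φ is false or ψ is true.
    return (not a) or b

def truth_table():
    rows = []
    for p, q, r in product([True, False], repeat=3):
        rows.append((p, q, r, implies(not p, q and r)))
    return rows

for p, q, r, val in truth_table():
    print("T" if p else "F", "T" if q else "F", "T" if r else "F",
          "||", "T" if val else "F")
```

The final column printed matches the table above: true on every row except the three where ‘P’ is false and ‘(Q ˄ R)’ is false.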
A Note on Translation:
The intended interpretations of our logical constants are: ‘˅’ means ‘or’, ‘˄’ means ‘and’, ‘¬’ means ‘it is not the case that’, and ‘→’ means ‘implies’. Given this, a surprisingly large number of natural language sentences can be translated into L. In this manner, we can determine the validity of many natural language arguments by translating the sentences of the argument into L and then using the truth-table method to determine if the conclusion of the argument is entailed by the premises.
For example, consider the following argument.
If Jones goes to the bank, then he takes out money. Jones does not take out money. Therefore
Jones does not go to the bank.
Translation Manual:
Let:
P = Jones goes to the bank.
Q = Jones takes out money.

Premisses: (P → Q), ¬Q
Conclusion: ¬P
Then we can use the truth-table method to verify that {(P → Q), ¬Q} entails ¬P, which we will do in the next homework assignment.
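For readers who want to check such entailments mechanically, here is a brute-force sketch (my own illustration; premises and conclusion are encoded as Python functions of an assignment, which is an assumption, not the notes’ notation):

```python
# Γ |= φ iff no assignment makes every premise true and the conclusion false.
from itertools import product

def entails(premises, conclusion, variables):
    for row in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a countermodel
    return True

# Modus tollens: {(P → Q), ¬Q} entails ¬P.
premises = [lambda v: (not v["P"]) or v["Q"],   # (P → Q)
            lambda v: not v["Q"]]               # ¬Q
conclusion = lambda v: not v["P"]               # ¬P
print(entails(premises, conclusion, ["P", "Q"]))  # True
```

The same function returns False for invalid forms such as affirming the consequent, because it finds the row where the premises are true and the conclusion false.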
This simple language is already expressively rich enough to formulate some non-trivial
philosophical arguments. Consider:
Translation Manual:
Let:
P = God exists.
Q = An all powerful, all good, creature exists.
R = Evil exists.

Premisses: (P → Q), (R → ¬Q), R
Conclusion: ¬P
Again, the truth-table method will show this argument to be valid. That is, it is not logically possible for the premises to be true and the conclusion false. This does not mean that the premises actually are true, though. The traditional free will defense response to the problem of evil argues that (R → ¬Q) is false. Process theology argues that (P → Q) is false. One might take Anselm to argue that R is false.
Some English words translate into L in ways that seem surprising at first. For example, ‘P unless Q’ translates into ‘(¬Q → P)’, ‘P only if Q’ translates as ‘(P → Q)’, and ‘P if Q’ translates as ‘(Q → P)’. One completely essential bit of advice: if you get negated sentences in the argument, only translate the non-negated part of the sentence in your translation manual. When you use the negated sentence in an argument, just put a ‘¬’ in front of it, as in the argument above.
One might reasonably ask right now whether the semantic interpretation of our formulas really does license the translations that we perform. In particular, does the ‘˅’ of L really mean the same thing as the ‘or’ of English, and does the ‘→’ of L really mean the same thing as ‘implies’ or ‘if. . . then’? The answer to both of these questions is ‘no’, but they do mean the same thing as their English translations in a wide variety of cases. Moreover, this wide variety of cases covers much of the deductively relevant English usage.
First we’ll consider ‘˅’. Remember that the truth table for ‘˅’ is,

φ | ψ || (φ ˅ ψ)
T | T || T
T | F || T
F | T || T
F | F || F

The ‘˅’ of our language is often called ‘inclusive disjunction’ because it is true if both of its disjuncts are true. This strikes some people as weird because they reason that the value at the first row of the table should be ‘F’. In fact one could define truth-functionally an ‘exclusive disjunction’ with the following truth table:

φ | ψ || (φ ˅E ψ)
T | T || F
T | F || T
F | T || T
F | F || F
There are four reasons that we don’t take exclusive disjunction to be an operator in our language. The first is that most English uses of ‘or’ are really inclusive disjunctions. For example, if I say, ‘I’m going to the store or I’m going to a movie’, and I end up doing both, translating the ‘or’ as exclusive disjunction would have made me say something false. For embedded uses of ‘or’, such as in, ‘If I eat at the restaurant, then I will fall asleep from the MSG in the food, or I will get hungry thirty minutes later,’ it is even clearer that exclusive disjunction doesn’t serve as an adequate translation.
Second, in cases where someone says a disjunction, both disjuncts are true, and we feel we’ve been misled, this is arguably (according to Grice) always only because the person knew both disjuncts were true but did not assert a conjunction. If I am in a position to know whether “P and Q” is true and I only assert “P or Q,” it would be rational for you to conclude that it is not the case that “P and Q.” Let’s say I actually knew that “P and Q” was true, yet I just said “P or Q.” What I said was true, but still misleading. It would be a little bit like saying there is an animal behind someone when you see that it is a bear. You said something true, but violated the conversational maxim to say the logically strongest thing for which you have evidence.
Third, we can express exclusive disjunction in L. The sentence ((P ˅ Q) ˄ ¬(P ˄ Q)) is logically equivalent to an exclusive disjunction. Check it by doing its truth-table. The truth table for the sentence yields false only when both disjuncts are true or both are false.
The fourth reason concerns using inclusive disjunction as a basic operator when doing certain mathematical proofs about logical languages. If you continue to study logic you will see that in these contexts it is much better to deal with a language that takes inclusive disjunction as primitive.
Many people are perplexed by the truth table for ‘→’.

φ | ψ || (φ → ψ)
T | T || T
T | F || F
F | T || T
F | F || T
Some of this perplexity can be cleared up by the realization that we are taking every sentence of our language to be true or false. Thus there has to be some value for the last two rows of the truth table for ‘→’. Consider the three other possibilities (since everyone will agree that the first two rows of the truth table are correct, there are only three other possibilities).
| (→1  | (→2  | (→3 
T | T ||
T
T | T ||
T
T | T ||
T
T | F ||
F
T | F ||
F
T | F ||
F
F | T ||
T
F | T ||
F
F | T ||
F
F | F ||
F
F | F ||
T
F | F ||
F
None of these correspond to anything like the intuitive meaning of ‘if. . . then’. The first one would have (A → B) mean the same thing as B! The second one is the same truth table as ‘if and only if’ (which we can formulate in our language as ((A → B) ˄ (B → A))), and the third one is the same truth table as ‘˄’. Since we know that none of these mean the same thing as ‘if. . . then,’ and there are only four choices, we use the truth table we’ve picked. Later in the quarter, after we have done work on deduction, we will be able to show why this choice does not get us into any trouble.
This being said, the overwhelming majority of philosophers now accept that many natural language uses of ‘if. . . then’ do not have the same meaning as ‘→’. Consider the sentence “If I’m going to the store, then I’m going to buy some cigarettes.” Now assume I do not go to the store and I do not buy cigarettes. Is the sentence true or false?
Some people have the intuition that the sentence is true if it were the case that had I gone to the
store then I would have bought cigarettes and false if it were the case that had I gone to the store
then I would not have bought cigarettes. But that means that the truth-value of the sentence is not
merely a function of the truth-value of its parts at the actual world. We must consider what
happens in possible worlds where I do go to the store. As a result, the logical semantics of
natural language conditionals must involve more than just truth-values. In Stalnaker semantics,4
for example, we say that a counterfactual conditional is true if the closest possible world where
the antecedent is true is also a world where the consequent is true. Semantics of this kind
involves possible worlds, and is typically taught in modal logic classes (such as LSU's 4011).
Homework 4
Construct truth-tables for the following sentences.
1. ((¬P ˄ Q) ˅ P)
2. ((P → Q) → ¬P)
3. ¬(P ˅ Q) → (¬P ˄ Q)
4. ((P ˄ R) ˄ (P ˅ ¬Q))
5. (((P → ¬R) → ¬Q) → ¬P)
5- Propositional Logic: (In)consistency, Tautologies, Contingency, and
Entailment via Truth Tables
Here we will see how much easier it is to determine our semantic properties via truth tables.
Tautology- A sentence φ is a tautology if and only if φ is true for all interpretations of the
propositional variables in φ (i.e. if φ is true on every row of φ’s truth table).
Claim: ‘(P ˅ ¬P)’ is a tautology.
Proof:
P || (P ˅ ¬P)
T || T T F T
F || F T T F
Contradiction- A sentence φ is a contradiction if and only if φ is false for all interpretations of
the propositional variables in φ (i.e. if φ is false on every row of φ’s truth table).
Claim: ‘(P ˄ ¬P)’ is a contradiction.
Proof:
P || (P ˄ ¬P)
T || T F F T
F || F F T F
Contingent sentence- A sentence φ is a contingent sentence if and only if φ is neither a tautology
nor a contradiction (i.e. if φ is true on one or more rows of φ’s truth table, and φ is false on one
or more rows of φ’s truth table).
Claim: ‘P’ is a contingent sentence.
Proof:
P || P
T || T
F || F
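The three definitions above can be checked by brute force: evaluate the sentence on every row of its truth table and look at the pattern of results. Here is a Python sketch (the function name `classify` is our own, and formulas are represented as Python functions) covering the three claims just proved:

```python
from itertools import product

def classify(formula, variables):
    """Classify a formula as 'tautology', 'contradiction', or 'contingent'
    by evaluating it on every assignment (every row of its truth table)."""
    values = [formula(*row) for row in product([True, False], repeat=len(variables))]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingent"

# The three claims proved above:
print(classify(lambda p: p or not p, ["P"]))   # (P v ¬P) -> tautology
print(classify(lambda p: p and not p, ["P"]))  # (P ^ ¬P) -> contradiction
print(classify(lambda p: p, ["P"]))            # P        -> contingent
```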
Entailment- A set of sentences Γ entails a sentence φ if and only if, for all interpretations of the
propositional variables in Γ and φ, it is not the case that all of the sentences in Γ are true and φ is
false. (We also say ‘Γ |= φ,’ or ‘φ is a logical consequence of Γ,’ or the inference from the
premises Γ to the conclusion φ is valid.)
Claim: {‘P’, ‘(P → Q)’} entails ‘(P ˄ Q)’.
Proof:
P | Q || P | (P → Q) | (P ˄ Q)
T | T || T |    T    |    T
T | F || T |    F    |    F
F | T || F |    T    |    F
F | F || F |    T    |    F
Notice four important things about our definition of entailment: (1) It is equivalent to saying that
Γ entails φ if and only if whenever all of the sentences of Γ are true then φ is true, (2) a
tautology is entailed by any set of sentences (since a tautology is true for any assignment there will be
no row where the premises are true and the tautology is false), (3) if a contradiction is among the
premises, then that set of premises entails any sentence (if a contradiction is among the premises,
then there will be no row where all of the premises are true, and thus no row where all of the
premises are true and the conclusion is false), and (4) a tautology is merely a case of entailment
where there are no premises.
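The definition of entailment suggests a mechanical test: search for a row on which every member of Γ is true while φ is false. A Python sketch of that test (the helper name `entails` is our own, and formulas are represented as Python functions), applied to the claim just proved and to note (4):

```python
from itertools import product

def entails(premises, conclusion, n_vars):
    """Gamma |= phi iff no assignment makes every premise true and phi false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

# {P, (P -> Q)} |= (P ^ Q): the claim verified in the table above.
p_ = lambda p, q: p
cond = lambda p, q: (not p) or q
conj = lambda p, q: p and q
print(entails([p_, cond], conj, 2))             # True

# With no premises at all, entailment of phi just says phi is a tautology.
print(entails([], lambda p, q: p or not p, 2))  # True
```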
Homework 5
Using the truth table method, determine whether the following sentences are contradictory,
tautologies, or contingent (you don’t have to construct truth tables here if you don’t need to).
1. (P ˅ ¬P)
2. (P ˄ ¬(P ˅ Q))
3. (P ˅ ¬(Q ˄ R))
4. ((P → Q) ˅ (Q → P))
5. ((P → Q) → ¬P)
6. (((¬P → R) ˄ R) ˄ ¬P)
7. (P ˄ (Q ˅ P))
8. (P ˄ ¬P)
9. (((P → Q) ˄ P) → (Q ˄ R))
10. ((P ˅ Q) → (¬P → Q))
Using the truth table method, determine whether the following are correct or not (Show the truth
tables for these exercises).
1. P, P → Q |= Q
2. P, Q → P |= Q
3. P → Q, ¬Q |= ¬P
4. P → Q, ¬P |= ¬Q
5. P → Q, Q → R, P |= R
6. (P ˄ ¬P), R |= Q
7. R, Q |= (P ˅ ¬P)
8. P, Q, ((P ˄ Q) → R) |= (R ˅ S)
9. ((P ˅ Q) → R), P |= R
Proof Theory for Propositional Logic
Minimal Logic
6- Propositional Minimal Logic Part 1: Rules without Discharge (Reflexivity, ˄
Introduction and Elimination, ˅ Introduction, → Elimination, ¬ Elimination)
Here we will explore an easy proof system based upon a fragment of L; we’ll call this fragment
L- . Consider the following schematic rules:
Reiteration:

n. φ
   φ          n reiteration

Elimination rules:

n. (φ ˄ ψ)
   φ          n ˄ elimination

n. (φ ˄ ψ)
   ψ          n ˄ elimination

m. (φ → ψ)
n. φ
   ψ          m,n → elimination

m. φ
n. ¬φ
   ⊥          m,n ¬ elimination

Introduction rules:

m. φ
n. ψ
   (φ ˄ ψ)    m,n ˄ introduction

n. φ
   (φ ˅ ψ)    n ˅ introduction

n. ψ
   (φ ˅ ψ)    n ˅ introduction
The conventions for reading these schematic rules are exactly the same as those for reading the
rules for syntactic proofs of well-formedness. If a sentence φ is provable from a set of sentences
Γ using the above rules, we can say that:

Γ |-L- φ.
The rule of reflexivity simply says that any sentence guarantees its own truth. For example, take
the following proof:
Claim: (P ˄ Q) |-L- (P ˄ Q)
Proof:
1. (P ˄ Q)
2. (P ˄ Q)    1 reflexivity
This is a valid proof in our system. ‘(P ˄ Q)’ substitutes for the φ in the statement of the rule of
inference. The number on the right hand side shows which line the conclusion rests upon. We
say that we used the rule of reflexivity to derive the conclusion from the premise.
The rule of ˄ -introduction says that whenever two sentences are true then the conjunction of the
two sentences is true. For example:
Claim: (P ˅ Q), R |-L- ((P ˅ Q) ˄ R)
Proof:
1. (P ˅ Q)
2. R
3. ((P ˅ Q) ˄ R)    1,2 ˄ introduction
‘(P ˅ Q)’ substitutes for the φ and ‘R’ substitutes for the ψ in the statement of the rule of
inference. The numbers on the right hand side show which lines the conclusion rests upon.
(Here 1 substitutes for m, and 2 substitutes for n.)
The rule of ˄ elimination says that whenever a conjunction is true, both of its conjuncts are also
true. For example:
Claim: ((P → Q) ˄ R) |-L- (P → Q)
Proof:
1. ((P → Q) ˄ R)
2. (P → Q)    1 ˄ elimination
Now we have the ability to construct proofs using more than one rule of inference. For example:
Claim: ((P ˄ Q) ˄ R) |-L- (P ˄ R)
Proof:
1. ((P ˄ Q) ˄ R)
2. (P ˄ Q)    1 ˄ elimination
3. P          2 ˄ elimination
4. R          1 ˄ elimination
5. (P ˄ R)    3,4 ˄ introduction
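Checking a proof like this is purely mechanical: each line either matches one of the rule schemas or it does not. Below is a minimal Python sketch of such a checker, covering only reiteration and the ˄ rules (the representation and function names are our own invention; the full definition of proof given later adds assumptions and discharge, which this sketch ignores):

```python
# Formulas as nested tuples: ("and", A, B); atoms as plain strings.
# A proof line is (formula, rule, cited), where cited lists the 1-based
# numbers of the earlier lines the step rests upon.
def check(premises, lines):
    proved = list(premises)
    for formula, rule, cited in lines:
        prior = [proved[i - 1] for i in cited]
        if rule == "reiteration":
            ok = prior[0] == formula
        elif rule == "and_elim":
            ok = prior[0][0] == "and" and formula in prior[0][1:]
        elif rule == "and_intro":
            ok = formula == ("and", prior[0], prior[1])
        else:
            ok = False  # rules not covered by this sketch
        if not ok:
            return False
        proved.append(formula)
    return True

# The five-line proof above: ((P ^ Q) ^ R) |-L- (P ^ R)
PQ = ("and", "P", "Q")
PQR = ("and", PQ, "R")
proof = [
    (PQ, "and_elim", (1,)),                    # 2. (P ^ Q)   1 ^ elimination
    ("P", "and_elim", (2,)),                   # 3. P         2 ^ elimination
    ("R", "and_elim", (1,)),                   # 4. R         1 ^ elimination
    (("and", "P", "R"), "and_intro", (3, 4)),  # 5. (P ^ R)   3,4 ^ introduction
]
print(check([PQR], proof))  # True
```

The point of the sketch is that nothing about verifying a completed proof requires insight; discovering proofs is the hard part.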
The following proof uses ˅ introduction, which says that if a sentence is true then that sentence
disjoined with any other sentence is also true.
Claim: (P ˄ R) |-L- (P ˅ Q)
Proof:
1. (P ˄ R)
2. P          1 ˄ elimination
3. (P ˅ Q)    2 ˅ introduction
The following proof uses → elimination.
Claim: ((P ˅ Q) → R), P |-L- R
Proof:
1. ((P ˅ Q) → R)
2. P
3. (P ˅ Q)    2 ˅ introduction
4. R          1,3 → elimination
It is extremely important to be clear about where the conditional is (here on line 1), where that
conditional’s antecedent is (here on line 3), and where that conditional’s consequent is (here on
line 4). The rule of → elimination
m) ( → )
n) 

n,m → elimination

says that if you already have a conditional and that conditional’s antecedent on lines, then you
can infer the consequent.
For some reason, a non-trivial number of students will misread this rule, as if it licensed inferring
an antecedent from a conditional. But this would be invalid! “If I go to the store, then I will buy
smokes” does not entail that I will go to the store! Rather “If I go to the store, then I will buy
smokes” and “I go to the store” together entail that I will buy smokes.
The following proof uses ¬ elimination.
Claim: ((P ˄ Q) ˄ ¬P) |-L- ⊥
Proof:
1. ((P ˄ Q) ˄ ¬P)
2. (P ˄ Q)    1 ˄ elimination
3. P          2 ˄ elimination
4. ¬P         1 ˄ elimination
5. ⊥          3,4 ¬ elimination
But what does ⊥ mean? We call it the absurdity constant, and it intuitively means that whatever
entails it cannot, as a matter of logic, be true.
Note that in even very weak logics (Minimal Logic) one can use the absurdity constant to do
without negation! That is, once we have conditional introduction it will be the case that the
sentence (P → ⊥) has exactly the same inferential role as ¬P. So in some sense, “if P, then
absurdity” just means “it is not the case that P.” But then, to the extent that you understand what
“it is not the case that” means, you understand what the absurdity constant means.
However, when we get Intuitionist Logic below by adding the intuitionist absurdity rule, we will
start to see absurdity do more things. We will discuss that there. For now just take absurdity as a
signpost that all of the stand-alone premises in a proof cannot be true. That is, if you can prove
absurdity from some premises, then it is not logically possible for all of those premises to be
true. Any logically possible world will be one where at least one of them is false.
Homework 6
Verify the following via proofs:
1. P |-L- P
2. ((P ˄ Q) ˄ R) |-L- (P ˄ Q)
3. (((P ˄ Q) ˄ R) ˄ S) |-L- P
4. (((P ˄ Q) ˄ R) ˄ S) |-L- (P ˄ S)
5. P |-L- ((P ˅ S) ˅ R)
6. (P ˄ Q) |-L- (P ˅ R)
7. P, P → Q, Q → R |-L- R
8. (P ˄ Q), ¬P |-L- ⊥
7- Propositional Minimal Logic Part 2: → Introduction
As you’ve probably discerned from doing the homework assignments, using truth tables to
determine the validity of arguments can be very tedious. In L and many languages formed by
extending L (adding to the syntax and semantics of L) logicians typically use the semantics of L
to show that purported deductive arguments are not valid. However, to show that deductive
arguments are valid, they usually utilize a system of deductive proof. Why do this? The search
tree for solving truth-tables is exponential, as a truth-table has 2^n rows, where n is the number of
distinct propositional variable types in the sentences being modeled. Exponential growth like this
is a very quick way to crash a computer. And the truth tables are tedious.
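The 2^n growth is easy to see concretely:

```python
# A truth table over n distinct propositional variables has 2**n rows,
# so brute-force truth-table checking grows exponentially in n.
for n in (1, 2, 10, 20, 30):
    print(n, 2 ** n)
```

Thirty variables already mean over a billion rows; a deductive proof of the same claim can be a handful of lines.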
We can define ‘deductive entailment’ the following way:
Deductive entailment- A set of sentences Γ deductively entails a sentence φ if and
only if there exists a proof of φ resting upon a subset of the sentences in Γ in our
proof system for L. (If this is the case we say Γ |- φ; ‘|-’ is called ‘single turnstile’.
We can also say that an argument with the sentences in Γ as premisses and φ as a
conclusion is deductively valid.)
Though we won’t prove this result in this class, two wonderful properties of the proof system CL
(full Classical Logic, which we are building up to), are the soundness and completeness results.
For anyone who holds that the truth table semantics really gives you the meaning of the logical
operators, the obtaining of these results tell us that our proof system is good enough.
Soundness- If Γ |- φ, then Γ |= φ.
Completeness- If Γ |= φ, then Γ |- φ.
These two results tell us that the relations of deductive validity and semantic validity coincide.
Soundness is good because it tells us that if we can use the truth-table method to determine that Γ
does not semantically entail φ then we know that there does not exist a proof in our system of φ
from Γ. Since semantic validity is taken as primary by most logicians (most logicians take the
semantics to really specify the meanings of the logical constants), the soundness result is usually
taken to be an assurance that every deduction in the proof system in question is really an instance of
validity. Likewise the completeness result is taken to show that the proof system in question is
capable of illustrating all instances of validity.
Before we can appreciate the force of the soundness and completeness results we need to learn
how to construct proofs in our system. First I’ll give the formal definition of proof in our system
and then I will explain how to use it in constructing proofs. Proof construction is very much a
practical ability, so if you find the formal definition a bit Byzantine, don’t worry too much. Most
of us couldn’t explain the physics involved in riding a bicycle, yet we can still ride them.
However, as you start to work through the proofs it might be helpful to look back at the
definition to gain a better understanding of what you’re doing.
X is a proof of φ from Γ if and only if
X is a numbered sequence of sentences such that:
(1) All of the premises in X are members of , and the premises, if any, occur in the
first lines of the proof.
(2) All arbitrary assumptions introduced for the rules ‘˅ elimination’, ‘→
introduction’, or ‘¬ introduction’ are discharged at a succeeding line,
(3) All sentences in the proof which are neither premises at the beginning of the
proof, nor arbitrary assumptions, are inferred by either derived or underived rules
from a previous line not in the scope of assumptions already discharged by a derived
or underived rule.
(4) The last line of the proof has φ on it.
There are several terms of art in this definition. Rather than defining each one, I will proceed to
give examples and explanations of proofs using following rules of inference.
Note that clause (3) doesn’t specify which rules these are. This is important, because we are only defining
Minimal Logic in this section; then we will add the Intuitionist absurdity rule to get Intuitionist
Logic, and finally a classical negation rule to get Classical Logic. Note that Minimal and
Intuitionist logics are incomplete vis-à-vis truth table semantics, normally understood!5
5 Kalmár’s theorem; Tennant’s discussion.
Rules of Inference for Minimal Logic
In the statement of the rules below, the Greek letters are variables which stand for any wff of L.
The lowercase English letters represent numerals. Note- it is usually not the case that n = m + 1;
it is only necessarily the case that n > m in the rules ‘˅ elimination’, ‘→ introduction’, and ‘¬
introduction’.

Reiteration:

n. φ
   φ          n reiteration

Elimination rules:

n. (φ ˄ ψ)
   φ          n ˄ elimination

n. (φ ˄ ψ)
   ψ          n ˄ elimination

m. (φ → ψ)
n. φ
   ψ          m,n → elimination

m. φ
n. ¬φ
   ⊥          m,n ¬ elimination

m. | φ        assumption for ˅ elimination
   |
n. | χ
o. | ψ        assumption for ˅ elimination
   |
p. | χ
q. (φ ˅ ψ)
   χ          m-n, o-p, q ˅ elimination

Introduction rules:

m. φ
n. ψ
   (φ ˄ ψ)    m,n ˄ introduction

n. φ
   (φ ˅ ψ)    n ˅ introduction

n. ψ
   (φ ˅ ψ)    n ˅ introduction

m. | φ        assumption for → introduction
   |
n. | ψ
   (φ → ψ)    m-n → introduction

m. | φ        assumption for ¬ introduction
   |
n. | ⊥
   ¬φ         m-n ¬ introduction
The rules other than ‘˅ elimination’, ‘→ introduction’, and ‘¬ introduction’ are exactly the same
as those for L-. ‘˅ elimination’, ‘→ introduction’, and ‘¬ introduction’ are a little more difficult
than those of L-, as they involve hypothetically assuming the truth of some sentence in the
proof, and then drawing a line from the assumption to the sub-conclusion sanctioned by the rule
of inference.
For example the following proof uses → introduction.
Claim: |- ((P ˄ R) → (P ˅ Q))
Proof:
1. | (P ˄ R)              assumption for → introduction
2. | P                    1 ˄ elimination
3. | (P ˅ Q)              2 ˅ introduction
4. ((P ˄ R) → (P ˅ Q))    1-3 → introduction
But how did we do this? First we set up the problem:
Claim: |- ((P ˄ R) → (P ˅ Q))
Proof:
n. ((P ˄ R) → (P ˅ Q))
Since there are no premises, we cannot use an elimination rule. So we look for the dominant
logical operator in the conclusion. The operator that would have been added last had we derived
the sentence by a syntactic proof of the sort we did at the beginning of the semester. The
dominant operator is →. So I immediately look to the → introduction rule:
m. | φ        assumption for → introduction
   |
n. | ψ
   (φ → ψ)    m-n → introduction
So we next have to determine what the φ and ψ are and then set up the proof the way the rule
requires us to do so. Clearly, in ((P ˄ R) → (P ˅ Q)) the φ is (P ˄ R) and the ψ is (P ˅ Q), so our
proof will look like this:
Claim: |- ((P ˄ R) → (P ˅ Q))
Proof:
1. | (P ˄ R)              assumption for → introduction
   |
m. | (P ˅ Q)
n. ((P ˄ R) → (P ˅ Q))    1-m → introduction
It’s important to see that this is the only choice one has. The rule tells you exactly how to
construct the proof. The dominant operator of the conclusion is a conditional, so we must start a
subproof by assuming the antecedent for conditional introduction and end the subproof with the
consequent. In the above proof we assume (P ˄ R) in line one, and then prove from this (P ˅ Q).
When discovering proofs you should follow procedures like this. What you’ll find out is that you
start at the top and bottom, using elimination rules on stuff you have at the top and introduction
rules to get to the bottom, and that the proofs will connect in the middle.
And now that we have a conjunction as something assumed for further discharge (which is what
the line shows), we can proceed easily, since conjunction elimination gives you so much
freedom. Let’s just get P out of it:
Claim: |- ((P ˄ R) → (P ˅ Q))
Proof:
1. | (P ˄ R)              assumption for → introduction
2. | P                    1 ˄ elimination
m. | (P ˅ Q)
n. ((P ˄ R) → (P ˅ Q))    1-m → introduction
What’s missing? We have no justification for line m. of the proof. Since it’s something we are
trying to prove (at the bottom of the proof) we find its dominant operator and look to the
introduction rule for that operator:
n. φ
   (φ ˅ ψ)    n ˅ introduction

n. ψ
   (φ ˅ ψ)    n ˅ introduction
In the context of the proof, this means if we can get either P or Q on a line higher up, we are
home free. But we already have P on line 2, so we just need to cite that line, and then everything
connects. Again:
Claim: |- ((P ˄ R) → (P ˅ Q))
Proof:
1. | (P ˄ R)              assumption for → introduction
2. | P                    1 ˄ elimination
3. | (P ˅ Q)              2 ˅ introduction
4. ((P ˄ R) → (P ˅ Q))    1-3 → introduction
The vertical bar stopping on line 3 shows that the assumption on line 1 has been discharged.
That is, the conclusion does not rest upon (P ˄ R) as a premise, as the conclusion just says that if
(P ˄ R) is true then (P ˅ Q) must be true. The truth of the conclusion does not depend on
(P ˄ R) being true, though.
With the rule of → introduction we can see the importance of the rule of reflexivity. Since (P →
P) is clearly a logical truth, we should be able to prove it not resting on any premises, in this
manner:
Claim: |- (P → P)
Proof:
1. | P        assumption for → introduction
2. | P        1 reflexivity
3. (P → P)    1-2 → introduction
Likewise, let us consider a weird fact in our logic, that for any P and any Q, if you have P as a
premise, it follows that if Q, then P. You can check this with truth tables as we defined
entailment for truth tables. We establish it in our deductive system in the following manner:
Claim: P |- (Q → P)
Proof:
1. P
n. (Q → P)
This is just how we start proofs, writing the premises at the top and the conclusions at the
bottom. Then we check to see what elimination rules we might use on the dominant logical
operators of our premises and what introduction rules we might use on the dominant logical
operators of our conclusion. Since the premise has no logical operators, we cannot use an
elimination rule. Since the conclusion’s dominant logical operator is a conditional, we use the
conditional introduction rule, as above, and get as the next step:
Claim: P |- (Q → P)
Proof:
1. P
2. | Q        assumption for → introduction
   |
m. | P
n. (Q → P)    2-m → introduction
So now we just have to prove P on line m. from the resources we already have. But we already
have P on line 1. So, again, we just use our reiteration rule:
Claim: P |- (Q → P)
Proof:
1. P
2. | Q        assumption for → introduction
3. | P        1 reit.
4. (Q → P)    2-3 → introduction
Proofs like this are a little weird, because Q isn’t cited further down in the proof (this is called
“vacuous discharge” in the biz). If you did not think that such claims were in general valid, you
would want to impose restrictions in the system such that any claim assumed for further
discharge has to actually be cited in a line on which the subconclusion at the bottom of the
subproof depends. Or equivalently, that tracing up the citations from the subconclusion leads one
in a path to the line with the premise of the subproof on it. This is exactly what does not happen
in the above proof (nor need it in general in the proof systems we are learning).
Let us consider one more example.
Claim: (P ˄ Q) → R, P |- Q → R
The first step is to write our premises at the top and conclusion at the bottom:
Claim: (P ˄ Q) → R, P |- Q → R
Proof:
1. (P ˄ Q) → R
2. P
   Q → R
Then we check the premises to see what elimination rules we might be able to use and the
conclusion to see what introduction rules we might use. Premise one is logically complex and its
dominant operator is the conditional. So we check conditional elimination:
m. ( → )
n. 

n,m → elimination

And it tells us that we would only be able to eliminate on line 1 if we also had the antecedent (P
˄ Q) on a line, at which point we would be able to conclude R. This is useful information, but
not anything we can utilize yet. So we look at the conclusion and try to use the introduction rule
on the dominant operator of that sentence. Conditional introduction requires then that the proof
will look like the following.
Claim: (P ˄ Q) → R, P |- Q → R
Proof:
1. (P ˄ Q) → R
2. P
3. | Q        assumption for → introduction
   |
m. | R
n. Q → R      3-m → introduction
So the next question is how to get R on line m. But we have already answered this, if we could
get (P ˄ Q) on a line, it would follow by conditional elimination. So let’s consider the following:
Claim: (P ˄ Q) → R, P |- Q → R
Proof:
1. (P ˄ Q) → R
2. P
3. | Q        assumption for → introduction
   |
i. | (P ˄ Q)
m. | R        1,i → elimination
n. Q → R      3-m → introduction
But how do we get (P ˄ Q)? Since it’s something we are trying to get (at the bottom of the proof)
we must look at the introduction rule for the dominant operator.
m. φ
n. ψ
   (φ ˄ ψ)    m,n ˄ introduction
So we could get it if we had both P and Q on earlier lines. But, wait a minute, we do have those
on lines 2 and 3. So I just need to connect the top and bottom of the proof now.
Claim: (P ˄ Q) → R, P |- Q → R
Proof:
1. (P ˄ Q) → R
2. P
3. | Q        assumption for → introduction
4. | (P ˄ Q)  2,3 ˄ introduction
5. | R        1,4 → elimination
6. Q → R      3-5 → introduction
Cool beans.
Homework 7
1. (P → Q), (Q → R) |- (P → R)
2. (P → Q) |- ((Q → R) → (P → R))
8- Propositional Minimal Logic Part 3: ¬ Introduction
The following proof uses ¬ introduction.
Claim: (P → Q), ¬Q |- ¬P
Proof:
1. (P → Q)
2. ¬Q
3. | P        assumption for ¬ introduction
4. | Q        1,3 → elimination
5. | ⊥        2,4 ¬ elimination
6. ¬P         3-5 ¬ introduction
This rule of inference is very intuitive if you think about it. If we were to reason through the
above proof the mental process would go like this. O.K. We know that ‘if P then Q’ is true, and
we also know that ‘it is not the case that Q’ is true. Well let’s suppose that ‘P’ were true. But
then if ‘P’ were true, then ‘Q’ would also have to be true (since we know that ‘if P then Q’ is
true). But then ‘Q and it is not the case that Q’ would be true. Well since ‘Q and it is not the case
that Q’ can never be true then the original supposition that ‘P’ is true must have been completely
wrong. Therefore if it’s true that ‘if P then Q’ and it’s true that ‘it is not the case that Q’, then it
must be true that ‘it is not the case that P’.
At this point we can see how ¬P essentially just means (P → ⊥). Consider the homologous
proof:
Claim: (P → Q), ¬Q |- (P → ⊥)
Proof:
1. (P → Q)
2. ¬Q
3. | P        assumption for → introduction
4. | Q        1,3 → elimination
5. | ⊥        2,4 ¬ elimination
6. (P → ⊥)    3-5 → introduction
The only real difference is line 6! This is important because it shows that nothing new is going on
with negation introduction. The rule tells you that if you want to prove a negation, assume the
embedded claim and prove absurdity. This is just like conditional introduction telling you to
assume the antecedent and prove the consequent.
And negation elimination is homologous to conditional elimination when applied to a conditional
with absurdity as the consequent. Negation elimination says that P and ¬P together entail
absurdity. But P and (P → ⊥) also entail absurdity, by conditional elimination.
Homework 8
1. (P → Q), ¬Q |- ¬P
2. ¬P |- ¬(P ˄ R)
3. (P → Q), (R → ¬Q) |- (P → ¬R)
9- Propositional Minimal Logic Part 4: ˅ elimination.
Our last rule of inference, ‘˅ elimination’ is also very intuitive if you think about it. It says that
if a disjunction is true, and you can prove some sentence from each disjunct, then that sentence is
itself true. For example:
Claim: (P ˅ Q), (P → R), (Q → R) |- R
Proof:
1. (P ˅ Q)
2. (P → R)
3. (Q → R)
4. | P        assumption for ˅ elimination
5. | R        2,4 → elimination
6. | Q        assumption for ˅ elimination
7. | R        3,6 → elimination
8. R          1, 4-5, 6-7 ˅ elimination
Reasoning through this proof in natural language would look like this. Suppose that ‘P or Q’, ‘if
P then R’, and ‘if Q then R’ are all true. Now suppose that ‘P’ is true, then it follows (since ‘if P
then R’ is true) that R is true. Now suppose that ‘Q’ is true, then it follows (since ‘if Q then R’ is
true) that R is true. Well since we know that either P or Q is true, then in either case R must be
true.
Things to watch out for. (1) Our definition of proof says that any assumption introduced for
later discharge must be discharged at a succeeding line. For example, the following proof is
incorrect.
1. (P → Q)
2. | P        assumption for → introduction
3. | Q        1,2 → elimination
If this were a correct proof it would mean that the truth of ‘if P then Q’ ensured the truth of ‘Q’,
which is crazy.
(2) Clause 3 of the initial definition of proof is hard for some people to wrap their mind around.
However, given our conventions for reading the schematic rules of inference it can easily be seen
what clause 3 prohibits. Basically, you should never have rows of discharge lines overlapping
like this:
|
|
| |
| |
  |
  |
Assumptions introduced for later discharge in the scope of another assumption introduced for
discharge should always be discharged prior to the assumption introduced earlier. An instance of
this correct pattern will look like this:
|
| |
| |
| |
|
Here’s an example of a correct proof that does this.
Claim: (R → (P → (Q ˄ ¬Q))) |- (R → ¬P)
Proof:
1. (R → (P → (Q ˄ ¬Q)))
2. | R                  assumption for → introduction
3. | (P → (Q ˄ ¬Q))     1,2 → elimination
4. | | P                assumption for ¬ introduction
5. | | (Q ˄ ¬Q)         3,4 → elimination
6. | | Q                5 ˄ elimination
7. | | ¬Q               5 ˄ elimination
8. | | ⊥                6,7 ¬ elimination
9. | ¬P                 4-8 ¬ introduction
10. (R → ¬P)            2-9 → introduction
Now let’s do one more using ˅ elimination.
Claim: (R ˅ (P → T)), R → T, P |- T
First we write the premises at the top and conclusion at the bottom.
Claim: (R ˅ (P → T)), R → T, P |- T
Proof:
1. (R ˅ (P → T))
2. R → T
3. P
   T
Then we do what the disjunction elimination rule tells us we need to do.
Claim: (R ˅ (P → T)), R → T, P |- T
Proof:
1. (R ˅ (P → T))
2. R → T
3. P
4. | R          assumption for ˅ elimination
   |
m. | T
n. | (P → T)    assumption for ˅ elimination
   |
o. | T
   T            1, 4-m, n-o ˅ elimination
But, since we now have R on line 4 we can eliminate on the conditional in line 2.
Claim: (R ˅ (P → T)), R → T, P |- T
Proof:
1. (R ˅ (P → T))
2. R → T
3. P
4. | R          assumption for ˅ elimination
5. | T          2,4 → elimination
6. | (P → T)    assumption for ˅ elimination
   |
o. | T
   T            1, 4-5, 6-o ˅ elimination
Now we turn to the second subproof, and see that the subconclusion follows because of the
premise on line 3.
Claim: (R ˅ (P → T)), R → T, P |- T
Proof:
1. (R ˅ (P → T))
2. R → T
3. P
4. | R          assumption for ˅ elimination
5. | T          2,4 → elimination
6. | (P → T)    assumption for ˅ elimination
7. | T          3,6 → elimination
8. T            1, 4-5, 6-7 ˅ elimination
And we’re done.
Homework 9
1. (R ˅ S), (R → P), (S → Q), (Q → P) |- P
2. (P ˅ (R → Q)), (P → (S ˄ Q)), R |- Q
10- Intuitionist Logic
We can get Intuitionist Logic by just adding the following rule to the introduction and
elimination rules of Minimal Logic.

Intuitionist Absurdity Rule:

m. ⊥
   φ          m ⊥ rule
This says that if you have absurdity on a line, you can infer anything you like.
Why believe such a thing? Well first consider how you get absurdity on a line. Absurdity
introduction requires that some sentence and its negation occur as lines in the proof. But
remember, on our truth table account of entailment, a sentence and its negation always entail
any other sentence γ, because there is no line in the truth table making both a sentence and
its negation true while making γ false.
Thus, if we want to develop a system of deduction complete with respect to truth table semantics
we better incorporate the intuitionist absurdity rule. Again, consider the following proof, using
the intuitionist absurdity rule.
Claim: ((P ˄ Q) ˄ ¬P) |- R
Proof:
1. ((P ˄ Q) ˄ ¬P)
2. (P ˄ Q)    1 ˄ elimination
3. P          2 ˄ elimination
4. ¬P         1 ˄ elimination
5. ⊥          3,4 ¬ elimination
6. R          5 ⊥ rule
Truth table semantics show that ((P ˄ Q) ˄ ¬P) |= R. So we need the absurdity rule if we want
to have a complete deductive system.
In one sense, though, this just pushes the question back. Might the weirdness of the
intuitionist absurdity rule show that there is something wrong with truth table semantics?
Defenders of paraconsistent logics think that this is the case. To see why this is compelling, note
that everyone in the world has inconsistent beliefs about various things. But it would be weird
then to conclude that everyone is therefore committed to the truth of every sentence (whether
that sentence is indeed true or false).
Unfortunately, paraconsistent logics come with a heavy price, for it is very difficult to see how
one can be committed to disjunctive syllogism without being committed to something like the
intuitionist absurdity rule. Disjunctive syllogism can be represented in the following manner:
m. P ˅ Q
n. ¬P
-------------
   Q          m,n disjunctive syllogism
This says that if P or Q is true and it is not the case that P, then it follows that Q.
Some logic systems make disjunctive syllogism a basic rule of inference, like one of our
introduction or elimination rules. In our system it is a derived rule of inference (we will show
how in a few minutes). But in either case, if disjunctive syllogism is valid, then anything seems
to follow from a contradiction. Consider the following proof.
1. P
2. ¬P
3. P ˅ Q      1 ˅ introduction
4. Q          2,3 disjunctive syllogism
So if you want to deny the intuitionist absurdity rule, your system must restrict disjunctive
syllogism in some way. Note that disjunctive syllogism is not provable in Minimal Logic. To
derive it in our system you actually have to use the intuitionist absurdity rule.
Claim: (P ˅ Q), ¬P |- Q
Proof:
1. (P ˅ Q)
2. ¬P
3. | P        assumption for ˅ elimination
4. | ⊥        2,3 ¬ elimination
5. | Q        4 ⊥ rule
6. | Q        assumption for ˅ elimination
7. | Q        6 reit.
8. Q          1, 3-5, 6-7 ˅ elimination
Most interesting uses of the absurdity rule are like this, where the absurdity constant is blocking
out various possibilities.
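Both halves of this discussion, disjunctive syllogism and explosion, can be confirmed against the truth-table definition of entailment. A Python sketch (the helper name `entails` is our own; formulas are Python functions, and premises with no jointly satisfying row entail anything vacuously):

```python
from itertools import product

def entails(premises, conclusion, n):
    """Gamma |= phi: the conclusion is true on every row where all premises are true."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(f(*row) for f in premises))

# Disjunctive syllogism: {P v Q, ¬P} |= Q
print(entails([lambda p, q: p or q, lambda p, q: not p],
              lambda p, q: q, 2))   # True

# Explosion: {P, ¬P} entails anything, here an unrelated Q
# (no row makes both premises true, so the check holds vacuously).
print(entails([lambda p, q: p, lambda p, q: not p],
              lambda p, q: q, 2))   # True
```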
One final thing to be very careful of. If you are in a subproof where the assumption has been
assumed for negation introduction, do not use the intuitionist absurdity rule when you get to
absurdity. For the negation introduction rule tells you that when you get absurdity you must
immediately end the subproof and conclude with the negation of the assumption.
Homework 10
1. (P → (Q ˄ ¬Q)), P |- R
2. (((P ˄ Q) ˄ R) ˄ S), (R → ¬P) |- Z
3. (R ˅ S), (R → P), ¬P |- S
11- Classical Logic
Classical Double Negation Elimination Rule:

n. ¬¬φ
   φ          n DNE
The following proof uses Double Negation Elimination.
Claim: ¬¬(P ˄ R) |- (P ˅ Q)
Proof:
1. ¬¬(P ˄ R)
2. (P ˄ R)    1 DNE
3. P          2 ˄ elimination
4. (P ˅ Q)    3 ˅ introduction
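Truth-table semantics straightforwardly validates DNE: ¬¬A has exactly the truth table of A. A one-loop check in Python:

```python
# Double negation elimination is semantically safe in truth-table terms:
# for each truth value, "not (not A)" agrees with A.
for a in [True, False]:
    print(a, (not (not a)) == a)  # prints True on both rows
```

(The intuitionist complaint, previewed in the note below, is precisely that this semantic justification assumes every sentence is already true or false.)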
[To Be Added: (1) equivalence (mod Intuitionist Logic) of DNE to excluded middle and
classical reductio, (2) something on intuitionist critique of the rule, (3) how intuitionist
disjunction rule is often violated by such proofs.]
Exercises:
1. ¬¬(P → Q), P |- Q
2. ¬¬¬¬(P → S), ¬¬P |- S
Semantics for Predicate Logic
11- Translating First Order Logic into Natural Language
Contemporary standard logical notation for first order logic translates into English in the
following manner.
(∀x) = “for all x,”
(∃x) = “there exists an x such that,” (meaning, there exists at least one x such that)
(¬A) = “It is not the case that, (A)”
(A → B) = “(If A, then B)”
(A ˅ B) = “(A, or B)”
(A ˄ B) = “(A, and B)”
(Rx) = “x has the property R,”
(Rxy) = “x is in the relation R to y”
(Rxyz) = “x, y, and z are in the relation R”
(Rxyzz1) = “x, y, z, and z1 are in the relation R”
Etc. (one can do this for any finite number of variables or constants)
[Note that we have not included translations for sentences involving the identity sign (e.g. “x =
y”) or function symbols (e.g. f(x) = y), which are usually used when formalizing mathematics.
However, it is provable that the above kind of language is just as expressively powerful as
languages with identity and function symbols, so nothing other than convenience is lost.]
Here is how one expresses the Aristotelian syllogistic figures A, E, I, O in this language.

A: All S is P.
(∀x)(Sx → Px) =
“for all x, (if x has the property S, then x has the property P)”

E: No S is P.
(∀x)(Sx → ¬Px) =
“for all x, (if x has the property S, then it is not the case that x has the property P)”

I: Some S is P.
(∃x)(Sx ˄ Px) =
“there exists an x such that, (x has the property S, and x has the property P)”
(another way to say “Some Ss are Ps”),

O: Some S is not P.
(∃x)(Sx ˄ ¬Px) =
“there exists an x such that, (x has the property S, and it is not the case that (x has the property P))”
One might be asking, “O.K. Why not just stick to the simpler Aristotelian way of expressing
these things?” But Frege’s notation allowed a fundamental advance on a problem that had
bedeviled logicians for two thousand years: it allowed for the expression of relations. Consider the inference,
Sam is bigger than Bill.
Everyone who is bigger than Bill is bigger than Fred.
Sam is bigger than Fred.
This is intuitively logically valid in that it is not logically possible for the premises to be true and
the conclusion false. But if we are stuck with Aristotle’s notation, there is no way to express
these sentences such that the logical properties that render the inference valid are exposed. In a
Fregean language such as the above we simply denote them in the following manner.
Bsb
(∀x)(Bxb → Bxf)
Bsf
Thus we can express the key premises in the argument. A system of deductive logic gives
us logical rules such that proofs of inferences such as the above can be checked by a
machine. An example of a proof in such a system is the following.

1. Bsb
2. (∀x)(Bxb → Bxf)
3. Bsb → Bsf        2 ∀ elimination
4. Bsf              1,3 → elimination
Notice how the system uses rules we have already developed for propositional logic. In this manner
the Stoic (propositional) and Aristotelian logical traditions are wed. Of course it took over a
millennium and a half for people like Gottlob Frege, Bertrand Russell, and Charles Peirce to figure
out how to do this.
Mathematics constitutively involves relations that cannot be expressed in Aristotelian logic, but
which can be expressed in Frege’s language. For example, when we say that three is the sum of
two and one, we are asserting a relationship between three numbers; when we say that three is
greater than two we are asserting a relationship between two numbers.
One last thing. Note that in using logic to translate natural language, philosophers and linguists
diverge in one small way. Philosophers will usually let the letters such as “P” and “S” stand for
natural language words via a translation manual, while linguists will often use natural language
words themselves. For example, if a philosopher wanted to represent the argument, “Every
person is a mortal. Socrates is a person. Therefore, Socrates is a mortal,” she would first translate
“person,” “mortal,” and “Socrates” into symbols of the predicate logic in this manner:
Let “person” translate into “P,”
Let “mortal” translate into “M,” and
Let “Socrates” translate into “s.”
Then the argument can be translated into this.
(∀x)(Px → Mx)
Ps
Ms
In Linguistics literature you are much more likely to see something like this:

(∀x)(person'(x) → mortal'(x))
person'(s)
mortal'(s).

Here “Socrates” has been translated to avoid confusion, but “person” and “mortal” are only
translated by putting the prime after them.
In reading these, one can slavishly follow the kind of translation given at the beginning of this
handout, for example, reading “(∀x)(person'(x) → mortal'(x))” as “for all x, (if x has the
property person' then x has the property mortal').” This is easy to do, as you only need to cut
and paste from the above translation suggestions. For example, ∀x¬∃y(Oxy ˄ ¬Yyx) can be
translated step by step.

1 For all x, ¬∃y(Oxy ˄ ¬Yyx)
2 For all x, it is not the case that ∃y(Oxy ˄ ¬Yyx)
3 For all x, it is not the case that there exists a y such that (Oxy ˄ ¬Yyx)
4 For all x, it is not the case that there exists a y such that (x is in the relation O to y ˄ ¬Yyx)
5 For all x, it is not the case that there exists a y such that (x is in the relation O to y and ¬Yyx)
6 For all x, it is not the case that there exists a y such that (x is in the relation O to y and it is not the case that Yyx)
7 For all x, it is not the case that there exists a y such that (x is in the relation O to y and it is not the case that y is in the relation Y to x)
(1) Note that the method of translation requires us to keep the parentheses in step 4! This is
dictated by the translation manual that starts this handout. Always leave in the parentheses so
that your translation does not become logically ambiguous! (2) Note how we are underlining the
property term. This is a good thing to do because a mechanical translation according to the above
will usually lead to forced English grammar, and the underlining lets you know that you are
referring to the property denoted by the word. For example, assume that “Oxy” = “x is older than
y” and “Yxy” = “x is younger than y”; then we can continue the translation.
8 For all x, it is not the case that there exists a y such that (x is in the relation is older than
to y and it is not the case that y is in the relation is younger than to x)
From this we can use common sense to finally get the following.
9 For all x, it is not the case that there exists a y such that (x is older than y and it is not the
case that y is younger than x)
In this manner, when you get more comfortable with the language, it is easy to shorten “has
the property” in the above to something closer to natural language. Our earlier sentence “for all
x, (if x has the property person' then x has the property mortal')” becomes “for all x, (if x is a
person, then x is a mortal).” This isn’t mechanical though, since logical predicates can be translations
of many nouns, adjectives, and verbs. A quick rule of thumb: if the predicate is a mass noun,
change “has the property” to “is”; if the predicate is a singular count noun, change “has the
property” to “is a”; if the predicate is an intransitive verb, then simply delete “has the property.”
Predicates of more than one place get more complicated.
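The “slavish” step-by-step translation described above is mechanical enough to automate. Here is a Python sketch over a nested-tuple representation of formulas (the representation and the templates are illustrative choices, not the handout's notation); run on the formula from the derivation above, it reproduces step 7.

```python
# Mechanical translation into stilted English, sketched for illustration.
TEMPLATES = {"not": "it is not the case that {}",
             "and": "({} and {})", "->": "(if {}, then {})"}

def translate(phi):
    if phi[0] == "all":
        return "for all {}, {}".format(phi[1], translate(phi[2]))
    if phi[0] == "some":
        return "there exists a {} such that {}".format(phi[1], translate(phi[2]))
    if phi[0] in TEMPLATES:
        return TEMPLATES[phi[0]].format(*map(translate, phi[1:]))
    if len(phi) == 3:  # two-place predicate, e.g. ("O", "x", "y")
        return "{} is in the relation {} to {}".format(phi[1], phi[0], phi[2])
    return "{} has the property {}".format(phi[1], phi[0])

phi = ("all", "x", ("not", ("some", "y", ("and", ("O", "x", "y"),
                                          ("not", ("Y", "y", "x"))))))
print(translate(phi))
```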
Homework 11
Where “c” = “Charlie” (and by “=” here we mean “translates into”), “m” = “Mary,” “d” =
“Dan,” “Mx” = “x is male,” “Fx” = “x is female,” “Hx” = “x is human,” “Cx” = “x is canine,”
“Txy” = “x is taller than y,” “Oxy” = “x is older than y,” “Yxy” = “x is younger than y,” “Sxy” =
“x is the same age as y,” translate the following sentences of first order logic into English. If you
want to stop with a sentence that mentions the properties (as in line 8 in the above derivation),
that is fine. If you want to go further (as in line 9) that’s fine too.
1 Mc
2 Fm
3 ¬Fc
4 Hc ˄ Mm
5 Hd ˅ Hm
6 ∀x(Hx → Cx)
7 ∀x(Hx → ¬Cx)
8 ∃x(Hx ˄ Cx)
9 ∃x(Hx ˄ ¬Cx)
10 Omc ˄ Odc
11 ∀x¬∃y(Oxy ˄ ¬Yyx)
12 ∀x∀y(Oxy → ¬Oyx)
13 ∀x∀y((¬Oxy ˄ ¬Oyx) → Sxy)
14 ∀x∀y(¬(Oxy ˅ Yxy) → Sxy)
15 ¬∀x(Hx → Cx)
16 ¬∃x(Hx ˄ Cx)
17 ∃x¬(Hx ˄ Cx)
18 ∃x(Txc)
19 ∀x(Tcx ˅ Sxc)
20 ∀x∃y(Sxy)
21 ∃x∀y(Yxy ˅ Sxy)
12- A Fregean functional semantics, part 1: functions, names, and predicates
In the first handout we gave rough translations of the parts of speech of first order predicate logic
in the following manner.
(∀x) = “for all x,”
(∃x) = “there exists an x such that,” (meaning, there exists at least one x such that)
(¬A) = “It is not the case that, (A)”
(A → B) = “(If A, then B)”
(A ˅ B) = “(A, or B)”
(A ˄ B) = “(A, and B)”
(Rx) = “x has the property R,”
(Rxy) = “x is in the relation R to y”
(Rxyz) = “x, y, and z are in the relation R”
(Rxyzz1) = “x, y, z, and z1 are in the relation R”
Etc. (one can do this for any finite number of variables or constants)
We did not rigorously define a grammar for such a language though.
Why do such a thing? Frege developed his language in the hope that one could formalize
mathematical reasoning so that each step in a mathematical proof is shown to be clearly
unproblematic. Then the hope was that difficult mathematical theorems could be shown to be
provable from clear premises that are obviously true. To the extent that one can do these two
things (provide a logic where each step in a proof is clearly unproblematic and also specify
unproblematic premises), then one will have shown that the mathematical theory in question
is not philosophically problematic.
But this will only work if the grammar for the logic can itself be characterized such that the
question of whether a sentence is a sentence of the logical language is itself clearly
unproblematic. The limiting case of “unproblematic” here (and in the case of a system of
proof) is if we can specify dumb mechanical procedures that determine whether a string of
symbols is a sentence of the language or not (or a proper step in a proof, for that matter). This
is always possible for a logical language if one can provide a recursive specification of the set
of sentences in the language.
Moreover, in order to come up with a precise notion of proof, it is absolutely essential that
certain forms of ambiguity are prohibited. For example, consider again the following English
language sentence.
I’m going to the store, and she’s buying smokes, or I’m watching T.V.
This sentence is ambiguous. If we use parentheses to disambiguate, on the one hand it could
mean “(I’m going to the store) and (she’s buying smokes or I’m watching T.V).” From this
reading of the sentence we would know that I’m going to the store. If we parsed the sentence as
“(I’m going to the store and she’s buying smokes) or (I’m watching T.V),” we wouldn’t know
that I’m going to the store.
Logical syntax prohibits all such ambiguities.
Vocabulary:

All capital English letters (A, B, C, . . ., Z), as well as all numerically subscripted
capital Zs (Z1, Z2, Z3, . . .), are predicates, each one of one or more places.

The lower case English letters (a, b, c, d), as well as all numerically subscripted
lower case ds (d1, d2, d3, . . .), are proper names.

The lower case English letters (x, y, z), as well as numerically subscripted lower case
zs (z1, z2, z3, . . .), are variables.

¬, →, ˅, and ˄ are propositional connectives (respectively: negation, conditional,
disjunction, and conjunction).

∀ and ∃ are quantifiers (respectively: universal and existential).

( and ) are parentheses.
Formation rules:

(1) If Φ is an n-place predicate and every one of α1, . . ., αn is either a variable or a proper name, then Φα1. . .αn is a sentence of L.
(2) If Γ is a sentence of L, then ¬Γ is a sentence of L.
(3) If Γ and Ψ are sentences of L, then (Γ ˄ Ψ) is a sentence of L.
(4) If Γ and Ψ are sentences of L, then (Γ ˅ Ψ) is a sentence of L.
(5) If Γ and Ψ are sentences of L, then (Γ → Ψ) is a sentence of L.
(6) If Γ is a wff and α is a variable, then ∀α(Γ) is a sentence of L.
(7) If Γ is a wff and α is a variable, then ∃α(Γ) is a sentence of L.
(8) All and only the sentences of L are generated by the above seven rules.
If we were to spend the time checking, we would see that all of the logical sentences given in the
previous handout can be constructed from the above vocabulary and rules.
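The earlier point about “dumb mechanical procedures” can be made concrete: a recursive specification of the sentences yields a recursive checker. Here is a Python sketch over a nested-tuple representation of formulas (the representation and the sample predicates are illustrative assumptions, not the official definition of L):

```python
# Sketch of a mechanical, recursive well-formedness check for a tiny
# fragment; predicate names and arities here are illustrative.
ARITY = {"M": 1, "F": 1, "T": 2}      # predicates and their places
NAMES = {"a", "b", "c", "d"}
VARS = {"x", "y", "z"}

def is_sentence(phi):
    if phi[0] in ARITY:                              # rule (1): atomic
        args = phi[1:]
        return len(args) == ARITY[phi[0]] and all(t in NAMES | VARS for t in args)
    if phi[0] == "not":                              # rule (2)
        return len(phi) == 2 and is_sentence(phi[1])
    if phi[0] in ("and", "or", "->"):                # rules (3)-(5)
        return len(phi) == 3 and is_sentence(phi[1]) and is_sentence(phi[2])
    if phi[0] in ("all", "some"):                    # rules (6)-(7)
        return len(phi) == 3 and phi[1] in VARS and is_sentence(phi[2])
    return False                                     # rule (8): nothing else

print(is_sentence(("all", "x", ("->", ("M", "x"), ("F", "x")))))  # True
print(is_sentence(("and", ("M", "c"))))                           # False
```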
The logical tradition that stems from Frege and others is philosophically interesting for a number
of reasons. First and foremost, philosophers have always been interested in codifying valid
argumentation. This is because philosophy itself consists in using reason to produce arguments
concerning the things that centrally concern us as humans. However, the Fregean revolution is of
special interest to philosophers of language since Frege’s language was the first to be provably
compositional, in that (when understood via Frege’s semantic theory) it satisfies Alexander
Miller’s
Thesis 2: The semantic value of a complex expression is determined by the semantic
value of its parts.6
For the Fregean, “semantic value” of an expression is defined as “that feature of it which
determines whether sentences in which it occurs are true or false.” When put together with
Thesis 2, this means that the truth or falsity of a sentence is a function of properties of parts of
that sentence.
6 Miller, 11.
Part of Frege’s genius is that his semantic theory for his logic showed in detail how this is the
case. Again though, why does this matter? From the perspective of the philosophy of
mathematics and logic, it matters because logically valid proofs must show how it is not logically
possible for the premises to be true and the conclusions false, and this ends up requiring a theory
of how the logical parts of speech (the logical operators above) contribute to the truth or falsity
of sentences in which they occur.
As noted earlier, from a broader perspective, compositionality has even more interest. In
Descartes’ Discourse on Method7 the actual argument given for the conclusion that the mind/soul
is different from the physical brain involves language. Descartes argues that the brain behaves
purely mechanistically, like a complicated machine. But since human language is not something
a machine could do, this proves that the human mind is not purely mechanistic. Therefore the
human mind is more than just the brain. And the reason Descartes thought that human language
was non-mechanical was because of the massive creativity involved in language use. Most
sentences we understand and speak have never been uttered before, and will never be uttered
again. Yet, for the most part, without effort we manage to use and understand language.
Descartes thought it impossible that a piece of biological clockwork (the brain) could do
anything like this.8
One way to respond to Descartes is to argue that the linguistic creativity he noticed is not quite as
creative as he thought. In particular, if there are a finite number of words, a finite number of
syntactic principles by which those words and the resulting phrases can be combined, and a finite
number of semantic principles governing how meaning of phrases and sentences is a function of
those words and how they are combined, then it seems more plausible that a machine-like entity
(assuming with Descartes that the brain is machine-like) could master language. But Frege’s
syntactic requirements for his logical language, combined with his form of compositionality (the
way in which Thesis 2 is worked out in his semantic theory), actually exhibit in detail a
language with a finite number of words, a finite number of syntactic principles by which those
words and the resulting phrases can be combined, and a finite number of semantic principles
governing how meaning of phrases and sentences is a function of those words and how they are
combined.
So if one could plausibly argue that human languages are similar enough to languages like
Frege’s, then one would have a response to Descartes. This is in part why so much contemporary
philosophy of language has been a footnote to Frege.
Here we want to see how Frege’s language9 is compositional. Then we will look at Frege’s own
argument that the semantic account for his language as a language of mathematical proof is
insufficient.
7 [citation needed] Modern philosophy of language arguably begins with Descartes’ argument.
8 Incidentally, this was the reason Cartesians thought that animals did not have minds. Since language was the
reason minds are different from mere brains, and animals did not possess language, it followed that animals did not
possess minds.
9 [should note with reference to Frege’s Begriffsschrift earlier] It is important to note that the actual logical
symbols Frege used are completely different from the symbols that have become standard in modern logic. But in
the other important respects, the language is the same.
In mathematics, when we talk about some things being determined by other things, we usually
are talking about functions. So Thesis 2 could have said something like, “The semantic value of
a complex expression is a function of the semantic value of its parts.” Unfortunately, at the time
Frege was writing, mathematics did not have a well worked out theory of functions, so we have
to reconstruct his views a little bit here.
In particular, today mathematical functions can be understood either extensionally or
intensionally (we will see in a few days how this very distinction follows from Frege’s insights).
Intensionally, a function is a rule or procedure that takes you from one ordered group of values to
another value. So consider the following two functions:
f(x) = (x + 4) – 2,
g(x) = (x + 6) – 4.

Intuitively, they are different functions because they give different procedures for computing
their values. So:

f(5) = (5 + 4) – 2 = 9 – 2 = 7
g(5) = (5 + 6) – 4 = 11 – 4 = 7
So we can say that by the intensional notion of functions, f and g are different functions.
However, there is clearly a sense in which they are the same function. For any numerical input,
f and g return the same values. By the extensional notion of functions, when this happens we say
that the two functions are actually one and the same, that we have one function characterized
differently. When we think this way, we characterize functions just in terms of the set of inputs
and outputs; for one-place functions like f and g, this will be a set of ordered pairs. If f and g are
defined on the natural numbers ({0, 1, 2, 3, 4, . . .}) then we have:

f = g = {<0,2>,<1,3>,<2,4>,<3,5>. . . .}
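The point is easy to check mechanically. A Python sketch (tabulating over an initial segment of the naturals, since we cannot tabulate an infinite set):

```python
# f and g are different procedures (intensions) but determine the same
# input-output pairs (extension) over any segment we tabulate.
def f(x): return (x + 4) - 2
def g(x): return (x + 6) - 4

f_ext = {(x, f(x)) for x in range(100)}  # {<0,2>, <1,3>, <2,4>, ...}
g_ext = {(x, g(x)) for x in range(100)}

print(f_ext == g_ext)  # True: extensionally, f = g
```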
So extensionally, functions can be understood as just a set of inputs and outputs. Frege’s theory
of semantic value is extensional in this sense, which is why we have:
Thesis 6: Functions are extensional: if functions f and g give the same extension, then f =
g.10
10 Miller, 16. Miller notes that this isn’t quite right as a sketch of Frege’s views, since understanding functions this
way is post-Fregean. As I understand Frege, he explicitly only thought of functions intensionally. This being said,
once we understand them extensionally we can understand the simplicity and systematicity of Frege’s semantic
theory much more clearly.
When modern mathematicians are being clear they almost always treat functions extensionally,
referring explicitly to rules or procedures when intensional functions are being compared. For
Frege, what is amazing is that his basic insights into the semantics of his language can be
presented entirely in terms of extensional functions. Again, the basic compositionality claim
(Thesis 2) can be understood as the claim that the truth or falsity of a sentence is entirely a
function of the semantic value of the parts of that sentence, and (for the language presented
above) one can present Frege’s theory entirely in terms of our modern extensional notion of
function. Thus, we get:
Thesis 7: The semantic value of a predicate is a first-level function from objects to
truth-values; the semantic value of a sentential connective is a first-level function
from truth-values to truth-values; the semantic value of a quantifier is a second-level
function from concepts to truth-values.
Function, function, function. Pretty cool. Of course this only works if one can actually specify
the functions in question. But for the above language, we can.
To do this we will consider the fragment of the above language that we gave in the previous
handout. That is, our language will be just like the one defined above, except that the only proper
names are c, m, and d, and the only predicates are M, F, H, C, T, and O. For simplicity, we will
also stipulate that M, F, H, and C are one-place predicates, in that they can bind only one proper
name or variable at a time (e.g. Mc), and that T and O are two-place predicates (e.g. Tdc). Here
we will be able to show how, if one knows the semantic values of the names and predicates11,
one can determine whether sentences in which they occur are true or false. First we have:
Thesis 4: The semantic value of a proper name is the object which it refers to or
stands for.12
So let’s assume that the semantic value of c is Charlie, m is Mary, and d is Dan. We can write
this functionally.
sv(c) = Charlie
sv(m) = Mary
sv(d) = Dan
Again, we are pretending here that Charlie, Mary, and Dan are actual objects in the world. The
semantic values of the names c, m, and d are the things to which those names refer.
Then, the semantic values of predicates are functions from objects to truth values. Again, since
11 Footnote to self: interesting reason “course of values” for reference of predicates undermines compositionality if
you don’t have multiple domains; with the extensional function reading you get the anti-extension too and hence the
whole domain. Frege had a fixed domain so the difference didn’t matter. Some theorems here?
12 Miller, 12.
we understand these extensionally, we can treat one place predicates as ordered pairs. For
simplicity’s sake let’s assume the concrete objects in the universe just consist in Charlie, Mary,
Dan, and a short, canine hermaphrodite we can call Frank, but who has no name in the language
(surely most things in the universe don’t have names in any language).
sv(M) = {<Charlie, True>,<Dan, True>,<Frank, True>,<Mary, False>}
sv(F) = {<Charlie, False>,<Dan, False>,<Frank, True>,<Mary, True>}
sv(H) = {<Charlie, False>,<Dan, True>,<Frank, False>,<Mary, True>}
sv(C) = {<Charlie, True>,<Dan, False>,<Frank, True>,<Mary, False>}
From this one can determine the semantic value of an atomic formula (a sentence with no
quantifiers or propositional connectives) by applying the semantic value of the predicate (which is a
function) to the semantic value of the names. That is, sv(Md) is equal to sv(M)(sv(d)). Since the
semantic value of d is Dan, and the semantic value of M takes Dan to True, we know that the
semantic value of Md is True. We can write this as a derivation in the following manner.

1. sv(Md) = sv(M)(sv(d))        by Frege’s theory
2. sv(M)(sv(d)) = sv(M)Dan      by the model
3. sv(M)Dan = True              by the model
Likewise, we can derive the semantic value of Hc as follows.

1. sv(Hc) = sv(H)(sv(c))        by Frege’s theory
2. sv(H)(sv(c)) = sv(H)Charlie  by the model
3. sv(H)Charlie = False         by the model
-------------
Exercise 12, number 1
Write out the above kind of derivations for Mc, Fd, Hf, and Cm. Note that each one will
have three steps, of the form,

1. sv(Φα) = sv(Φ)(sv(α))        by Frege’s theory
2. sv(Φ)(sv(α)) = sv(Φ)Γ        by the model
3. sv(Φ)Γ = Σ                   by the model,

where Φ is a predicate of the formal language, α is a name of the formal language, Γ
names one of the objects denoted by the names, and Σ is True or False.
-------------
All of the mathematical functions discussed above have been one-place functions, that is,
functions that take one argument and return a value. However, most interesting mathematical
functions take more than one argument. For example, the addition and multiplication functions
take two arguments and return values. And just as extensional one-place functions can be
understood in terms of ordered pairs, two-place functions can be understood in terms of ordered
triples (e.g. + = {<0,0,0>, <0,1,1>, <1,1,2>, <1,0,1>, <0,2,2>, <1,2,3>, <2,2,4>, <2,1,3>,
<2,0,3>, <0,3,3>. . .}). In fact any n-place extensional function (function that takes n
inputs) can be represented as a set of n+1-tuples in this manner.
From this we can see how n-place predicates are handled in the Fregean way. Let’s say that “T”
intuitively means “is taller than” and that Dan is taller than Mary, who is taller than Charlie, who is
taller than Frank. Then, the semantic value for T is the following.

sv(T) = {<Charlie, Charlie, False>,<Charlie, Dan, False>,<Charlie, Frank,
True>,<Charlie, Mary, False>,<Dan, Charlie, True>,<Dan, Dan, False>,<Dan,
Frank, True>,<Dan, Mary, True>,<Frank, Charlie, False>,<Frank, Dan,
False>,<Frank, Frank, False>,<Frank, Mary, False>,<Mary, Charlie, True>,<Mary,
Dan, False>,<Mary, Frank, True>,<Mary, Mary, False>}
Let’s say that “O” intuitively means “is older than” and that Mary is older than Dan, who is older
than Frank, who is older than Charlie. Then, the semantic value for O is the following.

sv(O) = {<Charlie, Charlie, False>,<Charlie, Dan, False>,<Charlie, Frank,
False>,<Charlie, Mary, False>,<Dan, Charlie, True>,<Dan, Dan, False>,<Dan,
Frank, True>,<Dan, Mary, False>,<Frank, Charlie, True>,<Frank, Dan,
False>,<Frank, Frank, False>,<Frank, Mary, False>,<Mary, Charlie, True>,<Mary,
Dan, True>,<Mary, Frank, True>,<Mary, Mary, False>}
But then we can do derivations just like we did above. Consider.

1. sv(Ocd) = sv(O)(sv(c),sv(d))             by Frege’s theory
2. sv(O)(sv(c),sv(d)) = sv(O)(Charlie,Dan)  by the model
3. sv(O)(Charlie,Dan) = False               by the model
Thus, since sv(Φαβ) is determined by applying the semantic value of Φ to the ordered pair
containing α and β’s semantic values, every such derivation will look like the following,

1. sv(Φαβ) = sv(Φ)(sv(α),sv(β))     by Frege’s theory
2. sv(Φ)(sv(α),sv(β)) = sv(Φ)(Σ,Ψ)  by the model
3. sv(Φ)(Σ,Ψ) = Γ                   by the model,

where Φ is a two-place predicate of the language, α and β are names of the language, Σ is equal
to the semantic value of α, Ψ is equal to the semantic value of β, and Γ is either True or False.
I realize that sv(Φαβ) = sv(Φ)(sv(α),sv(β)) is a mouthful. Think about addition again. When
we represent addition as a set of ordered three-tuples (+ = {<0,0,0>, <0,1,1>, <1,1,2>, <1,0,1>,
<0,2,2>, <1,2,3>, <2,2,4>, <2,1,3>, <2,0,3>, <0,3,3>. . .}) what we are saying is that + applied
to any of the first two members uniquely yields the third member. So the fact that <0,1,1> is a
member of the set consisting of addition is the fact that +(<0,1>) = 1. Likewise, the fact that
<Charlie, Dan, False> is a member of sv(O) is the fact that sv(O)(Charlie, Dan) = False. That’s all
we’re doing in the derivations above.
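In the Python sketch, the move to two-place predicates just means keying the dictionary by ordered pairs (again illustrative; only a fragment of sv(O) is written out here):

```python
sv_names = {"c": "Charlie", "m": "Mary", "d": "Dan"}
# A fragment of sv(O), keyed by ordered pairs of objects.
sv_O = {
    ("Charlie", "Dan"): False, ("Dan", "Charlie"): True,
    ("Mary", "Dan"): True, ("Dan", "Mary"): False,
}

def sv_atomic2(pred, a, b):
    # sv(Phi alpha beta) = sv(Phi)(sv(alpha), sv(beta))
    return pred[(sv_names[a], sv_names[b])]

print(sv_atomic2(sv_O, "c", "d"))  # False, as in the derivation for Ocd
```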
-------------
Exercise 12, number 2
Use the above type of derivations and the model above to determine the truth or falsity
of the English language sentences “Charlie is taller than Mary,” “Dan is taller than
Frank,” “Frank is older than Charlie,” and “Mary is older than Dan.” Your derivations
must be of the same form as the above.
-------------
13 - A Fregean functional semantics, part 2: propositional logic
Here we will go part of the way towards fully understanding Miller’s
Thesis 7: The semantic value of a predicate is a first-level function from objects to truth-values;
the semantic value of a sentential connective is a first-level function from truth-values
to truth-values; the semantic value of a quantifier is a second-level function from
concepts to truth-values.
The first part of this we have from handout 12. The second part we can formalize in a similar
manner. Since the semantic values of the sentential connectives are functions (understood
extensionally), we can represent them as sets of the relevant kind of ordered objects.

Definition of semantic values of propositional connectives:

sv(¬) = {<True,False>,<False,True>}
sv(˄) = {<True,True,True>,<True,False,False>,<False,True,False>,<False,False,False>}
sv(˅) = {<True,True,True>,<True,False,True>,<False,True,True>,<False,False,False>}
sv(→) = {<True,True,True>,<True,False,False>,<False,True,True>,<False,False,True>}
Then, the semantic clauses can again be given in straightforward function-application fashion.
(1) Where Φ is a predicate, and α1 through αn are proper names, then
sv(Φα1. . .αn) = sv(Φ)(sv(α1), . . ., sv(αn)),
(2) Where Γ is a sentence, sv(¬Γ) = sv(¬)(sv(Γ))
(3) Where Γ and Ψ are sentences, sv(Γ ˄ Ψ) = sv(˄)(sv(Γ),sv(Ψ))
(4) Where Γ and Ψ are sentences, sv(Γ ˅ Ψ) = sv(˅)(sv(Γ),sv(Ψ))
(5) Where Γ and Ψ are sentences, sv(Γ → Ψ) = sv(→)(sv(Γ),sv(Ψ))
To see how this works, we will again use the language and model from the previous handout.
Model:
sv(c) = Charlie
sv(m) = Mary
sv(d) = Dan
sv(M) = {<Charlie, True>,<Dan, True>,<Frank, True>,<Mary, False>}
sv(F) = {<Charlie, False>,<Dan, False>,<Frank, True>,<Mary, True>}
sv(H) = {<Charlie, False>,<Dan, True>,<Frank, False>,<Mary, True>}
sv(C) = {<Charlie, True>,<Dan, False>,<Frank, True>,<Mary, False>}
sv(T) = {<Charlie, Charlie, False>,<Charlie, Dan, False>,<Charlie, Frank, True>,<Charlie,
Mary, False>,<Dan, Charlie, True>,<Dan, Dan, False>,<Dan, Frank, True>,<Dan, Mary,
True>,<Frank, Charlie, False>,<Frank, Dan, False>,<Frank, Frank, False>,<Frank, Mary,
False>,<Mary, Charlie, True>,<Mary, Dan, False>,<Mary, Frank, True>,<Mary,
Mary, False>}
sv(O) = {<Charlie, Charlie, False>,<Charlie, Dan, False>,<Charlie, Frank,
False>,<Charlie, Mary, False>,<Dan, Charlie, True>,<Dan, Dan, False>,<Dan, Frank,
True>,<Dan, Mary, False>,<Frank, Charlie, True>,<Frank, Dan, False>,<Frank, Frank,
False>,<Frank, Mary, False>,<Mary, Charlie, True>,<Mary, Dan, True>,<Mary, Frank,
True>,<Mary, Mary, False>}
Then, any sentence of propositional logic with these predicates and proper names will have a
unique truth value just as a function of the truth values of the model. We can show this by
derivations, as in the previous handout. Instead of “Frege’s Theory” we will cite the relevant
clause in the definition of the above functions.
sv(Mc)
= sv()(sv(Mc))
= sv()(sv(M)(sv(c))
= sv()(sv(M)(Charlie))
= sv()(True)
= False
(2)
(1)
model
model
definition of sv()
Each such derivation follows the same pattern! We first follow the semantic clauses to unpack
the functions, then appeal to the model to get the truth values of the atomic sentences, and finally
appeal to the definitions of the functions that are the semantic values of the connectives.
sv(Ocd ˅ Cc)
= sv(˅)(sv(Ocd),sv(Cc))                     (4)
= sv(˅)(sv(O)(sv(c),sv(d)),sv(C)(sv(c)))    (1)
= sv(˅)(sv(O)(Charlie,Dan),sv(C)(Charlie))  model
= sv(˅)(False,True)                         model
= True                                      definition of sv(˅)
sv((Hc ˄ Mm) ˅ ¬Fc)
= sv(˅)(sv(Hc ˄ Mm),sv(¬Fc))                                      (4)
= sv(˅)(sv(˄)(sv(Hc),sv(Mm)),sv(¬Fc))                             (3)
= sv(˅)(sv(˄)(sv(Hc),sv(Mm)),sv(¬)(sv(Fc)))                       (2)
= sv(˅)(sv(˄)(sv(H)(sv(c)),sv(M)(sv(m))),sv(¬)(sv(F)(sv(c))))     (1)
= sv(˅)(sv(˄)(sv(H)(Charlie),sv(M)(Mary)),sv(¬)(sv(F)(Charlie)))  model
= sv(˅)(sv(˄)(False,False),sv(¬)(False))                          model
= sv(˅)(sv(˄)(False,False),True)                                  definition of sv(¬)
= sv(˅)(False,True)                                               definition of sv(˄)
= True                                                            definition of sv(˅)
Exercise 13
Using the language and model above, derive the truth or falsity of the following sentences.
1. ¬Fd
2. Tcm
3. Hf ˄ Cm
4. Tdf ˅ Ofc
5. Omd → Ocd
6. Cc ˄ (Hm ˅ Hd)
14- A Fregean functional semantics, part 3: quantifiers
To Be Added. Use treatment in Dowty et al., Introduction to Montague Semantics.
15- Entailment
To Be Added.
Proof Theory for Predicate Logic
16- The Universal Quantifier
The Universal Quantifier has a very simple elimination rule, which we can represent thus:
m.xP[x]
-----------P[a]
m elim.
Here “a” can be any individual constant in the language. As individual constants we’ll use the
first five letters of the English alphabet (i.e. a, b, c, d, e) and subscript e’s just in case we need
more (i.e. e1, e2, e3. . .). For the variables bound by the quantifiers we will use: x, y, z, z1, z2, z3. . .
In any case, the rule basically says that if something is true of every x (or y or z or z1 or z2, etc.
depending upon which variable is bound) then it is true of any particular thing for which you
have a name.
Here’s an example:
Claim:x(Px → Qx), Pa |- Qa
Proof:
1. x(Px → Qx)
2. Pa
3. Pa → Qa
4. Qa
1 elim.
2,3 → elim.
Very straightforward. However, there is one proviso. Many sentences will have more than one quantifier, and you can only apply the elimination rule to the outermost quantifier. Here is a correct one:
Claim: ∀x∀y(Px ˄ Qy) |- ∀y(Pb ˄ Qy)
Proof:
1. ∀x∀y(Px ˄ Qy)
2. ∀y(Pb ˄ Qy)        1 ∀ elim.
But we could not have eliminated the quantifier before the y at line 1. The only way to eliminate the ∀y would be from line 2, where its quantification is outermost. Thus, we can continue the proof in the following manner.
3. (Pb ˄ Qc)          2 ∀ elim.
The introduction rule for the universal quantifier is more interesting. We can represent it thus:
m. | [a]              for ∀ intro.
   |
n. | P[a]
-----------
∀xP[x]                m-n ∀ intro.
[Note: “a” must be arbitrary in that it cannot occur outside of lines m-n!]
This rule says that if you are trying to prove a universally quantified statement you must prove that the claim holds for an arbitrary object. The way we do this is to introduce a new individual constant at the top of the subproof and then prove that the claim in question is true of the bearer of that arbitrary name. But since the name is arbitrary, it could name anything. But then it follows that the claim must be true of everything. Consider the following claim:
Claim:x(Px → Qx),x(Qx → Rx) |- x(Px → Rx)
We start our proof in the normal way, by writing the premises at the top and the conclusion at the
bottom.
Proof:
1. ∀x(Px → Qx)
2. ∀x(Qx → Rx)
∀x(Px → Rx)
Then we have a choice of whether to go ahead and eliminate on the premises or start the subproof for the introduction. Here is an essential point. Since the arbitrary name cannot occur outside of the subproof, you must start the subproof on line 3! Thus your first step should be like this:
Proof:
1. ∀x(Px → Qx)
2. ∀x(Qx → Rx)
3. | [b]              for ∀ intro.
   |
   |
   | Pb → Rb
∀x(Px → Rx)           3-? ∀ intro.
At this point things are simple. You can either go ahead and eliminate on lines 1 and 2, using the
name “b,” or you can start your conditional introduction. Let’s do the latter.
Proof:
1. ∀x(Px → Qx)
2. ∀x(Qx → Rx)
3. | [b]              for ∀ intro.
4. | | Pb             a for → intro.
   | |
   | | Rb
   | Pb → Rb          4-? → intro.
∀x(Px → Rx)           3-? ∀ intro.
And then you just do the two eliminations on the first two lines to get what you need.
Proof:
1. ∀x(Px → Qx)
2. ∀x(Qx → Rx)
3. | [b]              for ∀ intro.
4. | | Pb             a for → intro.
5. | | (Pb → Qb)      1 ∀ elim.
6. | | (Qb → Rb)      2 ∀ elim.
   | |
   | | Rb
   | Pb → Rb          4-? → intro.
∀x(Px → Rx)           3-? ∀ intro.
And then the problem just reduces to a simple problem in propositional logic:
Proof:
1. ∀x(Px → Qx)
2. ∀x(Qx → Rx)
3. | [b]              for ∀ intro.
4. | | Pb             a for → intro.
5. | | (Pb → Qb)      1 ∀ elim.
6. | | (Qb → Rb)      2 ∀ elim.
7. | | Qb             4,5 → elim.
8. | | Rb             6,7 → elim.
9. | Pb → Rb          4-8 → intro.
10. ∀x(Px → Rx)       3-9 ∀ intro.
The really important thing is to think of universal introduction as analogous to conditional
introduction. When you are trying to prove a conditional you start a subproof with the antecedent
and have to end it with the consequent. When you are trying to prove a universal you start a
subproof by introducing an arbitrary name and then proving that the quantified claim holds of
that arbitrary name.
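The completed derivation can also be checked mechanically. Here is a sketch of the same argument in Lean 4 (the hypothesis names and the generic type α are my own choices):

```lean
-- ∀x (Px → Qx), ∀x (Qx → Rx) ⊢ ∀x (Px → Rx)
example {α : Type} (P Q R : α → Prop)
    (h1 : ∀ x, P x → Q x) (h2 : ∀ x, Q x → R x) :
    ∀ x, P x → R x := by
  intro b               -- the arbitrary-name step: fix an arbitrary b
  intro hPb             -- assume P b, as in the → introduction subproof
  exact h2 b (h1 b hPb) -- two ∀ eliminations followed by two → eliminations
```

Note how `intro b` is exactly the "[b]" step: b is fresh, so the conclusion holds of everything.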
Here’s another proof involving two quantifiers:
Claim: x(Bxb → Lx), x(Bxb) |- y(Ly)
Before proving this, let’s consider what it might mean. Let’s interpret “Bxy” as saying that x is as
big as y, and “Lx” as saying that x is large. Then we can translate the Claim into English as:
Claim: For all x, (if x is as big as b, then x is large). For all x, x is as big as b
|- For all y, y is large.
The premises say (1) that everything that is as big as b is large, and (2) that everything is as big
as b. The conclusion says that everything is large. And our proof system allows us to show how
this works. Remember, since the universal introduction rule requires you to use a name that
cannot occur outside of the subproof, if your conclusion is a universal, you must start by
instantiating the arbitrary name. And since “b” already occurs in the premises, you cannot use it.
So your proof must start like this:
Proof:
1. ∀x(Bxb → Lx)
2. ∀x(Bxb)
3. | [c]              for ∀ intro.
   |
   | Lc
∀y(Ly)                3-? ∀ intro.
Then, just as before, we can do our universal eliminations, using the new name we have
introduced.
Proof:
1. ∀x(Bxb → Lx)
2. ∀x(Bxb)
3. | [c]              for ∀ intro.
4. | (Bcb → Lc)       1 ∀ elim.
5. | Bcb              2 ∀ elim.
   |
   | Lc
∀y(Ly)                3-? ∀ intro.
And now again we can just use propositional logic to connect the top part and bottom part of the
proof.
Proof:
1. ∀x(Bxb → Lx)
2. ∀x(Bxb)
3. | [c]              for ∀ intro.
4. | (Bcb → Lc)       1 ∀ elim.
5. | Bcb              2 ∀ elim.
6. | Lc               4,5 → elim.
7. ∀y(Ly)             3-6 ∀ intro.
Again, if your conclusion is a universally quantified statement, you should start the subproof for
the introduction immediately. In the previous proof, if we had done the universal eliminations
first then we would not have been able to use the name that replaced the variable, because it
would have occurred outside of the subproof for the universal introduction. So you have to
introduce the arbitrary name as soon as possible. The only exception to this is if there is an
existentially quantified statement in the premises, where you might need to introduce an arbitrary
name to eliminate on that, but we will consider that next week.
Let us do one more proof illustrating how negation rules interact with the rules for the quantifier.
Claim: x(Bxb → Lx), y(Ly) |- x(Bxb)
The conclusion is a negation, so we start doing a negation introduction:
Proof:
1. x(Bxb → Lx)
2. y(Ly)
3. | x(Bxb)
a for  intro.
|
|
n. | 
x(Bxb)
3-n  intro.
This is no different from what we would have done if the proof was in propositional logic. The
conclusion is a negation, so we must assume the claim with the negation taken away and prove
absurdity.
And how do we get absurdity? By negation elimination! So just as before we look at our
premises for negated claims that we will be able to do negation elimination on. In this case it is
premise 2. So we know that our proof will look like this:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∀y(Ly)
3. | ∀x(Bxb)          a for ¬ intro.
   |
m. | ∀y(Ly)
n. | ⊥                2,m ¬ elim.
¬∀x(Bxb)              3-n ¬ intro.
Again, this is no different from all of the proofs we’ve done using negation introduction thus far.
You have to get the absurdity by negation elimination, so you have to prove something that is
negated above. And you look at the dominant operator of the thing you have to prove and use the
introduction rule for that operator. Here the dominant operator is a universal quantifier, so we
have to use universal introduction, which tells us we will have to start a new subproof beginning with an arbitrary name. Our proof will look like the following:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∀y(Ly)
3. | ∀x(Bxb)          a for ¬ intro.
4. | | [c]            for ∀ intro.
   | |
o. | | Lc
m. | ∀y(Ly)           4-o ∀ intro.
n. | ⊥                2,m ¬ elim.
¬∀x(Bxb)              3-n ¬ intro.
And now that our arbitrary name is introduced it is safe to do universal eliminations on lines 1 and 3.
Proof:
1. ∀x(Bxb → Lx)
2. ¬∀y(Ly)
3. | ∀x(Bxb)          a for ¬ intro.
4. | | [c]            for ∀ intro.
5. | | (Bcb → Lc)     1 ∀ elim.
6. | | Bcb            3 ∀ elim.
o. | | Lc
m. | ∀y(Ly)           4-o ∀ intro.
n. | ⊥                2,m ¬ elim.
¬∀x(Bxb)              3-n ¬ intro.
Then the top and bottom of the proof connect via simple propositional logic.
Proof:
1. ∀x(Bxb → Lx)
2. ¬∀y(Ly)
3. | ∀x(Bxb)          a for ¬ intro.
4. | | [c]            for ∀ intro.
5. | | (Bcb → Lc)     1 ∀ elim.
6. | | Bcb            3 ∀ elim.
7. | | Lc             5,6 → elim.
8. | ∀y(Ly)           4-7 ∀ intro.
9. | ⊥                2,8 ¬ elim.
10. ¬∀x(Bxb)          3-9 ¬ intro.
Cool beans!
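This negation-and-quantifier interaction can also be rendered in Lean 4 (the names are mine): `intro h3` is the assumption of ∀x(Bxb) for negation introduction, and `apply h2` is the step of getting absurdity by negation elimination.

```lean
-- ∀x (Bxb → Lx), ¬∀y (Ly) ⊢ ¬∀x (Bxb)
example {α : Type} (b : α) (B : α → α → Prop) (L : α → Prop)
    (h1 : ∀ x, B x b → L x) (h2 : ¬ ∀ y, L y) :
    ¬ ∀ x, B x b := by
  intro h3           -- assume ∀x (Bxb), for ¬ introduction
  apply h2           -- absurdity will come by ¬ elimination against h2
  intro c            -- arbitrary name for ∀ introduction
  exact h1 c (h3 c)  -- the two ∀ eliminations plus → elimination
```

The proof is intuitionistically fine: no DNE is needed.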
To Be Added: illustration of what goes awry when restriction of eigenvariables is not
upheld.
Homework 15:
1. ∀x(Px → Qx), ∀x(Qx → Rx), ∀x(Rx → Sx) |- ∀x(Px → Sx)
2. ∀x(Bxb → Lx), (Bab) |- La
3. ∀x(Bxb → Lx), ¬La |- ¬(Bab)
4. ∀x(Px → Qx), ∀x¬(Qx) |- ∀x¬(Px)
5. ∀x(Px → Qx), ∀x(Px → Rx) |- ∀x(Qx → Rx)
17- The Existential Quantifier
The Existential Quantifier has a very simple introduction rule, which we can represent thus:
m. P[a]
-----------
∃xP[x]                m ∃ intro.
[Note: x must not be bound in P[a/x]!]
Here “a” can be any individual constant in the language. Just as with the language extended to include the universal rule, for individual constants we'll use the first five letters of the English alphabet (i.e. a, b, c, d, e) and subscript e's just in case we need more (i.e. e1, e2, e3. . .). For the variables bound by the quantifiers we will use: x, y, z, z1, z2, z3. . .
In any case, the rule basically says that if something is true of any particular thing for which you have a name then it is true for at least one x (or y or z or z1 or z2, etc.).
Here’s an example:
Claim: Pa, Qa |- ∃x(Px ˄ Qx)
Proof:
1. Pa
2. Qa
3. (Pa ˄ Qa)          1,2 ˄ intro.
4. ∃x(Px ˄ Qx)        3 ∃ intro.
Very straightforward. However, there are a couple of provisos. Many sentences will have more than one quantifier, and you can only apply the existential introduction rule at the outside of the sentence. Here is a correct example:
Claim: ∃y(Pb ˄ Qy) |- ∃x∃y(Px ˄ Qy)
Proof:
1. ∃y(Pb ˄ Qy)
2. ∃x∃y(Px ˄ Qy)      1 ∃ intro.
But we could not have introduced the “∃x” in between the “∃y” and “(Pb ˄ Qy)”.
And consider the note: “x must not be bound in P[a/x]!” This says that the variable we pick to replace the a in the sentence cannot be bound by another quantifier prior to adding the existential. So the following is incorrect:
1. ∃x(Rxa)
2. ∃x∃x(Rxx)          1 ∃ intro.
The x replacing the a would have already been bound! Note that if it were valid, we would be able to prove that all relations are reflexive, which is not true. If we wanted to do an existential introduction on the a, we would have had to pick a different variable, such as the following:
1. ∃x(Rxa)
2. ∃y∃x(Rxy)          1 ∃ intro.
This is correct.
The elimination rule for the existential quantifier is more interesting. We can represent it thus:
m. ∃x(P[x])
n. | P[a]             a for ∃ elim.
   |
o. | R
-----------
R                     m, n-o ∃ elim.
[Note: “a” must be arbitrary in that it cannot occur outside of lines n-o!]
This rule says that if you are trying to prove something from an existentially quantified statement you must first assume that the claim holds for an arbitrary object and then prove the claim in question from that assumption. Just as with the universal introduction rule, the way we do this is to introduce a new individual constant at the top of the subproof, but here we also assume that our existentially quantified statement is instantiated by the object bearing our arbitrary name. But since the name is arbitrary, it could name anything. But then it follows that the claim proved at the bottom of the subproof must be true, no matter what our arbitrary name picked out. Consider the following claim:
Claim:x(Px ˄ Qx),x(Qx → Rx) |- x(Rx)
We start our proof in the normal way, by writing the premises at the top and the conclusion at the
bottom.
60
Proof:
1.x(Px ˄ Qx)
2.x(Qx → Rx)
x(Rx)
Then we have a choice of whether to start the subproof for the existential elimination or to go
ahead and do the universal elimination. Here is an essential point analogous to our earlier point
that if one is proving a universally quantified claim one must start the subproof for the universal
introduction. Since the arbitrary name for an existential elimination cannot occur outside of the
subproof, you must start the subproof for your existential elimination on line 3! Thus your
first step should be like this:
Proof:
1. ∃x(Px ˄ Qx)
2. ∀x(Qx → Rx)
3. | Pb ˄ Qb          a for ∃ elim.
   |
   | ∃x(Rx)
∃x(Rx)                1, 3-? ∃ elim.
Note how similar existential elimination is to disjunction elimination. In both cases you cite the
line with the dominant operator being eliminated and the relevant subproof(s). In both cases the
sentence at the end of the subproof(s) is exactly the same sentence as what you are trying to
prove. Note something important for existential elimination here though. Since the arbitrary
name picked at the beginning of the subproof cannot occur outside of the subproof, and since the
line at the end of the subproof is duplicated in the conclusion, the arbitrary name will not occur
at the last line of the subproof either.
At this point things are simple. You just eliminate on line 2, using the name “b.”
Proof:
1. ∃x(Px ˄ Qx)
2. ∀x(Qx → Rx)
3. | Pb ˄ Qb          a for ∃ elim.
4. | (Qb → Rb)        2 ∀ elim.
   |
   | ∃x(Rx)
∃x(Rx)                1, 3-? ∃ elim.
And then you just do normal propositional logic to get “Rb,” which will allow you to introduce
the existential you need at the end of the subproof.
Proof:
1. ∃x(Px ˄ Qx)
2. ∀x(Qx → Rx)
3. | Pb ˄ Qb          a for ∃ elim.
4. | (Qb → Rb)        2 ∀ elim.
5. | Qb               3 ˄ elim.
6. | Rb               4,5 → elim.
7. | ∃x(Rx)           6 ∃ intro.
8. ∃x(Rx)             1, 3-7 ∃ elim.
Just as universal introduction is analogous to conditional introduction, existential elimination is analogous to disjunction elimination. When you have a disjunction you start a subproof from each disjunct and end each with whatever (sub)conclusion you need at that point in the proof. When you have an existential you start a subproof by instantiating that claim on an arbitrary name and then proving your (sub)conclusion from the instantiated claim.
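The first existential-elimination proof above can be sketched in Lean 4 with Mathlib's `obtain` tactic (an assumption on my part that Mathlib is available); `obtain` is exactly the step of instantiating the existential on an arbitrary name:

```lean
-- ∃x (Px ∧ Qx), ∀x (Qx → Rx) ⊢ ∃x (Rx)
example {α : Type} (P Q R : α → Prop)
    (h1 : ∃ x, P x ∧ Q x) (h2 : ∀ x, Q x → R x) :
    ∃ x, R x := by
  obtain ⟨b, hPb, hQb⟩ := h1  -- ∃ elimination: fix a witness b with P b and Q b
  exact ⟨b, h2 b hQb⟩         -- ∃ introduction on R b
```

The witness b is fresh, mirroring the restriction that the arbitrary name cannot occur outside the subproof.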
Here’s another proof involving two quantifiers:
Claim: x(Bxb → Lx), x(Bxb) |- y(Ly)
As with the previous handouts let’s interpret “Bxy” as saying that x is as big as y, and “Lx” as
saying that x is large. Then we can translate the Claim into English as:
Claim: For all x, (if x is as big as b, then x is large). There exists an x such that, x is
bigger than b |- There exists a y, such that y is large.
Again, our proof system allows us to show how this works. Remember, since the existential
elimination rule requires you to use a name that cannot occur outside of the subproof, if your
premise is an existential, you must start by instantiating the arbitrary name. And since “b”
already occurs in the premises, you cannot use it. So your proof must start like this:
Proof:
1. ∀x(Bxb → Lx)
2. ∃x(Bxb)
3. | (Bcb)            a for ∃ elim.
   |
   | ∃y(Ly)
∃y(Ly)                2, 3-? ∃ elim.
Then, just as before, we can do our universal elimination, using the new name we have
introduced.
Proof:
1. ∀x(Bxb → Lx)
2. ∃x(Bxb)
3. | (Bcb)            a for ∃ elim.
4. | (Bcb → Lc)       1 ∀ elim.
   |
   | ∃y(Ly)
∃y(Ly)                2, 3-? ∃ elim.
And now again we can just use propositional logic to connect the top part and bottom part of the
proof.
Proof:
1. ∀x(Bxb → Lx)
2. ∃x(Bxb)
3. | (Bcb)            a for ∃ elim.
4. | (Bcb → Lc)       1 ∀ elim.
5. | Lc               3,4 → elim.
6. | ∃y(Ly)           5 ∃ intro.
7. ∃y(Ly)             2, 3-6 ∃ elim.
Again, if your premise is an existentially quantified statement, you should start the subproof for the elimination immediately. In the previous proof, if we had done the universal elimination first then we would not have been able to use the name that replaced the variable, because it would have occurred outside of the subproof for the existential elimination. So you have to introduce the arbitrary name as soon as possible. The only exception is if there is a universally quantified statement in the conclusion, where you might need to introduce an arbitrary name to introduce on that.
Let us do one more proof illustrating how negation rules interact with the rules for the quantifiers.
Claim: ∀x(Bxb → Lx), ¬∃y(Ly) |- ¬∃x(Bxb)
The conclusion is a negation, so we start doing a negation introduction:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
   |
n. | ⊥                ?,? ¬ elim.
¬∃x(Bxb)              3-n ¬ intro.
This is no different from what we would have done if the proof was in propositional logic. The conclusion is a negation, so we must assume the claim with the negation taken away and prove absurdity.
And how do we get absurdity? By negation elimination! So just as before we look at our premises for negated claims that we will be able to do negation elimination on. In this case it is premise 2. So we know that our proof will look like this:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
   |
   | ∃y(Ly)
n. | ⊥                2,? ¬ elim.
¬∃x(Bxb)              3-n ¬ intro.
Again, this is no different from all of the proofs we've done using negation introduction thus far. You have to get the absurdity by negation elimination, so you have to prove something that is negated above. And you look at the dominant operator of the thing you have to prove and use the introduction rule for that operator. Here the dominant operator is an existential quantifier, so we have to use existential introduction, but we have no idea what name will be instantiated to allow the introduction. So let's look at our premises.
The key thing here is that on line 3 we have an existential that we are going to have to eliminate on, and given the restriction that the arbitrary name occurring as the premise of the subproof must be completely arbitrary, we have to, have to, have to get that subproof going before we eliminate the universal on line 1.
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
4. | | Bcb            a for ∃ elim.
   | |
   | | ∃y(Ly)
   | ∃y(Ly)           3, 4-? ∃ elim.
n. | ⊥                2,? ¬ elim.
¬∃x(Bxb)              3-n ¬ intro.
Now we can eliminate on the universal in line 1, picking the name “c” to instantiate on, since that will allow us to derive something from line 4.
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
4. | | Bcb            a for ∃ elim.
5. | | Bcb → Lc       1 ∀ elim.
   | |
   | | ∃y(Ly)
   | ∃y(Ly)           3, 4-? ∃ elim.
n. | ⊥                2,? ¬ elim.
¬∃x(Bxb)              3-n ¬ intro.
And then we can use propositional logic to get “Lc,” which will allow us to do the existential introduction we need to connect the top and bottom of the proof.
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
4. | | Bcb            a for ∃ elim.
5. | | Bcb → Lc       1 ∀ elim.
6. | | Lc             4,5 → elim.
7. | | ∃y(Ly)         6 ∃ intro.
8. | ∃y(Ly)           3, 4-7 ∃ elim.
9. | ⊥                2,8 ¬ elim.
10. ¬∃x(Bxb)          3-9 ¬ intro.
It is important to realize that we also could have proven absurdity at the end of the subproof starting on line 4, i.e.:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
4. | | Bcb            a for ∃ elim.
5. | | Bcb → Lc       1 ∀ elim.
6. | | Lc             4,5 → elim.
7. | | ∃y(Ly)         6 ∃ intro.
8. | | ⊥              2,7 ¬ elim.
9. | ⊥                3, 4-8 ∃ elim.
10. ¬∃x(Bxb)          3-9 ¬ intro.
This proof is just as licit as the previous, and in some ways is more intuitive. Very early on we realized the proof would look like this:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
   |
n. | ⊥
¬∃x(Bxb)              3-n ¬ intro.
Then if we follow our general rule to always start an existential elimination (or universal introduction) as soon as possible, so as to introduce the arbitrary name we can use as soon as possible, the next step would have looked like this:
Proof:
1. ∀x(Bxb → Lx)
2. ¬∃y(Ly)
3. | ∃x(Bxb)          a for ¬ intro.
4. | | Bcb            a for ∃ elim.
   | |
n. | ⊥                3, 4-? ∃ elim.
¬∃x(Bxb)              3-n ¬ intro.
Then the proof fills in to yield the proof given three proofs ago.
The moral here is that sometimes you might get absurdity by disjunction or existential elimination, not always by negation elimination. In this system (given that we are not allowing sentences such as (P → ⊥)) the overwhelming majority of proofs where absurdity is derived by disjunction or existential elimination can be normalized into ones where it is only derived by negation elimination (exceptions should only involve deriving negated existentials and negated disjunctions).
One final note: In propositional logic any two of conditional, conjunction, or disjunction can be defined in terms of one of the others and negation. Examples:
(P → Q) -| |- ¬(P ˄ ¬Q) -| |- (¬P ˅ Q)
(P ˄ Q) -| |- ¬(P → ¬Q) -| |- ¬(¬P ˅ ¬Q)
(P ˅ Q) -| |- (¬P → Q) -| |- ¬(¬P ˄ ¬Q)
These equivalences can be verified via truth tables or by doing all twelve of the proofs (I think we've done most of them in class already). Together they show that one could form an expressively complete propositional logic with just negation and one of the other propositional operators, as you can always use the above translation manuals to translate away the two propositional connectives that you want to show to be superfluous. Say you only wanted to use a logic with negation and the conditional; then just translate every conjunction and every disjunction into the form given in the middle column above. Of course the resulting system would be ungainly.
Also note that (as we saw in class) many of these equivalences are not valid in intuitionistic logic. For example, one can only prove (P → Q) |- (¬P ˅ Q) if one uses DNE (or, as we showed in class, the law of excluded middle, which is equivalent to DNE mod intuitionistic logic).
We can now ask if something similar holds for our quantified system of logic. Is a system with just the existential expressively complete with respect to the system of first order logic with existential and universal? Does the same claim hold if we switch existential and universal? Yes, and yes, because we have the following results.
∃x(Px) -| |- ¬∀x¬(Px)
∀x(Px) -| |- ¬∃x¬(Px)
Again: (1) it would be horrendous to actually use a system without both quantifiers, and (2) the full equivalences require us to go beyond intuitionist logic and use DNE. Here I will provide proofs of all four claims and we will see which ones require DNE.
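Before the syntactic proofs, it may help to see the equivalences semantically. The following Python sketch brute-forces every extension of a one-place predicate P over a three-element domain (the particular domain is my choice; this is a finite check, not a proof):

```python
from itertools import product

# ∃x Px ⊣⊢ ¬∀x ¬Px and ∀x Px ⊣⊢ ¬∃x ¬Px, checked on all 8 extensions
# of P over a small finite domain.
domain = ["a", "b", "c"]
for values in product([True, False], repeat=len(domain)):
    P = dict(zip(domain, values))
    exists_P = any(P[x] for x in domain)
    forall_P = all(P[x] for x in domain)
    assert exists_P == (not all(not P[x] for x in domain))  # ∃ as ¬∀¬
    assert forall_P == (not any(not P[x] for x in domain))  # ∀ as ¬∃¬
print("both dualities hold on all", 2 ** len(domain), "extensions")
```

Python's `any` and `all` are themselves finite existential and universal quantifiers, so the check is almost a restatement of the claim.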
Claim: x(Px) |- x(Px)
Proof:
1. x(Px)
2. | Pa
a for  elim.
3. | | x(Px) a for  intro.
4. | | Pa
3  elim.
5. | | 
2,4  elim.
6. | x(Px) 3-5  intro.
7. x(Px) 1, 2-6  elim.
Note that we could also do the same proof by starting the negation introduction first, as long as we don't eliminate on the universal until after the existential elimination subproof has begun, i.e.:
1. ∃x¬(Px)
2. | ∀x(Px)           a for ¬ intro.
3. | | ¬Pa            a for ∃ elim.
4. | | Pa             2 ∀ elim.
5. | | ⊥              3,4 ¬ elim.
6. | ⊥                1, 3-5 ∃ elim.
7. ¬∀x(Px)            2-6 ¬ intro.
This proof is just as good as the previous one establishing the same result. Now let's prove the other direction, which requires strictly classical logic. The proof starts like this:
1. ¬∀x(Px)
∃x¬(Px)
But at this point you clearly can't use negation elimination on line 1 since you don't have “∀x(Px).” And there's no way to use existential introduction yet to get the conclusion because you have no premises to get you something of the form “¬Pa.” When you get stuck like this, the only option is to go strictly classical, assuming (for negation introduction) the negation of what you are trying to prove, so that you can prove the double negation of the conclusion and then use DNE. So your proof is going to look like the following.
1. ¬∀x(Px)
2. | ¬∃x¬(Px)         a for ¬ intro.
   |
   |
¬¬∃x¬(Px)             2-? ¬ intro.
∃x¬(Px)               ? DNE
But then, as with all such proofs, you have to figure out how to get the absurdity symbol, and it is a safe bet that you will arrive at it by negation elimination. This means you need to find something negated higher up in the proof to be the basis of that negation elimination. And clearly it is not going to be “¬∃x¬(Px),” because “∃x¬(Px)” is what you are trying to prove in the first place. So the negation elimination is going to occur on line 1, which will make your proof look like this:
1. ¬∀x(Px)
2. | ¬∃x¬(Px)         a for ¬ intro.
   |
   | ∀x(Px)
   | ⊥                1, ? ¬ elim.
¬¬∃x¬(Px)             2-? ¬ intro.
∃x¬(Px)               ? DNE
But this means you are going to have to prove a universal, using universal introduction. So now you have to introduce an arbitrary name and build up the relevant kind of subproof.
1. ¬∀x(Px)
2. | ¬∃x¬(Px)         a for ¬ intro.
3. | | [a]            for ∀ intro.
   | |
   | | Pa
   | ∀x(Px)           3-? ∀ intro.
   | ⊥                1, ? ¬ elim.
¬¬∃x¬(Px)             2-? ¬ intro.
∃x¬(Px)               ? DNE
But now you have to do a negation introduction (followed by DNE) to get Pa, so the proof looks like this:
1. ¬∀x(Px)
2. | ¬∃x¬(Px)         a for ¬ intro.
3. | | [a]            for ∀ intro.
4. | | | ¬Pa          a for ¬ intro.
   | | |
   | | | ⊥            2,? ¬ elim.
   | | ¬¬Pa           4-? ¬ intro.
   | | Pa             ? DNE
   | ∀x(Px)           3-? ∀ intro.
   | ⊥                1,? ¬ elim.
¬¬∃x¬(Px)             2-? ¬ intro.
∃x¬(Px)               ? DNE
Again, we have to get absurdity, so let's start looking for negated claims higher up in the proof. And now we finally can get what we need to use negation elimination on line 2! This connects the top and bottom of the proof.
1. ¬∀x(Px)
2. | ¬∃x¬(Px)         a for ¬ intro.
3. | | [a]            for ∀ intro.
4. | | | ¬Pa          a for ¬ intro.
5. | | | ∃x¬(Px)      4 ∃ intro.
6. | | | ⊥            2,5 ¬ elim.
7. | | ¬¬Pa           4-6 ¬ intro.
8. | | Pa             7 DNE
9. | ∀x(Px)           3-8 ∀ intro.
10. | ⊥               1,9 ¬ elim.
11. ¬¬∃x¬(Px)         2-10 ¬ intro.
12. ∃x¬(Px)           11 DNE
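In a proof assistant the classical character of this direction shows up as an appeal to proof by contradiction. A sketch in Lean 4, assuming Mathlib's `by_contra` tactic (which packages ¬ introduction followed by DNE):

```lean
-- ¬∀x (Px) ⊢ ∃x ¬(Px): the direction that needs DNE.
example {α : Type} (P : α → Prop) (h1 : ¬ ∀ x, P x) : ∃ x, ¬ P x := by
  by_contra h2        -- assume ¬∃x ¬Px; DNE discharges this at the end
  apply h1            -- absurdity by ¬ elimination on the premise
  intro a             -- arbitrary name for ∀ introduction
  by_contra hPa       -- assume ¬P a; this is the inner ¬ intro + DNE step
  exact h2 ⟨a, hPa⟩   -- ∃ introduction contradicts h2
```

The two `by_contra` steps correspond exactly to the two places DNE appears in the Fitch proof.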
So we have shown how one could get rid of existential quantifiers by replacing them with negated universal negations. We'll close this out by providing the two proofs that accomplish the analogous task with respect to the universal quantifier.
Claim: ∀x(Px) |- ¬∃x¬(Px)
Proof:
1. ∀x(Px)
2. | ∃x¬(Px)          a for ¬ intro.
3. | | ¬Pa            a for ∃ elim.
4. | | Pa             1 ∀ elim.
5. | | ⊥              3,4 ¬ elim.
6. | ⊥                2, 3-5 ∃ elim.
7. ¬∃x¬(Px)           2-6 ¬ intro.
Claim: x(Px) |- x(Px)
Proof:
1. x(Px)
2. | [a]
for  intro.
3. | | Pa
a for  intro.
4. | | x(Px) 3  intro.
5. | | 
1,4  elim.
6. | Pa
3-5  intro.
7. | Pa
6 DNE
8. x(Px)
2-7  intro.
Notice how the last equivalence requires use of DNE as well.
To Be Added: illustration of what goes awry when restriction of eigenvariables is not
upheld.
Homework 16
1. ∃x(Px ˅ Qx), ∀x(Px → Rx) |- ∃x(Qx ˅ Rx)
2. ∃x(Px ˅ Qx), ∀x¬(Qx) |- ∃x(Px)
3. ∀x(Px ˅ Qx), ∀x¬(Px) |- ∀x(Qx)
4. ∀x(Bxb → Lx), ∃y¬(Ly) |- ∃x¬(Bxb)
5. ∀y(Ly), ∃x(Bxb) |- ∃x(Bxb → Lx)
Identity
To be Added: Intro and Elimination rules for identity, proof of symmetry and transitivity
as derived. Maybe addition of functions after that.
Modal Logic
Here are some notes about how to extend our framework to include the logic of necessity and possibility. Since most interesting philosophical issues touch on modal notions, it is an important part of the toolbox.13
Fitch style proper natural deduction formulation of propositional modal logic K
Alex Simpson's treatment of intuitionist modal logic in a Prawitz style tree system has made possible a very clear presentation of modal logic as an extension of CL as we have developed it in these notes. Here I'll just present K, and in the next section show how one can get T, S4, and S5 by progressively adding inference rules that force accessibility to be reflexive, transitive, and symmetric.
First, some sample key proofs: the characteristic K axiom as well as the interderivability of possibility and necessity in K (as one would expect, one direction involves non-intuitionistic resources).
Claim: |-K [](A --> B) --> ([]A --> []B)
Proof:
1. | x: [](A --> B)                  for --> introduction
2. | | x: []A                        for --> introduction
3. | | | xRy                         for [] introduction
4. | | | y: A                        2,3 [] elimination
5. | | | y: A --> B                  1,3 [] elimination
6. | | | y: B                        4,5 --> elimination
7. | | x: []B                        3-6 [] introduction
8. | x: ([]A --> []B)                2-7 --> introduction
9. x: [](A --> B) --> ([]A --> []B)  1-8 --> introduction
Claim: |-K <>A --> ~[]~A
Proof:
1. | x: <>A                          for --> introduction
2. | | xRy                           for <> elimination
3. | | y: A                          for <> elimination
4. | | | x: []~A                     for ~ introduction
5. | | | y: ~A                       2,4 [] elimination
6. | | | y: #                        3,5 ~ elimination
7. | | x: ~[]~A                      4-6 ~ introduction
8. | x: ~[]~A                        1, 2/3-7 <> elimination
9. x: <>A --> ~[]~A                  1-8 --> introduction
2b. Claim: |-K ~[]~A --> <>A
13 Much of this was worked out during a modal logic independent study with Graham Bounds, Michael Morrissey, Joel Musser, and Caitlin O Malley in Summer of 2012.
Proof:
1. | x: ~[]~A                        for --> introduction
2. | | x: ~<>A                       for ~ introduction
3. | | | xRy                         for [] introduction
4. | | | | y: A                      for ~ introduction
5. | | | | x: <>A                    3,4 <> introduction
6. | | | | x: #                      2,5 ~ elimination
7. | | | y: ~A                       4-6 ~ introduction
8. | | x: []~A                       3-7 [] introduction
9. | | x: #                          1,8 ~ elimination
10. | x: ~~<>A                       2-9 ~ introduction
11. | x: <>A                         10 DNE
12. x: ~[]~A --> <>A                 1-11 --> introduction
The system just makes two changes to a normal Fitch style proper natural deduction system of classical logic. First, lines in the proofs are indexed to worlds. Second, introduction and elimination rules for possibility and necessity are added. For years people have kind of understood that something analogous to the way arbitrary names/eigenvariables are treated in first-order logic proofs is happening in modal logic proofs too, and I think people first did this explicitly with tableau systems of modal logic. But Simpson really nailed it down with his system.
In my presentation of K, the only major departure from Simpson (besides the system being Fitch style as opposed to Prawitz style) is that I do not define negation in terms of absurdity and the conditional. If you do it Simpson's original way, you can't define an independent minimal modal logic! I thought this was actually an interesting enough result to get another Analysis paper (my and Roy Cook's first publication showed something similar with respect to negation, implication, and zero equals one). But via e-mail Simpson showed me that you could still get a proper minimal modal logic with negation defined in terms of conditional and absurdity by adding a rule to his system that states that if absurdity holds at one world, it holds at all worlds. Since I can't see anything wrong with such a rule, there goes the sequel to me and Roy's paper (which is very sad, because we all know that The Empire Strikes Back is by far the best in the series). But nonetheless, I still treat negation in the manner of Neil Tennant, with its own set of rules, because if the sequel to me and Roy's paper can be rebooted in light of Simpson's response, it's going to be in terms of what is necessary for pursuing Fitch style proper natural deduction systems of relevant modal logics.
Otherwise, everything for K is strictly analogous to Simpson's Prawitz tree style treatment (for T, S4, and S5 there are more extreme deviations which simplify the Fitch proofs; I'll show those in the next section).
So first, here are the propositional rules.
1. World Indexed Minimal Logic Propositional Rules
m. x: A
n. x: A                              m reiteration
m. x: A
n. x: B
o. x: (A & B) m, n & introduction
m. x: (A & B)
n. x: A m & elimination
m. x: (A & B)
n. x: B m & elimination
m. x: A
n. x: (A v B) m v introduction
m. x: B
n. x: (A v B) m v introduction
m. x: (A v B)
n. | x: A for v elimination
| :
o. | y: C
p. | x: B for v elimination
| :
q. | y: C
r. y: C                              m, n-o, p-q v elimination
[Note: y can be equal to x!]
m. |x: A for --> introduction
| :
n. | x: B
o. x: (A --> B) m-n --> introduction
m. x: (A --> B)
n. x: A
o. x: B m, n --> elimination
m. | x: A for ~ introduction
|:
n. | y: #
o. x: ~A
m-n ~ introduction
[Note: y can be equal to x!]
m. x: A
n. x: ~A
o. y: # m, n ~ elimination
[Note: y can be equal to x!]
2. World Indexed Intuitionist Absurdity rule:
m. x: #
:
n. y: A m # elimination
[Note again that y can be equal to x.]
3. World Indexed Double Negation Elimination (to get full classical logic)
m. x: ~~A
n. x: A                              m DNE
4. Introduction and elimination rules for [] and <>.
These rules allow us to introduce statements that world y is accessible from world x, via xRy.
Then, possibility elimination and necessity introduction have restrictions with the variables
referring to arbitrary worlds that are exactly analogous to restrictions on arbitrary names in
existential elimination and universal introduction in proper natural deduction presentations of
first-order predicate logic.
If you look at the rules you will see that the disanalogy comes with respect to possibility
introduction and necessity elimination, which require much more than existential introduction
and universal elimination. For example, the K rule for inferring <>P at world x requires that P be
true at some world y, and that xRy. As far as I can tell, the normal modal logics that strengthen K
basically make possibility introduction and necessity elimination easier to use because they give you more facts about the accessibility relation. In the next section I'll show how this works with T, S4, and S5. But here are the rules. I'm not writing a textbook here, so I won't explain much more,
but if you re-examine the proofs above you can see how cleanly they were derived both from the
bottom-up in terms of what is required to introduce the operator in question and from the top-down in terms of what is licensed by an assertion of a sentence with that operator dominant. It's
very, very easy to discover proofs in this system if you've been taught something like Barwise
and Etchemendy's approach to first order logic. And it's consonant with anti-representationalist
philosophy of logic that some bloggers here (and by "some" I mean at least one) and friends of
the blog like. But cool even if you don't like that philosophy of logic. It's still cool.
m. y: A
n. xRy
:
o. x: <>A                m, n <> introduction

m. x: <>A
n. | xRy                 for <> elimination
o. | y: A                for <> elimination
   | :
p. | z: B
q. z: B                  m, n/o-p <> elimination**

m. | xRy                 for [] introduction
   | :
n. | y: A
o. x: []A                m-n [] introduction*

m. x: []A
n. xRy
:
o. y: A                  m, n [] elimination
[Note again that with <> introduction and [] elimination the y in question can be equal to the x,
though in most proofs it will not. But, for example, showing that the system KD is included in
KT requires having them be equal.]
*Restriction on [] introduction: y can only occur in lines m-n [Note that as a result y must be
different from x.]
**Restriction on <> elimination: y can only occur in lines n/o-p! [Note that as a result y must be
different from x and z. However z and x might be identical!]
These restrictions on the world variables in <> elimination and [] introduction are exactly
analogous to restrictions on the use of arbitrary names in Fitch style natural deduction proofs for
first order predicate logic. It is this fact that makes proving things in our version of K so easy for
anyone competent with a properly presented Fitch style natural deduction system for first order
predicate logic.
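The analogy can be made concrete with a toy evaluator. Here is a minimal Python sketch (the tuple representation and all function names are my own, purely illustrative): []A at x is evaluated as a universal quantification over the worlds accessible from x, and <>A as an existential one, exactly mirroring the universal and existential quantifiers.

```python
# A tiny evaluator for modal formulas in a finite Kripke model.
# Formulas are nested tuples, e.g. ('box', ('atom', 'P')).
# R is a set of (x, y) pairs (xRy); V maps each atom to the set of
# worlds at which it is true. Illustrative code, not a standard library.

def holds(world, formula, R, V):
    """True iff formula is true at world in the model (R, V)."""
    op = formula[0]
    if op == 'atom':
        return world in V[formula[1]]
    if op == 'not':
        return not holds(world, formula[1], R, V)
    if op == 'imp':
        return (not holds(world, formula[1], R, V)) or \
               holds(world, formula[2], R, V)
    if op == 'box':   # like a universal: true at every accessible world
        return all(holds(y, formula[1], R, V) for (x, y) in R if x == world)
    if op == 'dia':   # like an existential: true at some accessible world
        return any(holds(y, formula[1], R, V) for (x, y) in R if x == world)
    raise ValueError(op)

# Two worlds, P true only at world 1, and world 1 accessible from world 0.
R = {(0, 1)}
V = {'P': {1}}
print(holds(0, ('dia', ('atom', 'P')), R, V))  # -> True
print(holds(0, ('box', ('atom', 'P')), R, V))  # -> True (1 is the only successor)
```

Note that []A comes out vacuously true at a world with no successors, just as a universal claim is vacuously true over an empty domain.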
Fitch style proper natural deduction formulation of propositional modal logics
T, S4, and S5.
In the above, we have presented proofs of the characteristic K axiom as well as the
interdefinability of possibility and necessity in classical K. Here I will show that adding
inferences that force the accessibility relation to behave in the predicted ways allows you to
prove the characteristic inferences for T, S4, and S5.
In the earlier post I should have been clearer about what I meant by a "proper" natural deduction
system, since this is what is at issue. A number of textbooks present deductive systems for modal
logic that involve more inference rules than modus ponens and substitution and call these
"natural deduction" systems. But as far as I am aware, they are all improper. A proper natural
deduction system: (1) proceeds by giving an introduction and elimination rule for each operator,
that are (2) provably harmonious (2a: the ability to prove uniqueness in the system of an
operator shows that the elimination rule is strong enough, given the introduction rule (a relative
form of completeness), 2b: the existence of a normal form proof for the system shows that the
elimination rule is not too strong given the introduction rule (a proof theoretic relative form of
soundness)), and such that (3) the system is conservative over at least atomic formulas (this is
Restall's weak conservativity, to be contrasted with Tennant's strong conservativity, which
requires the rules for each operator to be conservative with respect to the fragment of logic with
those rules removed). Conservativity is a proof theoretic system-wide soundness requirement.
There is no agreement in the literature on what a system-wide completeness requirement would
be. I wrote a post on this a while ago, using some ideas of Neil Tennant's. A
consequence of my idea is that proof theorists motivated by constructivism (and who thus accept
a meta-theoretic disjunction property) must accept that undecidability entails incompleteness,
and that as a result first-order logic is actually incomplete. I'll do another post on this soon since I
was not clear enough about two intuitionistically inequivalent formulations of completeness (the
weaker being intuitionistically provable for some semantics, the stronger provably false for first
order logic for anyone committed to the disjunction property). Also see this cool post on using
Tennant's ideas to come up with a non-monotonic proof theory for modal countermodels.
Anyhow, here are the proofs. Brief discussion afterwards about how we depart from Simpson.
Proof of characteristic T axiom, using reflexivity of the accessibility relation:
1. | x: []A              for --> Intro
2. | xRx                 reflexivity
3. | x: A                1, 2 []Elim
4. x: []A --> A          1-3 -->Intro
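The proof can also be cross-checked semantically. This Python sketch (illustrative code; the helper name `box` is my own) brute-forces every reflexive accessibility relation on two worlds and every valuation of A, and confirms that []A --> A holds at every world:

```python
from itertools import product

def box(world, R, true_at):
    """True iff []A holds at world, where true_at is the set of
    worlds at which A is true."""
    return all(y in true_at for (x, y) in R if x == world)

worlds = [0, 1]
pairs = [(x, y) for x in worlds for y in worlds]
for bits in product([False, True], repeat=len(pairs)):
    R = {p for p, b in zip(pairs, bits) if b}
    if not all((w, w) in R for w in worlds):
        continue  # keep only reflexive frames
    for a_bits in product([False, True], repeat=len(worlds)):
        true_at = {w for w, b in zip(worlds, a_bits) if b}
        for w in worlds:
            # []A --> A at w: reflexivity puts w among its own successors
            assert (not box(w, R, true_at)) or (w in true_at)
print("[]A --> A verified on all reflexive 2-world models")
```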
Proof of characteristic S4 axiom, using transitivity of the accessibility relation
1. | x: []A              for --> Intro
2. | | xRy               for []Intro
3. | | | yRz             for []Intro
4. | | | xRz             2, 3 transitivity
5. | | | z: A            1, 4 []Elim
6. | | y: []A            3-5 []Intro
7. | x: [][]A            2-6 []Intro
8. x: []A --> [][]A      1-7 -->Intro
Proof of characteristic S5 axiom, using symmetry and transitivity of the accessibility relation
1. | x: <>A              for -->Intro
2. | | xRy               for []Intro
3. | | | z: A            for <>Elim
4. | | | xRz             for <>Elim
5. | | | yRx             2 symmetry
6. | | | yRz             5, 4 transitivity
7. | | | y: <>A          3, 6 <>Intro
8. | | y: <>A            1, 3/4-7 <>Elim
9. | x: []<>A            2-8 []Intro
10. x: <>A --> []<>A     1-9 -->Intro
Notes: (1) Differences from Simpson: (1a) Simpson’s rules involve subproofs that show that if
something is provable with the relevant assumption about accessibility then it is provable
without it. This involves discharging the assumption and recopying the result. This kind of thing
is necessary in a Fitch presentation of existential elimination for predicate logic (and hence <>
Elim. in our K), to be able to state the restrictions on eigen-variables, but I can't see why it’s
necessary here. With Fitch style proofs, the proofs are shorter and easier to discover if we just
allow the relevant statements of accessibility to be inferences that follow from no assumptions. I
don't know if this makes normalization proofs more difficult, but I don't think it leads to
unsoundness. Consider that completely arbitrary worlds in T will be accessible from themselves,
so there is no analogous worry about eigenvariables. (1b) Simpson uses Euclideanness for S5
rather than reflexivity, symmetry, and transitivity. I think it's prettier to build up the systems this
way. (2) Note that reflexivity is very similar to the standard identity introduction rule of first
order predicate logic. And that the standard elimination rule of first order predicate logic allows
you to prove transitivity and symmetry. So one could almost certainly get S5 by adding an
analog to the identity elimination rule to reflexivity. I've played with this a bit and it seems to
work.
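As a semantic sanity check on the S4 and S5 proofs above, this Python sketch (again illustrative; the helper names are my own) brute-forces all two-world frames, filters by the relevant frame condition, and confirms the characteristic axioms at every world under every valuation of A:

```python
from itertools import product

def succs(w, R):
    return {y for (x, y) in R if x == w}

def box(w, R, ext):   # []A holds at w, given the extension of A
    return succs(w, R) <= ext

def dia(w, R, ext):   # <>A holds at w, given the extension of A
    return bool(succs(w, R) & ext)

worlds = [0, 1]
pairs = [(x, y) for x in worlds for y in worlds]
for bits in product([False, True], repeat=len(pairs)):
    R = {p for p, b in zip(pairs, bits) if b}
    transitive = all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)
    reflexive = all((w, w) in R for w in worlds)
    symmetric = all((y, x) in R for (x, y) in R)
    for a_bits in product([False, True], repeat=len(worlds)):
        A = {w for w, b in zip(worlds, a_bits) if b}
        boxA = {w for w in worlds if box(w, R, A)}
        if transitive:
            # S4: []A --> [][]A at every world
            assert all((w not in boxA) or box(w, R, boxA) for w in worlds)
        if reflexive and symmetric and transitive:
            # S5: <>A --> []<>A at every world
            diaA = {w for w in worlds if dia(w, R, A)}
            assert all((w not in diaA) or box(w, R, diaA) for w in worlds)
print("S4 and S5 axioms verified on all qualifying 2-world frames")
```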
Predicate Modal Logic
To be Added: discussion of Marcus formula.
Actuality
To be Added: discussion of Lance/Restall on 2 dimensionalism.
Higher-Order Modal Logic
To be Added: Extended intro and elimination rules.
Montague’s Intensional Logic
To be Added: intro and elimination rule for lambda.
Limitations
Is our proof system incomplete, or classical semantics unsound?
Appendices
Rules for First-Order Classical Predicate Logic
m. P
:
   P                     m reit.
-----------------------------------------------------------------------
m. (P ∧ Q)
:
   P                     m ∧ elim
   Q                     m ∧ elim

m. P
n. Q
:
   (P ∧ Q)               m, n ∧ intro
-----------------------------------------------------------------------
m. P
:
   (P ∨ Q)               m ∨ intro

m. Q
:
   (P ∨ Q)               m ∨ intro

m. (P ∨ Q)
n. | P                   a for ∨ elim
   | :
o. | R
p. | Q                   a for ∨ elim
   | :
q. | R
   R                     m, n-o, p-q ∨ elim
-----------------------------------------------------------------------
m. (P → Q)
n. P
:
   Q                     m, n → elim

m. | P                   a for → intro
   | :
n. | Q
   (P → Q)               m-n → intro
-----------------------------------------------------------------------
m. ~P
n. P
:
   ⊥                     m, n ~ elim

m. | P                   a for ~ intro
   | :
n. | ⊥
   ~P                    m-n ~ intro
-----------------------------------------------------------------------
m. ⊥
:
   P                     m ⊥ rule

m. ~~P
:
   P                     m DNE
-----------------------------------------------------------------------
m. ∀xP[x]
:
   P[a]                  m ∀ elim

m. | [a]                 for ∀ intro
   | :
n. | P[a]
   ∀xP[x]                m-n ∀ intro
{Note: "a" cannot occur outside of the subproof given in lines m-n}
-----------------------------------------------------------------------
m. P[a]
:
   ∃xP[x]                m ∃ intro

m. ∃xP[x]
n. | [a] P[a]            a for ∃ elim
   | :
o. | R
   R                     m, n-o ∃ elim
{Note: "a" cannot occur outside of the subproof given in lines n-o}
-----------------------------------------------------------------------
Test Helps
Test 1 Help
Be able to do truth tables and use them to tell if a sentence is tautologous, contradictory, or
contingent.
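A truth table simply enumerates every assignment of truth values to the atoms. This Python sketch (the helper names are my own) classifies a sentence by checking whether it comes out true on all rows, no rows, or some but not all:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def classify(sentence, n_atoms):
    """sentence is a boolean function of n_atoms truth values."""
    rows = [sentence(*vals) for vals in product([False, True], repeat=n_atoms)]
    if all(rows):
        return "tautologous"
    if not any(rows):
        return "contradictory"
    return "contingent"

# Contraposition is a tautology:
print(classify(lambda p, q: implies(implies(p, q), implies(not q, not p)), 2))
# -> tautologous
print(classify(lambda p: p and not p, 1))   # -> contradictory
print(classify(lambda p, q: p and q, 2))    # -> contingent
```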
Test 2 Help
I won’t say if any of these will be on the exam, but I will say that if you can do these (and
problems like them) well you will do fine on the exam. Since all of the basic proof strategies are
manifest in the solutions to these, understanding them will make the exam easy.
1. (P  Q), (R  S) |- (P S)
2. (P  S) |- ((R  S)  P)
3. (P  Q), (Q  R), P |- R
4. (P  S), P |- 
5. P, (P  Q) |- 
6. (P  S), P |- R
7. P |- P
8. (P  S) |- P
9. (P  Q), (Q  R) |- (P  R)
10. (P  Q) |- ((R  S)  (P  S))
11. |- (P  S)  P
12. P |- (S  P)
13. |- (P  P)  R
14. (P  Q), Q |- P
15. (P  S) |- P
16. (P  Q), (Q  R), R |- P
17. (P  Q) |- P
18. (P  R), (P  Q), (R  S), (S  Q) |- Q
19. (P  R), (R  S) |- (P  S)
20. (P  (Q  R)), (R  (S  T)), Q, (P  T) |- (S  T)
21. (P  Q), P |- Q
22. (P  (Q  S)), (Q  R), (S  R) |- P
23. (P v Q) |- (P  Q)
24. (P  (Q  R)) |- ((P  Q)  (P  R))
25. ((P  Q)  (P  R)) |- (P  (Q  R))
26. (P  (Q  R)) |- ((P  Q)  (P  R))
27. (P  Q) |- (P  Q)
28. (P  Q) |- (P  Q)
29. (P  Q) |- (P  Q)
30. (P  Q) |- (P  Q)
31. (P  Q) |- (Q  P)
32. (Q  P) |- (P  Q)
33. (P  Q) |- (P  Q)
34. (P  Q) |- (P  Q)
35. (P  Q) |- (P  Q)
36. (P  Q) |- (P  Q)
Test 3 Help
2010 FINAL STUDY GUIDE
Do all of these in study groups, making sure that you understand why each proof is valid by the
rules below. Then memorize each proof. If you can do this, you will have no problem on the
final, as all of the strategies you will need for proof construction are exhausted by the following.
1. (P  Q), (R  S) |- (P S)
2. x(Px  Qx), x(Rx  Sx) |- x(Px  Sx)
3. (P  S) |- ((R  S)  P)
4. x(Px  Sx) |- x((Rx  Sx)  Px))
5. (P  Q), (Q  R), P |- R
6. x(Px  Sx), x(Qx  Rx), x(Px) |- x(Rx)
7. (P  S), P |- 
8. x(Px  Sx), x(Px) |- 
9. P, (P  Q) |- 
10. x(Px), x(Px  Sx) |- 
11. (P  S), P |- R
12. x(Px  Sx), x(Px) |- R
13. P |- P
14. (P  S) |- P
15. x(Px  Sx) |- x(Px)
16. (P  Q), (Q  R) |- (P  R)
17. x(Px  Qx), x(Qx  Rx) |- x(Px  Rx)
18. (P  Q) |- ((R  S)  (P  S))
19. x(Px  Qx) |- x(((Rx  Sx)  (Px  Sx))
20. |- (P  S)  P
21. |- x((Px  Sx)  Px)
22. P |- (S  P)
23. x(Px) |- x(Sx  Px)
24. |- (P  P)  R
25. |- x((Px  Px)  Rx)
26. (P  Q), Q |- P
27. x(Px  Qx), xQx |- xPx
28. (P  Q), (Q  R), R |- P
29. x(Px  Qx), x(Qx  Rx), xRx |- xPx
30. (P  Q) |- P
31. x(Px  Qx) |- x(Px)
32. (P  R), (P  Q), (R  S), (S  Q) |- Q
33. x(Px  Rx), x(Px  Qx), x(Rx  Sx), x(Sx  Qx) |- xQ
34. (P  R), (R  S) |- (P  S)
35. x(Px  Rx), x(Rx  Sx) |- x(Px  Sx)
36. (P  (Q  R)), (R  (S  T)), Q, (P  T) |- (S  T)
37. x(Px  Qx), xPx |- xQx
38. x(Px  Qx), xPx |- xQ
39. (P  (Q  S)), (Q  R), (S  R) |- P
40. x(Px  (Qx  Sx)), x(Qx  Rx), x(Sx  Rx) |- xP
41. (P v Q) |- (P  Q)
42. x(Px v Qx) |- x(Px  Qx)
43. (P  Q) |- (P  Q)
44. x(Px  Qx) |- x(Px  Qx)
45. (P  Q) |- (P  Q)
46. x(Px  Qx) |- x(Px  Qx)
47. (P  Q) |- (P  Q)
48. x(Px  Qx) |- x(Px  Qx)
49. (P  Q) |- (P  Q)
50. x(Px  Qx) |- x(Px  Qx)
51. (P  Q) |- (Q  P)
52. x(Px  Qx) |- x(Qx  Px)
53. (Q  P) |- (P  Q)
54. x(Qx  Px) |- x(Px  Qx)
55. (P  Q) |- (P  Q)
56. x(Px  Qx) |- x(Px  Qx)
57. (P  Q) |- (P  Q)
58. x(Px  Qx) |- x(Px  Qx)
59. (P  Q) |- (P  Q)
60. x(Px  Qx) |- x(Px  Qx)
61. (P  ~Q) |- (P  Q)
62. x(Px  ~Qx) |- x(Px  Qx)