Proof Search for Modal Logical Systems (S4, S5, GL)
Ramyaa
Date of your defense
Department of Philosophy
Carnegie Mellon University
Pittsburgh, PA 15213
Thesis Committee:
Wilfried Sieg
Joseph D. Ramsey
A Thesis Submitted for the Degree of Master of Science in Logic, Computation and Methodology
Copyright 2012 Ramyaa
Contents
1. Background 1
1.1. Modal Logic 2
1.2. Logic of Provability 4
1.3. Natural deduction 12
2. Proof search calculi 23
2.1 Systems S4, S5 and GL 23
2.2 Natural Deduction Rules 30
2.3 Intercalation Calculus Rules 34
2.4 Soundness 35
2.5 Completeness 39
3. Implementation 51
3.1 AProS 51
3.2 Proof search in Modal Logic 54
3.3 Implementational Details 56
3.4 Examples 62
Appendix 65
References 117
CHAPTER 1
Background
This thesis presents the work done on proof search for the modal logical systems S4, S5
and the Logic of Provability (GL). An intercalation calculus ([10]) was used as the underlying
logical calculus, and the proof search was automated using the theorem prover AProS [1]. The
inference rules in the intercalation calculus for the systems S5 and GL, and their soundness and
completeness results were based on the work done for the system S4 in [9].
This chapter gives the theoretical background. Chapter 2 describes the systems GL, S4
and S5 and the intercalation calculus rules, along with their soundness and completeness results;
this gives the framework for proof search in these systems. Chapter 3 explains the implementation of
the automated proof search procedure. The current chapter is organized as follows: section 1
introduces modal logic, i.e., the language and semantics of (classical propositional) modal logical
systems, in particular normal modal logical systems; section 2 discusses the Logic of Provability:
formal systems and provability, the representation of provability inside such systems, and the properties that a
logical system reasoning about provability should have, along with a brief sketch of the use of
modal logic as the logic of provability; section 3 gives details about natural deduction systems,
including Prawitz' system, normal proofs and the intercalation calculus.
1.1 Modal Logic
An expression used to qualify the truth of a statement is called a modal – for instance, "it
is necessary that …”. Modal logic is in part the study of reasoning with concepts that qualify
truth – strictly speaking, the notions of "necessity" (and its dual, "possibility"). Modal logic can
also be used to reason about a variety of other modal expressions, such as knowledge (and
belief), obligation (and permission), and provability (and consistency).
This thesis deals with the modal logic of provability and consistency - the Logic of
Provability (GL), and with the modal logics of necessity and possibility – systems S4 and S5.
These systems are normal modal logical systems with Kripke's possible worlds semantics.
This section gives:
- the language of classical propositional modal logic,
- the axioms and inference rules of normal modal logical systems,
- the Kripke semantics of modal logical systems.
1.1.1 Language
The language of a modal logical system contains sentential letters Pi (i ∈ N), the logical
connectives (&, ∨, →, ↔, ¬) and the modal operator □. The operator ◊ is defined as ¬□¬.
(The connective □ is used to represent necessity, knowledge, obligation or provability, and its dual ◊ is used to
represent possibility, belief, permission or consistency.)
The set of sentences of GL, S4 and S5 is the minimum set containing:
i. all sentential letters,
ii. (φ1 & φ2), (φ1 ∨ φ2), (φ1 → φ2), (φ1 ↔ φ2), (¬φ1) and (□φ1), where φ1 and φ2 are sentences.
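To make this grammar concrete, here is a small Python sketch (an editorial illustration, not part of the thesis; all names such as Atom, Bin and Box are chosen for illustration) that represents sentences as a recursive datatype and defines ◊ as the abbreviation ¬□¬:

```python
# A minimal sketch (not part of the thesis) of the sentence grammar above,
# using Python dataclasses; the names Atom, Not, Bin, Box are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    index: int          # sentential letter P_i

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class Bin:
    op: str             # one of '&', 'v', '->', '<->'
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Box:
    sub: "Formula"

Formula = Atom | Not | Bin | Box   # the set of sentences (requires Python 3.10+)

def diamond(phi):
    """The defined operator: diamond(phi) abbreviates ¬□¬phi."""
    return Not(Box(Not(phi)))
```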
1.1.2 Normal modal logical systems
A normal modal logical system contains the following axioms:
i. all propositional tautologies,
ii. distribution axioms, i.e., all sentences of the form (□(φ1→φ2) → (□φ1→□φ2)).
Such a system is also closed under the following operations:
i. modus ponens,
ii. necessitation, i.e., if φ is provable, so is □φ.
An arbitrary normal modal logical system contains the axioms and inference rules described
above, along with some additional axioms that distinguish different such systems from each other.
The axioms for the normal systems S4, S5 and GL are given in chapter 2.
A logical system S proves a formula φ, written as S ⊦ φ, if there exists a finite sequence of
formulae whose last formula is φ, such that each formula in the sequence is either an axiom of S
or follows from preceding formulae by one of the inference rules of S.
1.1.3 Semantics
Kripke’s possible worlds semantics is used as the semantic model for the systems studied.
The Kripke model for a modal logical system is a triplet M = 〈W, R, ⊩〉, where
i. W is a non-empty set,
ii. R is a binary relation on the elements of W, and
iii. ⊩ is a binary relation between elements of W and formulae.
The elements of the set W are known as possible worlds. R is called the accessibility
relation; for any two elements u and v of W, if uRv holds, then v is said to be accessible from u.
The relation ⊩ is called the valuation. For the propositional connectives, this relation mimics the truth
value assignment of propositional logic. Regarding □, for any element u of W, u⊩□φ if and only
if (∀v)(uRv → v⊩φ).
In a given model M, a sentence φ is said to be true at a world u (u ∈ W) if and only if
u⊩φ. A sentence φ is said to be valid in a model M = 〈W, R, ⊩〉 if and only if for all u in W, φ
is true at u. (We write this as M⊩φ). A sentence φ is said to be satisfiable in a model M = 〈W, R,
⊩〉 if and only if for some u in W, φ is true at u.
Finally, we can define when a sentence φ is a
semantic consequence of a set of sentences Γ and write Γ⊩φ. This relationship holds just in case,
for any model M such that M⊩ψ for all ψ ∈ Γ, it also holds that M⊩φ.
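The valuation clause for □ and the notion of validity in a model can be made concrete by the following illustrative sketch (assuming the toy Formula datatype from the previous sketch; not part of the thesis), which evaluates a formula at a world of a finite Kripke model:

```python
# A sketch (assuming the illustrative Formula datatype above) of the forcing
# relation u ⊩ φ in a finite Kripke model M = <W, R, ⊩>, with the valuation on
# atoms given by V: world -> set of atom indices true at that world.
def forces(W, R, V, u, phi):
    if isinstance(phi, Atom):
        return phi.index in V[u]
    if isinstance(phi, Not):
        return not forces(W, R, V, u, phi.sub)
    if isinstance(phi, Bin):
        l = forces(W, R, V, u, phi.left)
        r = forces(W, R, V, u, phi.right)
        return {'&': l and r, 'v': l or r, '->': (not l) or r, '<->': l == r}[phi.op]
    if isinstance(phi, Box):
        # u ⊩ □φ iff φ holds at every world v accessible from u
        return all(forces(W, R, V, v, phi.sub) for v in W if (u, v) in R)
    raise TypeError(phi)

def valid_in_model(W, R, V, phi):
    """φ is valid in the model iff it is true at every world."""
    return all(forces(W, R, V, u, phi) for u in W)
```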
Given a model M, the properties of its accessibility relation, such as reflexivity,
symmetry, transitivity, etc., determine it as a model for a particular modal logic. The properties of
the accessibility relation of a model required to make it a model of S4, S5 and GL are described in
chapter 2. Thus all models considered are models of some normal modal logical system. Since
all systems considered here are normal modal logical systems, all propositional tautologies and
all distribution axioms are valid in them, as are the sentences derived using modus ponens and
necessitation.
1.2 Logic of Provability
The Logic of Provability is a modal logic which studies the concept of formal provability
(and consistency). The system considered here is the modal logical system called GL after Gödel
and Löb; a detailed treatment of GL is given by Boolos [2]. This section introduces formal systems (in particular
Peano Arithmetic), provability within a formal system, the incompleteness theorems and the conditions needed to
derive them, and gives a brief description of GL (showing that it reasons about provability in PA).
1.2.1 Formal systems and provability
Peano Arithmetic (PA) is a formal system whose axioms are the axioms of classical first-order logic (including those for falsum), axioms for zero and successor, recursion axioms for
addition and multiplication, and the induction axiom scheme. PA’s inference rules are modus
ponens and generalization.
A proof of a formula φ in a formal axiomatic system S is a finite sequence of formulae
whose last formula is φ, such that each formula is either an axiom of S or follows from preceding
formulae by one of the inference rules of S. If there is a proof of the formula φ in S, φ is said to
be provable in (or a theorem of) S, written as S ⊦ φ. A formula φ is refutable in S if the negation
of φ is provable in S. A formal system S is consistent if it does not both prove and refute a sentence,
i.e., S does not prove a contradiction; S is (syntactically) complete if for every sentence φ, φ is
either provable or refutable in S.
Peano Arithmetic reasons about arithmetic. In order to reason about concepts like
provability inside PA, these concepts have to be arithmetized. This "arithmetization of meta-mathematics" (called Gödel numbering) is done by mapping each syntactic object, such as a formula,
to a number (called its Gödel number) using a constructive, one-to-one mapping. For this
process, it is necessary to have names within the system for syntactic objects; in Peano
Arithmetic, where x′ is the successor of x, the number 3 has the name 0‴; the latter syntactic
object is called a numeral. If the Gödel numbering associated 3 with the conjunction symbol
"&", then 0‴ would be the Gödel numeral for "&". Here, the Gödel numeral of a syntactic
object A is written in bold font, where A is the numeral for the Gödel number of A. Under this coding, all
classes of syntactic objects (variables, constants, connectives, formulae, proofs, etc.) except the theorems are
recursive (i.e., given an object O, it is decidable whether or not O is an object of type T, say of
type formula); the theorems are recursively enumerable (i.e., given an object O, it is semi-decidable
whether O is a theorem). This is explained in more detail below.
As mentioned, to reason about concepts like provability inside PA, they need to be
represented inside PA, i.e., a predicate in PA should, through Gödel numbering, represent the
proof relation (⊦) of PA. As shown by Gödel, there is a binary predicate proof in PA such that if
x is the Gödel number of a proof of the formula whose Gödel number is y, then PA proves
proof(x, y), and if it is not the case that x codes a proof of y, then PA proves ¬proof(x, y). Using this,
a predicate Bew(y) ("provable(y)") can be formulated as (∃x)proof(x, y). Bew(x) is called the
provability predicate or theorem predicate of PA. If PA proves φ, then PA proves Bew(φ). This fact
cannot be proved inside PA, i.e., PA cannot prove (φ → Bew(φ)) for all sentences φ (PA can prove
this for all Σ1 sentences). We also cannot prove the converse, i.e., we cannot prove in PA
(Bew(φ) → φ) for all sentences φ. In reasoning about provability via a logical system, we aim to capture this predicate Bew.
Incompleteness theorems
If a formal system is incomplete, there exists a sentence that is neither provable nor
refutable in the system. This limits provability inside the system, and is thus very relevant to the
logic that reasons about the system's provability, i.e., the logic of provability for that system.
Let Z be an arbitrary theory which contains a modicum of number theory and can prove
basic arithmetical facts, represent provability, etc. (Although we are interested in the provability of PA,
the incompleteness theorems hold for more general systems; hence we consider such general
systems, which can represent number theory and prove some basic arithmetical facts.) The provability
of Z is represented inside Z by a predicate Bewz (analogous to the provability predicate Bew of PA).
Consistency is represented in the system as "Z cannot prove a contradiction", i.e., ¬Bewz(⊥), where
⊥ is a placeholder for any contradiction.
Gödel's First Incompleteness Theorem (Rosser’s version): If Z is consistent, there is a
statement φ, which is neither provable nor refutable in Z.
Gödel's Second Incompleteness Theorem: If Z is consistent and satisfies the derivability
conditions (described below), then the consistency of Z is not provable inside Z ([4]).
Related theorems
Löb's Theorem: Given a formula φ for which Z proves the reflection principle
(Bewz(φ) → φ), Z also proves φ ([6]); i.e., if Z ⊦ Bewz(φ) → φ, then Z ⊦ φ. (Thus the reflection
principle Bewz(φ) → φ is provable only if φ itself is.)
The second incompleteness theorem can be derived directly from Löb's theorem.
Self-Reference-Lemma: Given any formula φ(y) of Z in which y is the only free variable,
there exists a sentence ψ of the language of Z such that Z ⊦ ψ ↔ φ(ψ).
This lemma is used crucially in proving the first incompleteness theorem and Löb's theorem.
The concept of provability is reflected inside a formal system by the provability predicate
Bewz. Hence, a logical system that reasons about provability needs a version of the Bewz
predicate which should mirror Bewz in terms of its power. As the incompleteness theorems are
relevant to provability inside the system, the version of the Bewz predicate used should have the
properties of Bewz that are needed to prove the incompleteness results. These properties are
discussed below.
1.2.2 Derivability conditions
Hilbert, Bernays and Löb proved that given an arbitrary theory Z (as above), and an
arbitrary formula B(x) of Z, the second incompleteness theorem for Z (with B(x) playing the role
of Bewz(x)) is derivable in Z if the following conditions hold ([5], [6]).
For any two sentences φ and ψ of Z,
D1: if Z ⊦ φ, then Z ⊦ B(φ)   (semirepresentability of the theorem predicate)
D2: Z ⊦ B(φ → ψ) → (B(φ) → B(ψ))   (provable closure under modus ponens)
D3: Z ⊦ B(φ) → B(B(φ))   (formalization of semirepresentability)
All these conditions hold for the predicate Bew for PA. Hence, the second incompleteness
theorem can be proved for PA. In addition to these three conditions, the self-reference lemma is
needed to capture provability. For instance, consider Löb’s theorem formalized as
DL: Z ⊦ B(B(φ) → φ) → B(φ)
A system Z with a predicate B satisfying D1, D2 and D3 uses the self-reference lemma to prove
DL. System Z as described can prove the self-reference lemma. So, this is not an issue. However,
if a modal logic is used to capture provability (i.e., has a representation of the predicate B, along
with axioms that capture the derivability conditions), it is easy to represent D1, D2, and D3 (in
terms of the predicate B), but not the self-reference lemma. However, Z with a predicate B
satisfying D1, D2 and DL can prove D3 without using the self-reference lemma (as shown
below). Further, DL can be represented in terms of B. Due to this, the Logic of Provability is
defined to have a predicate B(x) such that D1, D2 and DL are true for it.
Equivalence of D3 and DL (in the presence of D1 and D2)
(i) From D1, D2, and D3, Löb's theorem can be derived (using the self-reference lemma).
Proof: Consider the formula B(x) → φ with one free variable x. By the Self-Reference-Lemma,
there is a sentence ψ such that
(1) Z ⊦ ψ ↔ (B(ψ) → φ)   [Self-Reference-Lemma]
(2) Z ⊦ ψ → (B(ψ) → φ)   [from (1)]
(3) Z ⊦ B(ψ → (B(ψ) → φ))   [by D1]
(4) Z ⊦ B(ψ → (B(ψ) → φ)) → (B(ψ) → B(B(ψ) → φ))   [by D2]
(5) Z ⊦ B(ψ) → B(B(ψ) → φ)   [from (3), (4)]
(6) Z ⊦ B(B(ψ) → φ) → (B(B(ψ)) → B(φ))   [by D2]
(7) Z ⊦ B(ψ) → (B(B(ψ)) → B(φ))   [from (5), (6) by transitivity]
(8) Z ⊦ B(ψ) → B(B(ψ))   [by D3]
(9) Z ⊦ B(ψ) → B(φ)   [using (7), (8)]
(10) Z ⊦ (B(φ) → φ) → (B(ψ) → φ)   [using (9)]
(11) Z ⊦ (B(φ) → φ) → ψ   [from (10), (1) by transitivity]
(12) Z ⊦ B((B(φ) → φ) → ψ)   [by D1]
(13) Z ⊦ B((B(φ) → φ) → ψ) → (B(B(φ) → φ) → B(ψ))   [by D2]
(14) Z ⊦ B(B(φ) → φ) → B(ψ)   [from (12), (13)]
Hence, DL: (15) Z ⊦ B(B(φ) → φ) → B(φ)   [from (14), (9) by transitivity]
Note: This proof can be divided into inferences about the provability predicate (which involve
the derivability conditions) and those that do not. This distinction can be formalized as a meta-theory
(which reasons about provability) and an object theory. Though we are interested in
provability, Z need not differentiate between meta-theory and object theory, nor have inference rules
that allow one to shift between them. So, the proof above does not make this distinction. The proof
formalized in the appendix is done in AProS, and makes this distinction explicit.
(ii) From D1, D2 and DL, D3 can be proved.
Proof: Distribution of B over conjunction is used: &-distribution is Z ⊦ B(A & C) → (B(A) & B(C)). Proof sketch:
1. (A & C) → A; 2. B((A & C) → A) by D1; 3. B((A & C) → A) → (B(A & C) → B(A)) by D2;
4. B(A & C) → B(A) from 2, 3; similarly for C, and &-introduction then gives B(A & C) → (B(A) & B(C)).
(1) Z ⊦ B(B(φ) & φ) → (B(B(φ)) & B(φ))   [&-distribution]
(2) Z ⊦ B(B(φ) & φ) → B(φ)   [from (1)]
(3) Z ⊦ φ → (B(B(φ) & φ) → (B(φ) & φ))   [from (2)]
(4) Z ⊦ B(φ → (B(B(φ) & φ) → (B(φ) & φ)))   [from (3) by D1]
(5) Z ⊦ B(φ → (B(B(φ) & φ) → (B(φ) & φ))) → (B(φ) → B(B(B(φ) & φ) → (B(φ) & φ)))   [by D2]
(6) Z ⊦ B(φ) → B(B(B(φ) & φ) → (B(φ) & φ))   [from (4), (5)]
(7) Z ⊦ B(B(B(φ) & φ) → (B(φ) & φ)) → B(B(φ) & φ)   [by DL]
(8) Z ⊦ B(φ) → B(B(φ) & φ)   [from (6), (7)]
(9) Z ⊦ B(φ) → (B(B(φ)) & B(φ))   [from (8), (1)]
Hence Z ⊦ B(φ) → B(B(φ)).
These theorems were formulated in AProS and the proofs are attached at the end of the chapter.
1.2.4 Logic of provability for PA
A logic of provability for PA focuses on the provability predicate. One way to do this is to
use a modal logic with □ representing the predicate Bew. (The dual ◊, defined as ¬□¬, represents
consistency.) Since □ represents the provability predicate, it has to satisfy the derivability
conditions. Any normal modal logical system will satisfy D1 and D2 (D1 corresponds to
necessitation, and D2 to the distribution axiom). Adding D3 (□A→□□A) gives the system K4.
But K4 cannot prove some theorems about provability, such as Löb's theorem. As mentioned
earlier, this is because K4 cannot prove the self-reference lemma (or even represent it), which is
needed for proving Löb's theorem. Adding DL (□(□A→A)→□A) instead of D3 gives GL. Since D3
can be proved from D1, D2 and DL without using the self-reference lemma, GL can prove D3, i.e., K4 ≤ GL. This
can be shown directly by showing that GL proves D3; the proof proceeds exactly as proof
(ii) above. Proof (ii) gives the proof inside a system Z that is similar to PA, and simply replacing
the predicate B by □ gives the proof in GL, as all the axioms and inference rules used in proof (ii)
are available in GL.
The normal modal logical system GL (described in chapter 2) captures the provability of
PA. This is proved by translating GL sentences into PA sentences by means of a translation
function. A realization (^) is a function that assigns to each sentence letter of GL a sentence of
the language of PA. The translation (*) of a modal sentence under a realization ^ is defined
inductively as:
(1) ⊥* = ⊥
(2) P* = P^, where P is a GL sentence letter
(3) (φ # ψ)* = (φ* # ψ*), where # is a binary logical connective and φ, ψ are GL sentences
(4) (¬φ)* = ¬(φ*), where φ is a GL sentence
(5) (□φ)* = Bew(φ*), where φ is a GL sentence
Thus, the translation of a GL sentence is a sentence of PA.
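As an illustration (reusing the toy Formula datatype from section 1.1; this is not the thesis's implementation), the translation under a realization can be written as a direct recursion; PA sentences are represented here simply as strings, and the quoting of the argument of Bew is a stand-in for the Gödel numeral of the translated subsentence:

```python
# An illustrative sketch of the translation * of GL sentences under a
# realization ^, with PA sentences represented as strings.  The toy datatype
# has no falsum constructor, so clause (1) is not represented.
def translate(phi, realization):
    """realization: dict mapping sentence-letter index -> PA sentence (string)."""
    if isinstance(phi, Atom):
        return realization[phi.index]                              # P* = P^
    if isinstance(phi, Not):
        return f"~({translate(phi.sub, realization)})"             # (¬φ)* = ¬(φ*)
    if isinstance(phi, Bin):
        return (f"({translate(phi.left, realization)} {phi.op} "
                f"{translate(phi.right, realization)})")           # (φ # ψ)* = (φ* # ψ*)
    if isinstance(phi, Box):
        # (□φ)* = Bew applied to (the code of) φ*; quoting stands in for the numeral
        return f"Bew('{translate(phi.sub, realization)}')"
    raise TypeError(phi)
```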
Solovay's completeness theorem: GL ⊦ φ if and only if for all realizations ^, PA ⊦ φ* ([8]).
An arbitrary sentence of GL gets mapped to different sentences of PA under different realizations.
However, any sentence of the form □φ gets mapped into a sentence of the form Bew(x) under all
realizations. Since GL proves exactly those statements whose translations are provable under all
realizations, it reasons about provability inside PA.
1.3 Natural Deduction
Natural deduction (ND) calculi formalize logical reasoning via inference rules which
correspond to steps used in informal proofs. Thus, when mathematical proofs are formalized in
ND systems, their structure can usually be preserved. The following sections describe two
classical propositional systems - Prawitz' natural deduction system ([7]) and Sieg's intercalation
calculus ([10]). First, Prawitz' system is presented, and then the concept of normal proof is
explained, after which the intercalation calculus is introduced, which allows one to search for normal
proofs directly.
1.3.1 Prawitz’ Natural Deduction Calculus
The classical propositional logic system considered here uses the logical connectives &,
∨, →, ↔ and ¬. (In AProS, falsum (⊥) is used only in specifying proofs. In some systems, e.g. Prawitz's system,
falsum is taken as a basic atomic formula and the negation of A is defined using falsum as A → ⊥.)
Inference rules:
In a natural deduction calculus, the properties of each connective are expressed using a
pair of inference rules - an introduction rule and an elimination rule. Each rule infers one formula
called the conclusion from one or more formulae called the premises. An introduction rule for a
particular connective, #, is used to infer a formula whose main connective is #. An elimination
rule for a particular connective, #, uses a formula whose main connective is # as a premise
(along with other formulae) to infer a conclusion. These rules are specified in the table below.
Inference rules are written with premises above their conclusion with a horizontal line
between them as in the table or as 〈φl, φ2, …, φn | ψ 〉 where φi (0 ≤ I ≤ n) are the premises and ψ is
the conclusion. The premise φ in the rules 〈φ, (φ→ψ) | ψ〉, 〈φ, (φ↔ψ) | ψ〉, the premise χ in the
rule 〈( φ∨ψ), χ, χ | χ 〉, and the premise ψ in the rule 〈ψ, (φ ↔ψ) | φ〉, are called minor premises.
A premise that is not a minor premise is called the major premise. Note that some rules include
derivations, which are defined and explained below. These inference rules discharge
assumptions. This is indicated in the inference rules using "[ ]".
&I: 〈φ, ψ | φ&ψ〉
&E: 〈φ&ψ | φ〉 and 〈φ&ψ | ψ〉
∨I: 〈φ | φ∨ψ〉 and 〈ψ | φ∨ψ〉
∨E: 〈φ∨ψ, χ, χ | χ〉, where the first minor premise χ is obtained by a derivation D1 from [φ] to χ and the second by a derivation D2 from [ψ] to χ; the assumptions φ and ψ are discharged.
→I: 〈ψ | φ→ψ〉, where ψ is obtained by a derivation D from [φ] to ψ; the assumption φ is discharged.
→E: 〈φ, φ→ψ | ψ〉
↔I: 〈ψ, φ | φ↔ψ〉, where ψ is obtained by a derivation D1 from [φ] to ψ and φ by a derivation D2 from [ψ] to φ; the assumptions φ and ψ are discharged.
↔E: 〈φ, φ↔ψ | ψ〉 and 〈ψ, φ↔ψ | φ〉
¬I: 〈⊥ | ¬φ〉, where ⊥ is obtained by a derivation D from [φ] to ⊥; the assumption φ is discharged.
¬E: 〈⊥ | φ〉, where ⊥ is obtained by a derivation D from [¬φ] to ⊥; the assumption ¬φ is discharged.
⊥I: 〈φ, ¬φ | ⊥〉, where φ must be different from ⊥.
Formula-trees
A rule application obtains a consequence from some premises by means of one of the
inference rules, i.e., a particular inference rule is applied to particular formula instances
(premises) to infer a particular formula instance (conclusion). This is indicated by writing the
premises and the consequence (below the premises) separated by a horizontal line (as in a rule
application).
Informally, rule applications can be joined so that the conclusion of one rule application
acts as the premise of another rule application to infer a new conclusion. To ensure that the rules
are not joined in a circular fashion, the structure built by joining rule applications should be a
tree. That will be ensured by the inductive generation of formula trees discussed next.
Formula-trees are trees whose nodes are formulae. Nodes are distinct from one another,
though they may be associated with the same formula. Formula-trees are defined inductively as:
i. A formula is a formula tree.
ii. If Ф1, Ф2, …, Фn is a sequence of formula trees, then so is
Ф1 Ф2 … Фn
ψ
where there exists a rule application R whose premises are the roots of the Фi and whose
conclusion is ψ (the root of the newly generated tree). The edges of the formula tree are given by the
horizontal line in the rule application that separates the premises from the conclusion. The edges
of the trees can be annotated with the name of the rule that was applied.
Notions concerning trees (to be applied to formula trees):
This section defines some terms pertaining to formula trees. The definitions presented
here differ in some cases from the ones given by Prawitz in [7]. An occurrence is a formula at a
certain place (node) in the formula-tree. Thus the same formula appearing twice at two distinct
nodes would correspond to two different occurrences. A path from a formula occurrence φ to a
formula occurrence ψ in a tree is a sequence 〈φ1,…,φn〉 where φ1 is φ, φn is ψ, and φi is a premise
of a rule application whose conclusion is φi+1. A formula occurrence φ is above a formula
occurrence ψ in a formula tree (ψ is said to be below φ) if ψ occurs in a path from φ to the goal
(root of the formula tree) and ψ is distinct from φ. A top formula (also called assumption) in a
formula tree is an occurrence that does not have an occurrence above it, i.e., it is a leaf node. The
end formula (also called the conclusion) in a formula tree is an occurrence that does not have an
occurrence below it, i.e., it is the root of the tree. The height of a formula tree is the number of
formula occurrences on the longest path (from any occurrence to any other occurrence). A
branch is a path whose first formula is a top formula. The subtree of a formula tree F determined
by an occurrence of φ is the tree obtained from F by removing all the occurrences in it except φ
and the ones above φ.
Proof-trees
A proof-tree (or a proof or a deduction or a derivation) for φ is a formula tree whose
conclusion is φ and every top formula of the tree is a discharged assumption. If there exists a
proof for φ, then φ is said to be provable or to be a theorem (written as ⊦φ). A proof-tree (or a
proof) for φ from assumptions Γ is a formula tree such that every top formula is either a
discharged assumption or an element of Γ. If there exists a proof of φ from Γ, then φ is said to be
provable from Γ (written as Γ⊦φ). To determine whether a formula is provable from some
assumptions, a proof search algorithm has to search the space of formula trees to find a proof tree
from the assumptions to the conclusion. The space that an algorithm searches is called its search
space. The search space consisting of all formula trees is large and unwieldy, but it can be
reduced by considering normal proofs, as explained below.
Normal proofs
A proof is said to be normal if it contains no formula occurrence that is both the
conclusion of an application of an I-rule (or the falsum rule) and the major premise of an
application of an E-rule. Prawitz established that in a restricted version of classical logic (where the
language is restricted to ⊥, &, → and ∀, and the ¬E-rule allows only atomic formulae as its conclusion)
a proof of φ from Γ can be converted into a normal proof of φ from Γ [7]. The full result for the system
described above was given by a number of people (see references in Troelstra and
Schwichtenberg [11] as well as Byrnes [3]).
In a normal proof, every formula is either a subformula of an open assumption or of the
conclusion (or the negation of a formula that has been inferred by ¬E). A proof search algorithm
can make use of this fact to restrict its search space. However, in the natural deduction system
described, the normal proofs are not inductively specified, i.e., there is no direct way to generate
all and only the normal proofs. Intercalation calculi were introduced to specify and allow the
search for normal proofs.
1.3.2 Intercalation Calculus
The intercalation (IC) calculus ([10]) provides a framework which allows one to search
directly for normal natural deduction proofs. The object of proof search is to find a normal proof
of the sentence G (called the goal) from a sequence of sentences α (called the assumptions).
The proof search in the intercalation calculus reflects the informal idea of finding a
way to close the gap between the goal and the assumptions via logical rules. IC-rules are divided
into categories as follows: (i) Elimination rules which are applied to available assumptions to
infer new formulae which can then be added to the available assumptions to infer the current
goal. In the case of ∨E, the rule adds one disjunct to the available assumptions, then the other,
requiring the goal to be proved using either disjunct. (ii) Introduction rules which have as their
conclusion the current goal. The premises of these rules generate new goals to be proven from
the available assumptions. Rules like →I and ↔I add to the available assumptions, in addition to
giving a new goal. (iii) Negation rules – to be used in indirect reasoning.
Notations
Let capital letters G, H…, and the lower case Greek letters φ (and φi (i ∈ N)) denote
individual formulae. Let lower case Greek letters α, β… denote finite sequences of formulae. Let
αβ denote the concatenation of the sequences α and β.
Let α,G denote the sequence α
concatenated with the sequence containing the single formula G. Let φ ∈ α denote that φ is an
element of the sequence α.
IC-rules
IC-rules operate on triplets (called questions) of the form 〈α; β? G〉
where α is the sequence of available assumptions;
G is the current goal;
β is a sequence of formulae obtained by &-elimination and →-elimination from
elements in α.
The question 〈α?G〉 denotes 〈α;{}?G〉.
The IC-rules are presented as (question1 ⇒ questioni (i ∈ N)); an IC-rule applied to question1
generates the new question(s) questioni. As mentioned, the rules are categorized
into elimination, introduction and negation rules.
Elimination rules:
&-E: α; β ? G, (φ1&φ2) ∈ αβ, φi ∉ αβ ⇒ α; β,φi ? G   (i = 1 OR 2)
∨-E: α; β ? G, (φ1∨φ2) ∈ αβ, φ1 ∉ αβ, φ2 ∉ αβ ⇒ α,φ1; β ? G AND α,φ2; β ? G
→-E: α; β ? G, (φ1→φ2) ∈ αβ, φ2 ∉ αβ ⇒ α; β ? φ1 AND α; β,φ2 ? G
↔-E: α; β ? G, (φ1↔φ2) ∈ αβ, φi ∉ αβ ⇒ α; β,φi ? G AND α; β ? φ3-i   (i = 1 OR 2)

Introduction rules:
&-I: α; β ? (φ1&φ2) ⇒ α; β ? φ1 AND α; β ? φ2
∨-I: α; β ? (φ1∨φ2) ⇒ α; β ? φ1 OR α; β ? φ2
→-I: α; β ? (φ1→φ2) ⇒ α,φ1; ? φ2
↔-I: α; β ? (φ1↔φ2) ⇒ α,φ1; ? φ2 AND α,φ2; ? φ1

Negation rules:
¬E: α; β ? φ, φ ≠ ⊥ ⇒ α,¬φ; ? ⊥
¬I: α; β ? ¬φ ⇒ α,φ; ? ⊥
⊥I: α; β ? ⊥, ¬φ ∈ F(α) ⇒ α; β ? φ AND α; β ? ¬φ
where F(α) is the class of all strictly positive subformulae of elements in α.
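As an implementation-flavoured illustration (not the AProS code; it reuses the toy Formula datatype from chapter 1, and all names are chosen for illustration), a question 〈α; β ? G〉 can be represented as a triple, and an IC-rule as a function returning the alternative sets of new questions it generates; &-I and →-I serve as samples:

```python
# An illustrative sketch of questions <alpha; beta ? G> and two IC-rules.
# Each rule returns a list of alternatives; each alternative is a tuple of
# questions that must all be answered for that alternative to succeed.
from typing import NamedTuple

class Question(NamedTuple):
    alpha: tuple    # available assumptions
    beta: tuple     # formulae obtained by &-/→-elimination from alpha
    goal: object    # the current goal G

def and_intro(q):
    """&-I: alpha; beta ? (phi1 & phi2)  =>  alpha; beta ? phi1  AND  alpha; beta ? phi2."""
    if isinstance(q.goal, Bin) and q.goal.op == '&':
        return [(Question(q.alpha, q.beta, q.goal.left),
                 Question(q.alpha, q.beta, q.goal.right))]
    return []

def imp_intro(q):
    """→-I: alpha; beta ? (phi1 -> phi2)  =>  alpha, phi1; ? phi2 (beta is reset)."""
    if isinstance(q.goal, Bin) and q.goal.op == '->':
        return [(Question(q.alpha + (q.goal.left,), (), q.goal.right),)]
    return []
```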
IC-tree
Informally, we can join IC-rule applications similarly to rule applications in ND, i.e., IC-rule
R1 can act on a question generated by IC-rule R2. Rule applications are joined to form IC-trees.
Though this is similar to the notion of a formula tree, there are differences, as explained
below. An IC-tree is a tree whose nodes are questions (i.e., they contain available assumptions
and the goal to be proven) and whose edges are the IC-rules connecting them. At each node, all
possible rule applications are used to extend the tree.
The IC-tree for 〈α; β ? G〉 is specified inductively as the tree generated by applying (all
applicable) IC-rules to it or to the leaves of an already obtained partial tree (the leaves have to be
"non-terminal", as explained in the algorithm below).
The algorithm to construct an IC-tree for 〈α ? G〉 works as follows:
i. Create a tree with 〈α ? G〉 as its root.
ii. At each leaf node 〈α; β ? G〉, the rules are applied as follows:
(a) If G ∈ αβ, then the branch containing the node is closed with a Y (terminal node). (A branch is defined as for formula trees, as a sequence of joined rule applications starting from a leaf node.)
(b) If G ∉ αβ, and every applicable rule leads to a node that is equivalent to one that is
already present in the branch, the branch containing the node is closed with an N
(terminal node). (A node 〈α; β ? G〉 is equivalent to a node 〈α′; β′ ? G〉 if the sets of formulae present in the sequences αβ and α′β′ are identical.)
(c) If G ∉ αβ, and there is an applicable rule that leads to a node that is not equivalent to
any that is already present in the branch, the tree is extended at the node by applying
all applicable rules and adding all questions generated (non-terminal nodes).
The algorithm terminates in step (ii) when there are no leaf nodes that are questions, i.e., all
the branches are closed with terminal nodes. There are finitely many IC-rules, and finitely many
formulae to which they can be applied. Further, any new formula contained in a newly generated
question is a subformula of a formula in an existing question. Since repetitions are not allowed, the IC-tree is
finite and the algorithm always terminates, generating the IC-tree.
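The closure behaviour of this algorithm can be sketched as follows (a much simplified illustration, not the AProS search; it assumes the Question representation and rule format of the previous sketch): a question is answered Y when its goal is already available, alternatives that only repeat questions already on the branch are skipped, and a question with no successful alternative is answered N.

```python
# A simplified sketch of the Y/N evaluation behind the IC-tree algorithm.
def equivalent(q1, q2):
    """Same goal and same set of available formulae (cf. case (b) above)."""
    return q1.goal == q2.goal and set(q1.alpha + q1.beta) == set(q2.alpha + q2.beta)

def closes(question, rules, branch=()):
    """Return True if the question evaluates to Y, i.e., it can be answered below this node."""
    if question.goal in question.alpha + question.beta:
        return True                                    # case (a): Y-closed
    new_branch = branch + (question,)
    for rule in rules:
        for subgoals in rule(question):                # one alternative rule application
            if any(any(equivalent(q, old) for old in new_branch) for q in subgoals):
                continue                               # would only repeat a question on the branch
            if all(closes(q, rules, new_branch) for q in subgoals):
                return True
    return False                                       # case (b): N-closed

# e.g. proving P2 -> P1 from the assumption P1:
# closes(Question((Atom(1),), (), Bin('->', Atom(2), Atom(1))), [and_intro, imp_intro])  ->  True
```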
IC-proof
An IC-proof of G from α is a subtree T of the IC-tree Σ for 〈α; ? G〉 satisfying:
i. 〈α; ? G〉 is the root of T,
ii. all the branches of T are Y-closed branches of Σ,
iii. every question node (node corresponding to a question) in T that is not the root is
followed by exactly one rule.
The definition can be extended to an IC-proof of G from α; β in the obvious way.
IC-rules for classical propositional logic can be proved to be sound and complete. The
completeness result states that the IC-tree for G from assumptions α contains either an IC-proof,
or a branch from which a counterexample to the inference of G from α can be constructed ([9]).
It can be proved inductively on the height of IC-proofs that any IC-proof of G from
assumptions in α can be transformed into a normal natural deduction proof of G from the same
assumptions. Thus, if an IC-proof exists for G from assumptions in α, then a normal natural
deduction proof exists. Using this proof extraction theorem and the completeness result, a
sharpened completeness result can be given stating that the IC-tree for the question 〈α; ? G〉
allows us either to determine a normal proof of G from α or to construct a counterexample to the
inference from α to G ([10]). An automated algorithm for efficiently generating an IC-subtree
that is an IC-proof is implemented in AProS ([1]).
The IC calculus is based on Gentzen's sequent formulation of natural deduction systems.
The main differences between Gentzen's sequent formulation and the above formulation are as
follows. First, the elimination rules are applied only on the left side of the “?” and introduction
rules only on the right side (which ensures normality). Second, negation is taken to be a basic
connective, with ⊥ taken to be a placeholder for contradictions (used in indirect arguments).
Thus a variety of strategy restrictions can be brought to bear.
IC-rules (with soundness and completeness results) for modal logical systems S4, S5 and
GL are presented in the following chapter.
CHAPTER 2
Proof search calculi
This chapter describes the theoretical background for the proof search in the modal
logical systems S4, S5 and GL. The rules for these systems were formulated in the intercalation
calculus and implemented in the automated theorem prover AProS [1]. The first section introduces
the systems, giving the axioms and semantics. The second section gives the natural deduction
rules that would allow normal derivations. The third section gives the IC rules, their soundness
and completeness proofs. The natural deduction rules were formulated based on the rules for S4
and S5 given by Prawitz [7]. The IC rules and completeness theorems are based on the rules and
completeness results given for S4 by Sieg and Cittadini in [9].
2.1 Systems S4, S5 and GL
S4, S5 and GL are classical propositional systems. Their language was presented in
chapter 1.
2.1.1 Axioms for S4, S5 and GL:
As mentioned earlier, S4, S5 and GL are normal systems. Hence their axioms and rules include
(i) necessitation (from ⊦ φ conclude ⊦ □φ),
(ii) the distribution axioms (from ⊦ □(φ→ψ) conclude ⊦ □φ→□ψ).
In addition, the system S4 contains
(a) □φ → φ,
(b) □φ → □□φ.
In addition to (i), (ii), (a) and (b), the system S5 contains:
(c) ◊φ → □◊φ (i.e., ¬□¬φ → □¬□¬φ).
In addition to (i) and (ii) of the normal systems, GL contains:
(d) □(□φ → φ) → □φ.
The systems are closed under modus ponens.
As mentioned in chapter 1, (i) corresponds to derivability condition D1, (ii) corresponds to
D2, and (d) corresponds to DL. Note that (b) corresponds to D3, and this axiom can be proved in
GL. GL does not have the axiom φ→□φ (necessitation, i.e., if GL ⊦ φ then GL ⊦ □φ, is not
internalized), as this would correspond to PA ⊦ φ→Bew(φ) (which does hold if φ is a Σ1 sentence).
2.1.2 Semantics
Kripke models for modal logic were presented in chapter 1.1.3. Given a model M, the
properties of its accessibility relation R make it a model for a particular modal logical system. φ
can be derived in a modal logical system L if φ is true in every world, in every model of L (i.e., a
model with the accessibility relation corresponding to L), under every valuation. Here, we
describe the properties of accessibility relations that correspond to the axioms of S4, S5 and GL.
Axioms true in any normal modal logical system:
Necessitation (D1): By the definition of the valuation relation, all tautologies are true in all
worlds (including those accessible from the current one). So, □φ is true for every tautology φ.
Distribution (D2): Assume w⊩□(φ→ψ) and w⊩□φ. Let v be an arbitrary world such that wRv.
We have v⊩φ→ψ and v⊩φ. So, v⊩ψ. Generalizing on v, w⊩□ψ; hence w⊩□(φ→ψ) → (□φ→□ψ).
Axioms corresponding to properties of the accessibility relation R of a model M:
(□φ→φ) corresponds to reflexivity: For any world w, assume w⊩□φ. If R is reflexive, wRw, and
so w⊩φ. Conversely, if R is not reflexive, it is possible that NOT wRw; so, it is possible to have
w⊩□φ without w⊩φ.
(□→□□) or(D3) corresponds to transitivity: For any world w, assume w⊩□For arbitrary
world u such that wRu, we have u⊩. If R is transitive, for any arbitrary world v such that uRv,
we have wRv, and v⊩Generalizing, we have u⊩□ Generalizing again, we can conclude
w⊩□□Conversely, if R is not transitive, it is possible to have worlds w, u and v such that wRu
and wRv but NOT wRu, with w⊩□u⊩but v⊩ Thus, w⊩□□
(φ → □◊φ) corresponds to symmetry: Consider a world w with w⊩φ. Let u be an arbitrary
world with wRu. If R is symmetric, uRw and so u⊩◊φ. Generalizing, w⊩□◊φ. Conversely,
if R is not symmetric, it is possible to have wRu without having uRw; then u⊩¬◊φ, and hence
w⊩¬□◊φ, may be true.
(◊φ → □◊φ) corresponds to symmetry and transitivity: (◊φ → □◊φ) can be
derived from (φ → □◊φ) and (□φ → □□φ). From (φ → □◊φ) we can derive (◊φ →
□◊◊φ). From (□φ → □□φ) we can derive (◊◊φ → ◊φ), and hence (□◊◊φ →
□◊φ); and so we can conclude (◊φ → □◊φ).
(□(□ → ) → □) or (DL) corresponds to Converse-well-foundedness and transitivity: For any
world w, assume w⊩□ Let X be the set of worlds accessible from w in which is false.
Since R is converse-well-founded and X is nonempty, there is an element of X, say u, such that
for any v, if uRv, v is not in X (u is called R-greatest element). If uRv, by transitivity, wRv.
Since v is not in X, v⊩. So, u⊩□and (by the definition of X), u⊩. Thus, u⊩□ → .
By the definition of X, wRu. So, w⊩□(□→ →□ To prove the converse, (a) transitivity:
27
from (□(□→)→□we can prove □→□□using necessitation and distribution, as in
chapter 1) (b) converse-well-foundedness: Assume the contrary. Then, there exists a non-empty
set X with no R-greatest element. Consider a valuation such that for any world u⊩ iff u is not in
X. Consider a world w in X (w⊩. For any x in X with wRx, we have x⊩and, xRy for
some y in X (and y⊩ So, x⊩□So, x⊩□→ For any z not in X with wRy, we have
z⊩and so z⊩□→  So, w⊩□(□ → ). As w is in X, there is an x in X such that wRx (and
x⊩So, w⊩□
For any model M with accessibility relation R, M is a model of S4 iff R is reflexive and
transitive; M is a model of S5 iff R is reflexive, transitive and symmetric; M is a model of GL iff
R is transitive and converse well-founded. (A converse well-founded relation cannot be
reflexive.) It can be proved that if the logic L (S4/S5/GL) proves A, then A is true in all the
worlds of all the models of L. It can also be proved that if A is true in all the worlds of all (finite) models
M of L, then the corresponding logic proves it [2].
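On a finite frame these conditions can be checked mechanically. The following illustrative sketch (not part of the thesis) tests the properties relevant to S4, S5 and GL; it uses the fact that, on a finite frame, transitivity together with irreflexivity yields converse well-foundedness (an infinite ascending chain in a finite set would revisit a world, and transitivity would then make that world access itself).

```python
# Illustrative checks of frame properties on a finite set of worlds W with
# accessibility relation R given as a set of pairs (u, v).
def is_reflexive(W, R):
    return all((w, w) in R for w in W)

def is_symmetric(W, R):
    return all((v, u) in R for (u, v) in R)

def is_transitive(W, R):
    return all((u, w) in R for (u, v) in R for (x, w) in R if v == x)

def is_s4_frame(W, R):
    return is_reflexive(W, R) and is_transitive(W, R)

def is_s5_frame(W, R):
    return is_s4_frame(W, R) and is_symmetric(W, R)

def is_gl_frame(W, R):
    """Finite case: transitive and irreflexive implies converse well-founded."""
    return is_transitive(W, R) and all((w, w) not in R for w in W)
```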
Some consequences:
It can be shown that S4 and S5 have only a small set of distinct modalities (sequences of □s, ◊s
and negations, i.e., formulae containing only these connectives; for ease of reading, we do not
translate ◊s into □s and negations). This can be derived from their axioms (and can be proved in
AProS), but it can also be explained semantically. GL does not have a small set of distinct
modalities (this can be shown semantically). However, the proofs that are possible - such as that of
the second incompleteness theorem - can be explained semantically. Here we give a (mainly)
semantic justification for the distinct modalities and for some GL proofs.
S5: S5 has a few distinct modalities. We first list these, then show that they are indeed distinct,
and then show that there are no other modalities, i.e., all other modalities reduce to one of these.
The distinct modalities of S5 are unboxed formulae, formulae beginning with box and
formulae beginning with diamond, and their negations (i.e., for the atomic formula P, the distinct
modalities are P, □P, ◊P, ~P, ~□P, ~◊P; all other modal formulae involving P and no
connectives other than □, ◊ and negation, such as □□P, are equivalent to one of these).
To see that the modalities are distinct: P is distinct from □P as P can be true in the current
world but false in an accessible world (the converse is false). P is distinct from ◊P since P may
be false in the current world but true in some accessible world (the converse is false). □P is
distinct from ◊P, as P may be true in only one of multiple accessible worlds (the converse is
false). Negations are distinct for similar reasons.
To see that these are the only distinct modalities: We show that other modalities reduce
to one of these: Repeated □s reduce to one □ (i.e., □□…□P is equivalent to □P). □□…□P implies
□P because the accessibility relation is reflexive, and the converse is true since the relation is
transitive. Repeated ◊s reduce to one ◊ (i.e., ◊◊…◊P is equivalent to ◊P). ◊P implies ◊◊…◊P
because the accessibility relation is reflexive, and the converse is true since the relation is
transitive. The modality □◊ reduces to ◊ (i.e., □◊P is equivalent to ◊P). □◊P implies ◊P since the
accessibility relation is reflexive, and the converse is true since the relation is transitive and
symmetric (i.e., consider worlds w and v such that wRv and v⊩P; then for all worlds u such
that wRu, we will have uRv). The modality ◊□ reduces to □, (i.e., ◊□P is equivalent to □P). □P
implies ◊□P since the accessibility relation is reflexive, and the converse is true since the
accessibility relation is an equivalence (i.e., consider worlds w and v such that wRv and v⊩P.
But, for every world u such that wRu, we also have vRu). Negations can be reasoned similarly.
S4: S4 has a few distinct modalities. We first list these, then show that they are indeed distinct,
and then show that there are no other modalities, i.e., all other modalities reduce to one of these.
Given an atomic formula P, the distinct modalities are: P, □P, ◊P, ◊□P, □◊P, ◊□◊P,
□◊□P and their negations.
To see that the modalities are distinct: P, □P and ◊P are distinct, as every model of S5 is
also a model of S4. ◊□ and □◊ need symmetry to reduce to other modalities, (in a model with
asymmetrical accessibility relation, they are different). Symmetry is also needed to reduce ◊□◊
and □◊□.
To see that these are the only distinct modalities: We show that the other modalities
reduce to one of these. Repeated □s (◊s) reduce to one □ (◊) – the proof given above holds, as it
does not need the accessibility relation to be symmetric. □◊□◊P reduces to □◊P: To show that
□◊□◊P implies □◊P, note that □◊P implies ◊P due to reflexivity, and so □◊□◊P reduces to
□◊◊P which reduces to □◊P; to see the converse, consider arbitrary worlds u, v, and w such that
uRv and vRw such that w⊩P. Then we have w⊩□◊P (if not, there is some world w1 with wRw1
where ◊P is false. But, by transitivity, uRw1. So, □◊P cannot be true at u.) and so, u ⊩□◊□◊P.
The next reduction is: ◊□◊□P reduces to ◊□P. To see that ◊□◊□P implies ◊□P, consider arbitrary
worlds u, v and w such that wRu and uRv and v⊩◊□P. But for every world x such that vRx, we
also have wRx. So, w⊩◊□P. To see the converse, consider arbitrary worlds w and u such that
wRu and u⊩□P. If u has no accessible worlds other than itself, then if u⊩□P, we also have u⊩
□◊□P, else if there is a world v distinct from u such that uRv, v⊩□◊□P (since by transitivity,
v⊩□P and vRv by reflexivity). So w⊩◊□◊□P.
GL: GL does not have a small set of distinct modalities, because □P is distinct from □□P and so
on. This can be seen in a model with two worlds u and v such that uRv. If P is not true at v, □P is
false at u; but v⊩□P, since v has no accessible worlds, and so u⊩□□P. (Thus □□P does not imply
□P; the implication fails because the accessibility relation is not reflexive.)
Further, some theorems of GL can be explained semantically.
Some properties:
Unlike S4 and S5, models of GL do not have a reflexive accessibility relation. So, it is
possible for a world to have no accessible worlds (i.e., no consistent accessible world, as every
world is by definition consistent). Such a world forces □A for any A, including falsum (since
□A is provable iff the translation of A is provable in PA, this would correspond to PA being
inconsistent).
A statement of the form ¬□A implies that there is at least one accessible (consistent) world,
and this corresponds to the consistency of PA.
A statement of the form □◊A implies that in every accessible world, ◊A is true. Consider an
arbitrary world w with w⊩□◊A, and a world v with wRv. Since v⊩◊A and vRv is false, there has
to be a world v1 such that vRv1, with v1⊩A. But wRv1, since the accessibility relation is
transitive, and so v1⊩◊A; and as v1Rv1 is false, there has to be a world v2 … Continuing this
argument, the branch cannot be finite, so the relation would not be converse well-founded. Hence
w has no accessible worlds. But in this case, w⊩□Q for any Q. So □◊A implies □Q.
Conversely, ◊A implies that there is at least one accessible world, and so □◊A is not provable.
2.2 Natural deduction rules
The natural deduction rules for the connective □ are:
□-E: 〈□φ | φ〉
□-I: 〈φ | □φ〉
These rules, as presented, are not sound. For instance, using the □-I rule presented above, we
can prove φ → □φ. So, these rules need restrictions. We present restrictions that are directly
derived from the semantics. However, we show that these are more suited to the intercalation
calculus than to natural deduction. Then, we present Prawitz' syntactic restrictions (which we
motivate using the restrictions we derived semantically).
In a modal logical system L, φ can be derived from assumptions Γ if, in every model of L
(i.e., a model with the accessibility relation corresponding to L) under every valuation, φ is true
in every world where all of Γ are true. Consider a model M of L and a world w at which Γ is true. If w⊩□φ, then in every
world v such that wRv, v⊩φ. Since we are considering all valuations, φ has to be entailed by the
formulae true in v that are forced by w, i.e., by the ψ such that w⊩□ψ. Using this, we can formulate the
following rule:
"□φ can be concluded using □I from □ψ1, …, □ψk and a derivation of φ from the assumptions ψ1, …, ψk."
This rule is sound, but not complete for the logics we consider. For instance, we cannot prove
(□A→□□A). This is because the rule does not take into account the properties of the accessibility relation
of the logics. If the accessibility relation is transitive and □A is true in w, then □A and A are true
in any accessible world (let wRv and w⊩□A; for any u, if vRu, then wRu, so u⊩A; so v⊩□A).
If the accessibility relation is symmetric and transitive then, in addition to what we have above, if
¬□A is true in w, ¬□A is true in any accessible world v (let wRv and w⊩¬□A; then for some u with
wRu, u⊩¬A; but vRw by symmetry and hence vRu by transitivity, so v⊩¬□A). If the
accessibility relation is transitive and converse well-founded, Löb's rule allows □φ to be
concluded using □I from a derivation of φ from the assumption □φ. Using this, we formulate the
following rules for the connective □ (□-E corresponds to reflexivity of the accessibility relation
and is allowed without any restrictions in S4 and S5, but not in GL).
□-I (S4): □φ can be concluded using □I from □ψ1, …, □ψk and a derivation of φ from the
assumptions □ψ1, …, □ψk (from which the ψi can be derived using □-E); in the notation of chapter 1:
〈□ψ1, …, □ψk, φ | □φ〉, where φ is obtained by a derivation from the discharged assumptions [□ψ1], …, [□ψk].
Here, the main proof can be thought of as corresponding to a world w, and the subderivation of φ
from the assumptions □ψ1, …, □ψk can be thought of as a proof in a world v such that wRv.
□-I (S5): □φ can be concluded using □I from □ψ1, …, □ψk, ¬□χ1, …, ¬□χm and a derivation of φ
from the assumptions □ψ1, …, □ψk and ¬□χ1, …, ¬□χm.
□-I (GL): □φ can be concluded using □I from □ψ1, …, □ψk and a derivation of φ from the
assumptions □ψ1, …, □ψk, ψ1, …, ψk and □φ. (Here we need the ψi as well, since we do not have a
separate elimination rule.)
[□1],[1]…[□k],[k],[□φ]
□1
…
φ
□k
□φ
In the derivation, the part of the derivation that is circled in blue can be thought of as a proof in
the “object theory”. Since we have an extra assumption □φ, one proves, really, □φ → φ. Since
this implication is “proved in the object theory”, □(□φ → φ) can be concluded. Then, using
Löb’s rule, □φ can be deduced. Instead of proving this implication, and using Löb’s rule, the
formulation presented here simply allows □φ to be used as an assumption, so the rule reads:
□I/E:□can be proved from□i and a derivation of φ from assumptions □i, i and □
These rules are sound and complete, and give normal proofs. However, they but have
an unbounded number of premises (e.g., in the figure above, the instance of the inference rule
has k+1 premises). This is undesirable for natural deduction, but not a problem in the case of the
intercalation calculus that uses a modification of this.
So, Prawitz in [7] formulates □-I (for S4 and S5) as follows:
2.2.1 Prawitz' restrictions on □-I
Prawitz gives three versions of □-I. Only the last version gives normal proofs. For
simplicity, we explain the motivation and the details of the other versions only for S4. (Prawitz
gives rules for S4 and S5; these place no restriction on □-E. When we give the rules for GL, we
give restrictions for □-E as well.)
Version 1: Consider the subderivation of φ in the □-I rule formulated above. All of its
assumptions are of the form □ψ. Using this, a version of the rule can be formulated: if φ can
be proved from □ψi (for 1 ≤ i ≤ n), then □φ can be concluded from φ using □I. (For S5, we include
the negations of such formulae as well; for GL, we include □φ and the ψi.)
This formulation is sound and complete, but does not give normal proofs. E.g., in proving
□(A&B) from □A&□B, to apply the □-I rule to A&B, we have to prove A&B from □A and □B
(but not from □A&□B). So, we apply →I twice to get □A→(□B→□(A&B)); we then prove □A and □B
from □A&□B separately and use →E to finish the proof. This proof is not normal.
To get around this, Prawitz introduces the concept of essentially modal formulae. If ψ is a
formula for which the system can prove ψ → □ψ, then ψ is an essentially modal formula.
A formula ψ is essentially modal with respect to S4 if ψ is of the form:
i. □χ,
ii. ⊥,
iii. (ψ1 & ψ2), where ψ1 and ψ2 are essentially modal,
iv. (ψ1 ∨ ψ2), where ψ1 and ψ2 are essentially modal.
Essentially modal formulae with respect to S5 are S5 formulae ψ such that ψ is either
(i) an essentially modal formula with respect to S4, or
(ii) of the form ¬χ, where χ is essentially modal with respect to S5.
Essentially modal formulae with respect to GL are those of S4.
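For concreteness, the definition just given can be transcribed directly as a recursive check (an illustrative sketch reusing the toy Formula datatype from chapter 1; Falsum is an assumed extra constructor standing for ⊥, which that toy datatype did not include):

```python
# Illustrative check of "essentially modal" formulae, following the
# definition above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Falsum:
    pass

def essentially_modal_s4(phi):
    """S4 (and GL): of the form □χ, ⊥, or a &/∨ combination of essentially modal formulae."""
    if isinstance(phi, (Box, Falsum)):
        return True
    if isinstance(phi, Bin) and phi.op in ('&', 'v'):
        return essentially_modal_s4(phi.left) and essentially_modal_s4(phi.right)
    return False

def essentially_modal_s5(phi):
    """S5: as for S4, or the negation of an S5 essentially modal formula."""
    if essentially_modal_s4(phi):
        return True
    return isinstance(phi, Not) and essentially_modal_s5(phi.sub)
```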
Version 2: If φ can be proved from essentially modal formulae, then □φ can be concluded from φ using □I.
Though this version is better than the previous one, it does not always give normal proofs; e.g.,
to prove □□B from A&□B, a normal proof would use □I with the premise □B to conclude □□B;
but □B cannot be derived from essentially modal formulae only (it can be concluded from A&□B, but
this formula is not essentially modal). So, in this version, we have to prove □□B from □B using the
following detour: derive □B→□□B, derive □B from A&□B, and then use →E to finish the
proof. To fix this, version 3 was introduced.
Version 3: If Σ is a proof of φ, then □φ can be concluded from φ (using □I), provided that on every
path from an open assumption to the premise φ there is a formula occurrence ψ such that
i. ψ is essentially modal with respect to S4 (S5/GL), and
ii. ψ does not depend on any assumption χ on which φ does not depend.
Note: These restrictions were formulated for S4 and S5, which have a □-E rule. In the case of GL, we
also have the following restriction: on the proof tree described above, □-E is allowed only on the (modal)
formula ψ. (□-E is allowed only when its conclusion is used in deriving a formula on which □I
is to be applied. Since ψ occurs in what corresponds to an accessible world, this corresponds to
the definition of □.)
2.3 Intercalation calculus rules
These rules can be seen to be similar to the natural deduction rules above that used an
unbounded number of assumptions. (In the elimination rule we only consider □ψ taken from αβ;
if □ψ were derived using introductions, the introduction could be pushed inside the current rule,
and if it were derived using an inversion, the current rule could be pushed into it. The details are
not given, since we give a completeness proof.)
2.3.1 IC rules for S4
□E: α; β ? G, □δ ∈ αβ, δ ∉ αβ ⇒ α; β,δ ? G
□I: α; β ? □φ ⇒ β□(αβ) ? φ
where β□(αβ) is the set of formulae □γ such that □γ ∈ αβ.
This is as presented in [10].
2.3.2 IC rules for S5
□E: α; β ? G, □δ ∈ αβ, δ ∉ αβ ⇒ α; β,δ ? G
□I: α; β ? □φ ⇒ β□(αβ) ? φ
where β□(αβ) is the set of formulae γ of the form □χ or ¬□χ such that γ ∈ αβ.
Note that the rules for S4 and S5 differ only in the definition of β□(αβ).
2.3.3 IC rules for GL
Here, as in the natural deduction calculus, we have a combined □-E/I rule.
□E/I: α; β ? □δ ⇒ β□(αβ, □δ) ? δ
where β□(αβ, □δ) is the set of formulae γ of the form
i. □χ such that □χ ∈ αβ,
ii. χ such that □χ ∈ αβ,
iii. □δ.
Note that the rules of S4 and GL are similar, though this may not be apparent. There are
only two differences: (i) in GL, due to Löb's theorem, the conclusion of □I can be used as an
assumption to derive its premise, and (ii) the □E rule is combined with the □I rule in GL.
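The difference between the three rules lies entirely in which formulae are carried into the new question. The following illustrative sketch (not the AProS implementation; it reuses the toy Formula datatype from chapter 1) computes that set for each system, following the rules above:

```python
# Illustrative sketch of the context carried into the new question by the
# □I (resp. □E/I) rules above, for a goal of the form □δ.
def boxed_context(alphabeta, goal, system):
    """alphabeta: tuple of available formulae; goal: the Box formula □δ; system: 'S4', 'S5' or 'GL'."""
    assert isinstance(goal, Box)
    keep = []
    for f in alphabeta:
        if isinstance(f, Box):
            keep.append(f)                      # □γ stays available (S4, S5, GL)
            if system == 'GL':
                keep.append(f.sub)              # GL: γ as well (no separate □E rule)
        elif system == 'S5' and isinstance(f, Not) and isinstance(f.sub, Box):
            keep.append(f)                      # S5: ¬□γ also stays available
    if system == 'GL':
        keep.append(goal)                       # GL: the conclusion □δ may be assumed (Löb)
    return tuple(keep)
```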
2.4 Soundness
2.4.1 Soundness proof for S4
Soundness theorem: If an S4 IC-tree for Γ ? φ evaluates to Y, then in S4, Γ ⊩ φ.
Proof: By induction on the height of the IC-trees. The classical rules are dealt with as usual.
□I: Let Σ be an IC-tree of height h for φ from the assumptions Γ, and let the last rule
applied be □I. So, φ is of the form □C and the premise of □I is C. Let Σ restricted to C be Σ′. Σ′
is an IC-tree for C from some assumptions Δ; indeed, by the restriction on □I, Δ = β□(Γ), the set of
formulae □γ in Γ. Σ′ is of height h−1, and by the induction hypothesis, Δ ⊩ C. So, Γ ⊩ C. We have
to show that Γ ⊩ □C. Consider an arbitrary model M of Γ. As Δ is a subset of Γ, M is a model of Δ,
and as Δ ⊩ C, C is true in every world w of M. Choose any such world u, and for this fixed u, any v
with uRv; clearly, both u ⊩ C and v ⊩ C. Thus, u ⊩ □C. As u was an arbitrary world of M and M an
arbitrary model of Γ, Γ ⊩ □C.
□E: Let Σ be an IC-tree of height h for φ from the assumptions Γ, and let the last rule
applied be □E. So, φ is of the form C, and the premise of the □E is □C. Let Σ restricted to □C be
Σ′. Σ′ is an IC-tree of height h−1 for □C from the assumptions Γ. By the induction hypothesis,
Γ ⊩ □C. Let M be an arbitrary model of S4, and u a world in which all the elements of Γ are true.
Then, u ⊩ □C. As the accessibility relation R is reflexive, u ⊩ C. Hence, Γ ⊩ C.
2.4.2 Soundness proof for S5
Soundness theorem: If an S5 IC-tree for Γ ? φ evaluates to Y, then in S5, Γ ⊩ φ.
Proof: By induction on the height of IC-trees. The classical rules are dealt with as usual.
□I: Let Σ be an IC-tree of height h for φ from the assumptions Γ, and let the last rule
applied be □I. So, φ is of the form □C, and the premise of the □I is C. Let Σ restricted to C be Σ′.
Σ′ is an IC-tree for C from some assumptions Δ; indeed, by the restriction on □I, Δ = β□(Γ).
Σ′ is of height h−1, and by the induction hypothesis, Δ ⊩ C. So, Γ ⊩ C. We have to show that
Γ ⊩ □C. Consider an arbitrary model M of Γ. As Δ is a subset of Γ, M is a model of Δ, and as
Δ ⊩ C, C is true in every world w of M. Choose any such world u, and for this fixed u, any v with
uRv; clearly, both u ⊩ C and v ⊩ C. Thus, u ⊩ □C. As u was an arbitrary world of M and M an
arbitrary model of Γ, Γ ⊩ □C.
□E: Let Σ be an IC-tree of height h for φ from the assumptions Γ, and let the last rule
applied be □E. So, φ is of the form C, and the premise of □E is □C. Let Σ restricted to □C be Σ′.
Σ′ is an IC-tree of height h−1 for □C from the assumptions Γ. By the induction hypothesis, Γ ⊩ □C.
Let M be an arbitrary model of S5, and u a world in which all the elements of Γ are true. Then,
u ⊩ □C. As the accessibility relation is reflexive, u ⊩ C. Hence Γ ⊩ C.
2.4.3 Soundness proof for GL
Soundness theorem: If a GL IC-tree for Γ ? φ evaluates to Y, then in GL, Γ ⊩ φ.
Proof: By induction on the height of the IC-trees. The classical rules are dealt with as usual.
□E/I: Let Σ be an IC-tree of height h for φ from the assumptions Γ, and let the last rule applied
be □E/I. So, φ is of the form □C, and the premise of the rule is C. Let Σ restricted to C be Σ′. Σ′ is an
IC-tree for C from some assumptions Δ; indeed, by the restriction on the rule, Δ = β□(Γ, □C).
Σ′ is of height h−1, and by the induction hypothesis, Δ ⊩ C. We have to show that
Γ ⊩ □C. Consider an arbitrary model M and an arbitrary world u of M at which all elements of Γ are true.
Let v be a world such that uRv. Since Γ is true at u, for any formula of the form □χ that is
contained in Γ, χ is true at v; and since M is a GL model, R is transitive, so □χ is also true at v.
Thus every element of Δ other than □C is true at v. Since Δ ⊩ C, if □C were also true at v, C would
be true at v; that is, v ⊩ (□C→C). Since v is an
arbitrary world accessible from u, generalizing on v, u⊩□(□C→C). By Löb's rule, u⊩□C. Since u is
an arbitrary world which makes all the elements of Γ true, Γ⊩□C. As the elimination rule is combined
with the introduction rule, the same reasoning covers elimination as well.
2.5 Completeness
The completeness proofs for all three systems proceed along the same lines. The proof for S5
is presented first as it is the easiest. The proofs of the other systems differ slightly from that of
S5, and only these differences are presented.
2.5.1 Completeness of S5 rules
Completeness Theorem: Either the S5 IC-tree for α ? G contains a normal S5 derivation for α
? G or it allows the definition of a counterexample to the inference from α to G.
This is proved directly from the proof extraction theorem and counterexample extraction theorem
proved below.
Proof Extraction Theorem: For any α and G, if the S5 IC-tree for α; β ? G evaluates to Y, then
a normal nd-proof of G from the assumptions in α can be found.
Proof: We prove this by first showing that an IC-derivation D can be extracted from the IC-tree T
for α; β ? G when it evaluates to Y, and then showing that from any IC-derivation D, a normal nd-proof of G from α can be extracted.
To show that an IC-derivation D can be extracted from the IC-tree T for α; β ? G when it evaluates to
Y: An IC-derivation of G from α is a subtree D of the IC-tree T for 〈α; β ? G〉 such that 〈α; β ?
G〉 is the root of D, all the branches of D are Y-closed branches of T, and every question node
(node corresponding to a question) in D that is not the root is followed by exactly one rule. Note
that since T is finite, and every leaf node ends with a Y or an N, this assignment can be
propagated down to all the nodes in the obvious way. Define D as f(height(T)), where f is defined
as: f(0) = 〈α; β ? G〉;
f(2n+1) = g1(f(2n)); f(2n+2) = g2(f(2n+1))
where g1(x) extends the derivation x by the leftmost rule application all of whose premises
evaluate to Y, if one exists, and returns x otherwise; and g2(x) adds the appropriate
questions for the rule added by g1(x) if g1(x) added a rule, and returns x otherwise.
It is obvious that D is an IC-derivation.
To show that from any IC-derivation D, a normal nd-proof of G from α can be extracted:
The proof proceeds by induction on height(D). The base case is where height(D) = 1. In this case,
D consists of just one question α; β ? G where G is in α∪β. The nd-proof for this is just the node
G. If height(D) = h (> 1), the proof proceeds in cases depending on the last rule application used,
with the induction hypothesis stating that from any IC-derivation T with height(T) < h, a
normal nd-proof can be extracted. The propositional rules are dealt with as in [10]. For the
modal rules:
□I: If D is an IC-derivation for α; β ? G such that its last rule application is □I, then the
immediate subderivation of D, say D′, is an IC-derivation for ε(α∪β) ? H, where G = □H. By the
induction hypothesis, we can extract a normal nd-proof P′ from D′. Construct P as follows: the
root of P is G, the immediate subproof of P is P′, and the last inference rule used is □I. P is a
normal nd-proof that is associated with D. Since D′ used ε(α∪β) as its premises, the restriction for
□I in nd-proofs is satisfied (the formulae in ε(α∪β) are essentially modal formulae lying on the
paths from H to the assumptions it uses from α∪β).
□E: If D is an IC-derivation for α; β ? G such that its last rule application is □E, extracting H
from □H ∈ α∪β, then the immediate subderivation of D, say D′, is an IC-derivation for α; β, H ? G. By
induction hypothesis, we can extract a normal nd-proof P′ from D′. Construct P as follows: if P′
contains occurrences of H as open assumptions, then replace them by the inference rule □E with
its premise being □H and conclusion being H. P is a normal nd-proof that is associated with D.
nd-proofs extracted from IC-derivations are normal, because the elimination rules are applied
from above whereas the introduction rules are applied only from below.
Counterexample Extraction Theorem: For any α and G, if the S5 IC-tree for α ? G evaluates to
N, then using this tree, it is possible to construct an S5 model in which all the elements of α are
true but G is false.
Proof: We want to construct a world w in an S5 model M where α is true but G is not. We
construct an S5 model, i.e., a nonempty set W (with w), a binary equivalence relation R and the
relation ⊩. The following describes the construction of each of these.
Structure of the set W: Since there are only finitely many formulae involved, and inaccessible
worlds do not influence each other, a counterexample can be found in a finite model.
Accessibility relation R: every world in W is accessible to every other world.
Construction of the set W: One of the worlds, w, makes α true. The other worlds are constructed
using the fact that □I rule uses the idea of accessible world – if formula F is allowed as an
assumption to prove a formula A to which □I is applied, F is true in all accessible worlds. Thus,
applying □I rules to all formulae to which this can be applied will generate all possible accessible
worlds. The truth of a formula depends only on the truth of its subformulae (in all accessible
worlds), and we need to construct only a finite number of worlds with all possible valuations of
subformulae. This is done as follows:
A subtree of the IC-tree is constructed as follows:
Select a single branch P0, all of whose nodes evaluate to N (using  rules).
Apply □I to all nodes of P0 to which this rule is applicable.15
This gives the root node of a new branch16.
15
Except for the root – reason for this is in footnote 7.
Repeat the process till no new branches can be obtained.
As the whole IC-tree is finite, this procedure halts and returns a subtree.
The top node of a branch Pi of this subtree is a question of the form (αi ? Gi). Let Ai be the set
consisting of the elements of αi and Gi− (the negation of Gi). For each branch Pi, Ai is added to the set W. A0
corresponds to w.
Relation R: For every i, j, Ai is related to Aj; ⊩ is the standard valuation relation.17
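The construction just outlined can be pictured with a small sketch; the encoding and the names below are illustrative assumptions, not part of the implementation. Each N-branch contributes a world, every world accesses every world, and an atom is true at a world exactly when it occurs in it.

def neg(f):
    # negation of a formula in a simple tuple encoding
    return ("not", f)

def world_from_branch(premises, goal):
    # the world A_i determined by a branch whose top question is (premises ? goal):
    # the elements of the premise set together with the negation of the goal
    return frozenset(premises) | {neg(goal)}

def s5_countermodel(branches):
    # branches: a list of (premises, goal) pairs, one per N-branch of the IC-tree;
    # the first pair corresponds to the world w for the original question
    worlds = [world_from_branch(p, g) for p, g in branches]
    access = {(u, v) for u in range(len(worlds)) for v in range(len(worlds))}  # total relation
    valuation = {i: {f for f in w if isinstance(f, str)} for i, w in enumerate(worlds)}
    return worlds, access, valuation

# Toy use: a single branch refuting P ? box Q gives the world {P, not box Q}.
print(s5_countermodel([({"P"}, ("box", "Q"))]))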
(Detailed) Proof:
Define + and - as follows:
+ =  if  =  and + =  otherwise. - =  if  =  and - =  otherwise.
Enumerate F(α, G) by Hi, 0 ≤ i ≤ n. F(x) is the set of all unnegated proper subformulae of
formulae in x and the unnegated parts of all negations which are subformulae of formulae in x.
Assume the IC-tree Σ for (α ? G) evaluates to N.
Construction of W:
Construction of the subtree:
Construction of P0: The sequence of nodes P0*(0),… is defined as follows:
Let 0 = , 0 = 0, G0 = G, H0 = G.
m+1 is defined according to the following cases.
Case1: (j)[ (m  j  n)
& (Hj is not of the form □)
& ((Hj  m) & (Hj  m))] .
16
For S5, we only consider the last branch obtained by □I – this is because the accessible world has formulae that are
negated modal as well.
17
To ensure that every branch gives a world that respects the definition of □, we choose P0 as follows: There are
several formula pairs on which can be applied; we choose the branch that considers the boxed formulae last
(thus all relevant formula are already incorporated into the branch’s premises)
Then m+1 is the least such j.
Case2: The previous case does not apply,
(j)[ (m  j  n)
& (Hj is of the form □)
& ((Hj  m) & (Hj  m))
& (m ? Hj evaluates to N)].
Then m+1 is the least such j.
Case3: The previous cases do not apply,
(j)[ (m  j  n)
& (Hj is of the form □)
& ((Hj  m) & (Hj  m))
& (m ? Hj evaluates to Y)].
Then m+1 is the least such j.
Case4: the previous cases do not apply.
Then let m+1 = 0.
Then, let Gm = Hm
Gm = Hm
if m ? Hm evaluates to N,
otherwise
m+1 = m, G-m
P0*(2m) = m ? Gm
P0*(2m+1) = i, Hm+1 if Gm is a negation
c, Hm+1 otherwise
Let  be the smallest m with m+1 = 0.
Define P0 to be P0* restricted to {m | m  2}.
Construction of remaining branches:
Now, consider the nodes of the form ’ ? □ in P0, (excluding the root). These nodes appear in
P0 only because of case 3, hence only after all the formulae of F in (, G-) not of the form □
have been used. To each of these nodes, the rule □I is applicable, leading to a node of the form
 ?  (which evaluates to N). Note that  contains all the formulae of the form □ψ in ,
which are obtained using case 3. So,  contains all formulae of the form □ψ obtained using case
2, and all the formulae of the form □ψ obtained using case 3.
Construct each of these new branches as P0 was constructed, choosing at each stage the next node according to
the cases above; then repeat the process for each of the branches so obtained. For each Pi,18 let α′i ? Gi
be the top node of Pi. Define Ai = {φ | φ ∈ α′i} ∪ {Gi−}. Let W be the set of all the Ai.
Lemma: For 0 ≤ i, j ≤ r, the following claims hold:
1. if φ ∈ Ai, then φ− ∉ Ai
2. if φ is a subformula of an element of Ai, then either φ+ ∈ Ai or φ− ∈ Ai
3. if ¬¬φ ∈ Ai, then φ+ ∈ Ai
4. if (φ1 & φ2) ∈ Ai, then φ1+ ∈ Ai and φ2+ ∈ Ai
5. if ¬(φ1 & φ2) ∈ Ai, then φ1− ∈ Ai or φ2− ∈ Ai
6. if (φ1 ∨ φ2) ∈ Ai, then φ1+ ∈ Ai or φ2+ ∈ Ai
7. if ¬(φ1 ∨ φ2) ∈ Ai, then φ1− ∈ Ai and φ2− ∈ Ai
8. if (φ1 → φ2) ∈ Ai, then φ1− ∈ Ai or φ2+ ∈ Ai
9. if ¬(φ1 → φ2) ∈ Ai, then φ1+ ∈ Ai and φ2− ∈ Ai
10. if □φ ∈ Ai, then for every j, Aj contains φ+
11. if ¬□φ ∈ Ai, then there is a j with j ≤ r such that Aj contains φ−
18
Since we want formulae of the form □ψ and ¬□ψ in the accessible world, we only consider the last □I applied in
a branch. This means all the S5 worlds will have the same modal formulae (and their negations), which is a property
of the system. In S4 and GL, every application of □I results in a new branch.
Proof: The proofs for (1) – (9) are the same as for classical logic.
(10): if □φ ∈ Ai, then □φ must appear on the left side of the question mark below any node that is put
in W. This is because of the ordering of the cases. Formulae of the form □φ are never taken
out of the left side of the question mark. On application of □I, all the formulae of the form □φ
and the corresponding φs are transferred into the new branch. Thus, all the branches thus obtained
will contain □φ and φ. Thus, all worlds Aj such that i < j contain φ+. Further, consider a world
Ak such that k < i. If Ak contains φ−, then Ak does not contain □φ (or, using □E, we could get a
contradiction). Hence, Ak contains φ+.
(11): if ¬□φ ∈ Ai, then □φ has been dealt with in case 3. Thus, a new node Σj = α′ ? φ has
been added to the subtree, and since the rule applied to Σj is either ¬I or ⊥c, φ− appears on the left side of
the question mark in Pj, so φ− ∈ Aj.
Hence, the ⊩ relation holds as required and this provides a counterexample.
Hence, either the S5 IC-tree for α ? G contains an S5 derivation for α ? G or it allows the
definition of a counterexample to the inference from α to G.
2.5.2 Completeness of S4 rules
Completeness Theorem: Either the S4 IC-tree for α ? G contains an S4 derivation for α ? G or it
allows the definition of a counterexample to the inference from α to G.
This is proved directly from the proof extraction theorem and counterexample extraction theorem
proved below.
Proof Extraction Theorem: For any α and G, if the S4 IC-tree for α ? G evaluates to Y, then a
normal nd-proof of G from the assumptions in α can be found.
Proof: The proof that a normal nd-proof can be extracted from any IC-tree that evaluates to Y
proceeds exactly as the proof for S5. nd-proofs extracted from IC-derivations are
normal, because the elimination rules are applied from above whereas the introduction rules are
applied only from below.
Counterexample Extraction Theorem: For any α and G, if the S4 IC-tree for α ? G evaluates to
N, then using this tree, it is possible to construct an S4 model in which all the elements of α are
true but G is false.
Proof: Similar to the proof above, with the following modifications:
Structure of the set W: Since there are only finitely many formulae involved, and inaccessible
worlds do not influence each other, a counterexample can be found in a finite model.
Accessibility relation R: The accessibility relation for S4 is transitive and reflexive. So, any
model of S4 can be folded out as a tree (this may involve two worlds being identical in the tree).
The worlds are constructed out of the branches of the proof tree. The accessibility relation R: if
there is a (possibly empty) path from branch x to branch y, then the world corresponding to y is
accessible from the world corresponding to x. This is implemented as explained below:
Branch Pi,j: jth branch constructed (running count for all the branches). j is the branch number and
i is the list of the branch numbers of the branch’s ancestors (branches containing its root node.)
Note that i includes j for S4.
The proof proceeds very similarly to that of S5, except for the following differences:
(a) As mentioned, new branches are considered at every application of □I, and a single branch has
two indices. Thus, the construction of the main branch and of the branches constructed from it
differ from that of the S5 branches – though mainly in their names.
(b) Relation R: Ai,j is related to Ak,l iff j ∈ k (where Ai,j is obtained from Pi,j in the same manner
that Ai was obtained from Pi in the proof for S5).
In the proof of the lemma, the sub-proof of (10) becomes:
(10) if □φ ∈ Ai,j, then each Ak,l such that j ∈ k contains φ+.
Proof: if □φ ∈ Ai,j, then □φ must appear on the left side of the question mark below any node that is
put in W. This is because of the ordering of the cases. Formulae of the form □φ are never
taken out of the left side of the question mark. On application of □I, all the formulae of the
form □φ and the corresponding φs are transferred into the new branch. Thus, all the branches thus
obtained will contain □φ and φ.
Note that the modification of the accessibility relation allows a counterexample to be
constructed for questions like ◊P ? □◊P. The counterexample constructed in this case will
contain a world where P and ◊P are true and an accessible world where P is not true (and from
which no world with P is accessible, so that ◊P fails there). Such a model is not possible if the
accessibility relation is an equivalence relation.
2.5.3 Completeness of GL rules
Completeness Theorem: Either the GL IC-tree for (α ? G) contains a normal GL derivation for α
? G or it allows the definition of a counterexample to the inference from α to G.
This is proved directly from the proof extraction theorem and counterexample extraction theorem
proved below.
Proof Extraction Theorem: For any α and G, if the GL IC-tree for α ? G evaluates to Y, then a
normal nd-proof of G from the assumptions in α can be found.
Proof: The extraction of a normal nd-proof from an IC-tree that evaluates to Y proceeds similarly to
the proof for S5, but with some differences. An IC-derivation can be extracted from an IC-tree
that evaluates to Y using the procedure given above. To prove that it is possible to extract a
normal nd-proof from this IC-derivation D: The proof proceeds by induction on height(D). The
base case is where height(D) = 1. In this case, D consists of just one question α; β ? G where G
is in α∪β. The nd-proof for this is just the node G. If height(D) = h (> 1), the proof proceeds in
cases depending on the last rule application used, with the induction hypothesis stating that
from any IC-derivation T with height(T) < h, a normal nd-proof can be extracted.
The propositional rules are dealt with as in [10]. For the modal rules,
□I: If D is an IC-derivation for α; β ? G such that its last rule application is □I, then the
immediate subderivation of D, say D′, is an IC-derivation for ε(α∪β) ? H, where G = □H.
ε(α∪β) contains the formulae of the form □ψ and ψ, where □ψ is in α∪β, together with G itself. Let E
denote the set of formulae ψ as above. By the induction hypothesis, we can extract a normal nd-proof P′ from D′.
Construct P as follows: the root of P is G, the immediate subproof of P is P′, and the last
inference rule used is □I. Further, if P′ has occurrences of formulae ψ in E that are open
assumptions, replace them with the inference rule □E, with premise □ψ and conclusion ψ. P is a
normal nd-proof that is associated with D. Since D′ used ε(α∪β) as its premises, the restriction
for □I in nd-proofs is satisfied (the formulae in ε(α∪β) are essentially modal formulae lying on the
paths from H to the assumptions it uses from α∪β).
nd-proofs extracted from IC-derivations are normal, because the elimination rules are applied
from above whereas the introduction rules are applied only from below.
Counterexample Extraction Theorem: For any α and G, if the GL IC-tree for α ? G evaluates to
N, then using this tree, it is possible to construct a GL model in which all the elements of α are
true but G is false.
Proof: The proof is similar to that for S4.
Structure of the set W: Since there are only finitely many formulae involved, and inaccessible
worlds do not influence each other, a counterexample can be found in a finite model.
Accessibility relation R: The accessibility relation for GL is transitive and converse well-founded.
Any finite model with an acyclic transitive accessibility relation is a model of GL. In particular,
any such model can be folded out as a tree (this may involve two worlds being identical in the tree),
with the proviso that no node can access itself; since we are dealing with a finite tree, the relation
is converse well-founded. This is implemented as follows: if there is a (nonempty) path
from branch x to branch y, then the world corresponding to y is accessible from the world
corresponding to x.
Branch Pi,j: jth branch constructed (running count for all the branches). j is the branch number
and i is the list of the branch numbers of the branch’s ancestors. Note that i does not include j.
The proof proceeds very similarly to that of S4, except for the following: in S4, the branches were
numbered such that each branch included itself in listing its ancestors, while in GL, a branch is
not considered its own ancestor. This is because the accessibility relation of S4 is reflexive, while
that of GL is not.
This makes it possible to get a counterexample for questions like □A ? A. A counterexample for
this has a world where A is not true. Since the accessibility relation is converse well-founded, there
can be no closed path of accessible worlds, and in the presence of transitivity this means that no
world can access itself. If the world could access itself, □A could not hold there while A is false.
Since the world is inaccessible to itself (and, in the constructed model, accesses no other world),
□A is true in it vacuously while A is false, and the boxed formulae that are true in it do not
influence any other formulae in it.
Another instance of a counterexample is for the question ? □(□A → A). A counterexample for this has
a world w with an accessible world u such that A is false in u. Since u does not have any
accessible worlds (in particular, it does not access itself), □A is true in u, but A is not, and so
□A→ A is false in u and so □(□A→ A) is false in w. Note that the sentence □(□A→ A) is true in
S4 and S5.
Further, it is possible in S4 to have a counterexample for the question □(□A → A) ? □A.
This will have a world w where A is false. This means that □A is false, and so (□A → A) is
true. As w is its only accessible world, □(□A → A) is also true, but, as shown, □A is false. This
is not possible in the case of GL, since no world can access itself.
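These counterexamples can be checked mechanically. The following is a small sketch of a Kripke-model evaluator (the tuple encoding and the names are assumptions made only for this illustration, not part of AProS), applied to the two models just described.

def holds(model, w, f):
    # evaluate formula f at world w; a model is (worlds, accessibility pairs, valuation)
    worlds, R, val = model
    if isinstance(f, str):                        # atomic formula
        return f in val[w]
    if f[0] == "not":
        return not holds(model, w, f[1])
    if f[0] == "->":
        return (not holds(model, w, f[1])) or holds(model, w, f[2])
    if f[0] == "box":
        return all(holds(model, v, f[1]) for v in worlds if (w, v) in R)
    raise ValueError(f)

# S4 countermodel to Lob's formula: a single reflexive world in which A is false.
s4_model = ([0], {(0, 0)}, {0: set()})
lob = ("->", ("box", ("->", ("box", "A"), "A")), ("box", "A"))
print(holds(s4_model, 0, lob))                                    # False

# GL-style countermodel to box(box A -> A): world 0 accesses only world 1, A false in 1.
gl_model = ([0, 1], {(0, 1)}, {0: set(), 1: set()})
print(holds(gl_model, 0, ("box", ("->", ("box", "A"), "A"))))     # False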
CHAPTER 3
Implementation
Proof search procedures for S4, S5 and GL were implemented in the automated theorem
prover AProS [1]. This chapter discusses the details of this implementation.
The first section describes AProS - in particular the existing proof search procedure for
classical sentential logic, the second section describes the additional functionalities that need to
be added to this proof search procedure in order to do proof search in modal logic, and the third
section explains how these functionalities are implemented and incorporated into the existing
system.
3.1 AProS:
AProS (Automated Proof Search) is a theorem prover that uses the intercalation method to
search for normal natural deduction proofs in sentential and predicate logic, in their classical,
intuitionistic and minimal versions. Since S4, S5, and GL are based on classical sentential logic, a
discussion of the sentential component of AProS is sufficient to explain the implementation of
proof search for these systems.
AProS takes as input an assertion (a set of premises and a conclusion) and tries to find a proof
for this assertion. Since AProS uses the intercalation calculus, it finds only normal proofs, and its
proofs have the subformula property. This helps in constraining the space for proof search. AProS
has an internal representation of the IC tree as a tree of occurrences, where each occurrence is of
the form 〈F, S〉 where F is a formula, and S is its scope (that is, the premises and assumptions
accessible to this occurrence). A natural deduction proof can be directly derived from this tree
accessible to this occurrence). A natural deduction proof can be directly derived from this tree
through a process of enumeration in which the proof is constructed by traversing the tree depth-first,
starting with the goal. The space of proofs (trees) is searched (in a depth-first manner), guided
by strategic steps as explained below. A single tree is maintained and modified as the search
proceeds.
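As a concrete picture of this representation, an occurrence can be thought of as a small record pairing a formula with its scope; the following is a minimal sketch whose names and types are illustrative assumptions, not AProS internals.

from dataclasses import dataclass, field

@dataclass
class Occurrence:
    # a node of the partial proof tree: a formula F together with its scope S
    formula: object                               # the formula F of this occurrence
    scope: frozenset                              # S: premises and assumptions available to F
    rule: str = None                              # the rule eventually justifying F, if any
    children: list = field(default_factory=list)  # occurrences for the premises of that rule

def initialize(premises, goal):
    # step (1) below: a partial proof tree consisting of just the goal occurrence
    return Occurrence(goal, frozenset(premises))

root = initialize({("box", "A"), ("->", "A", "B")}, ("box", "B"))
print(root.formula, sorted(map(str, root.scope)))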
The most distinctive feature of the proof search procedure implemented in AProS is that the
proof construction can be separated strategically into three distinct modules: extraction, or goal-directed forward use of elimination rules; inversion, or backward use of introduction rules; and
finally the use of indirect argumentation. AProS (roughly) does the following:
1) Given an assertion, AProS initializes a partial-proof-tree with it. A partial-proof-tree is
a tree whose nodes are occurrences as described above (i.e., containing a formula and its scope).
Throughout the proof search, AProS maintains and modifies a single tree. A question node of the
form 〈α; β? G〉 in an IC tree is represented in the partial proof tree as a node (an occurrence) of
the form 〈F, S〉 where F is the formula of the occurrence (i.e., F is G) and S is the scope of this
formula, i.e., the set of assumptions available to it (i.e., S is α); the set of formulae that are
extracted from α (i.e., β) is not stored explicitly anywhere, but can be calculated (as explained in
(2)).
2) AProS checks if the current goal can be obtained via a sequence of elimination rules
starting with the positive occurrences of the premises or assumptions. (Such a sequence is called
an extraction sequence.) If so, AProS adds the last rule in the extraction sequence to the tree. The
major premise of the rule will be the next rule in the extraction sequence or an assumption, but
the rule may have minor premises which need to be proved, and such a premise is added to a list
called the list of open-questions. This process is repeated for each rule in the extraction
sequence. Then, AProS marks an open question as the current goal and searches for a proof using
steps (2), (3) or (4). This is repeated till no open questions are left. If this succeeds, the goal is
said to be extractable from the premises. If no proof is found for an open question, AProS
backtracks, (as in step 5) and searches for other extraction sequences that end with the current
goal, and repeats this procedure using them19.
3) If the current goal is not extractable, AProS checks if the current goal can be obtained
as a conclusion of an introduction rule. If so, the tree is updated by adding the introduction rule
application above the current goal. This adds the premises of the rule to the tree. If a premise is
not in the set of premises and assumptions available to it is added to the list of open questions,
and this list is dealt with as above.
4)
If steps (2) and (3) fail, (i.e., if the current goal G is not extractable, and is not a
conclusion of the introduction rule, or if the open questions generated above the current goal in
steps (2) and (3) cannot be proved, and AProS backtracks to G) the current goal is negated and
added to the available assumptions20. Then the algorithm tries to find a contradiction to use in an
indirect argument. This is done as follows: AProS generates a list of negated formulae that are
positively contained in the premises or the assumptions on which G depends, then generates a list
of contradictory pairs by using these formulae and their immediate subformula. AProS then adds
to the partial-proof-tree, a negation rule (introduction or elimination depending on the current
goal) and a falsum rule above it. Each pair from the list of contradictory pairs is added to as
premises to the falsum rule one by one till either the list is exhausted or a pair thus added is
proved successfully (i.e., both the formulae are added to the list of open questions, and both the
open questions are eventually added to the tree as goals, and proved).
5)
If the current goal is such that no rules can be applied, or steps (2), (3) and (4) have
been applied to this goal, but have failed, or is a repeated question, (i.e., during the course of the
search, AProS has tried to prove this goal using a superset of the assumptions currently available
to it, and failed) then AProS retracts its goals in a depth first order, marking that the current goal
19
AProS treats as a special case the extraction sequences of length 1, i.e., when the goal is in the scope. Here, it
applies the “premise rule”, which simply closes the current branch with a premise.
20
If G is of the form ¬F, its negated version is simply F.
cannot be proved from the assumptions available to it. As mentioned in the steps, if it is possible
to proceed in more than one way in any step, then AProS does so in a depth-first manner.
If every step has succeeded (there are no open questions left to be proved), then the tree is a fully
justified tree, from which a proof can be extracted. If, however, the root of the tree fails (as in
step 5), then the proof search fails.
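The interplay of these steps can be illustrated with a deliberately simplified sketch, restricted to the implication fragment (premise rule, →E extraction, →I inversion); indirect argumentation, the modal rules and AProS's actual bookkeeping are omitted, and all names and encodings are assumptions made only for this illustration.

def is_imp(f):
    return isinstance(f, tuple) and f[0] == "->"

def prove(goal, scope, depth=0):
    # return a nested proof term for `goal` from `scope`, or None if this question fails
    if depth > 20:                        # crude guard standing in for the repeated-question
        return None                       # check of step (5)
    if goal in scope:                     # premise rule: the branch closes immediately
        return ("premise", goal)
    # extraction (step 2): use an implication in the scope whose consequent is the goal,
    # proving its antecedent as a minor premise
    for f in scope:
        if is_imp(f) and f[2] == goal:
            minor = prove(f[1], scope, depth + 1)
            if minor is not None:
                return ("->E", f, minor)
    # inversion (step 3): for an implication goal, assume the antecedent, prove the consequent
    if is_imp(goal):
        sub = prove(goal[2], scope | {goal[1]}, depth + 1)
        if sub is not None:
            return ("->I", goal, sub)
    return None                           # backtrack (step 5): this question fails

# From {A, A->B, B->C} the sketch finds C by two extraction steps.
print(prove("C", {"A", ("->", "A", "B"), ("->", "B", "C")}))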
Note that a question node differs from an occurrence in that the set of formulae extracted
from α (i.e., β) is not stored explicitly anywhere but is calculated in a lazy fashion using
extraction; the semantics of this match the eager calculation of β, since extraction is done
before any other strategy is used, and at each step AProS proceeds in a depth-first manner, so that
after each rule is added (by any step), extraction is tried first.
3.2 Proof search in modal logic
This section describes the additional functionalities that need to be added to AProS in order to
implement the proof search for modal logic. The implementational details are given in the next
section. The rules are restated here.
IC rules for S4/S5
□E : α ; β ? φ, □δ ∈ α∪β ↦ α ; β, δ ? φ
□I : α ; β ? □φ ↦ ε(α∪β) ? φ
where ε(α∪β) is in S4 the set of formulae □γ such that □γ ∈ α∪β, and in S5 the set of
formulae γ of the form □ψ or ¬□ψ such that γ ∈ α∪β
IC rules for GL
□E/I : α ; β ? □φ ↦ ε(α∪β) ? φ
where ε(α∪β) is the set of formulae γ of the form
vi. □ψ such that □ψ ∈ α∪β,
vii. ψ such that □ψ ∈ α∪β,
viii. □φ
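The restriction ε just stated can be pictured with the following small sketch; the encoding (tuples for □ and ¬, strings for atoms) and the function names are assumptions made only for this illustration.

def box(a):
    return ("box", a)

def neg(a):
    return ("not", a)

def is_box(f):
    return isinstance(f, tuple) and f[0] == "box"

def is_neg_box(f):
    return isinstance(f, tuple) and f[0] == "not" and is_box(f[1])

def epsilon(available, conclusion, logic):
    # formulae allowed above a box-introduction whose conclusion is `conclusion` (= box phi)
    if logic == "S4":                          # only boxed formulae survive
        return {f for f in available if is_box(f)}
    if logic == "S5":                          # boxed and negated boxed formulae survive
        return {f for f in available if is_box(f) or is_neg_box(f)}
    if logic == "GL":                          # boxed formulae, their unboxed parts, and box phi
        kept = {f for f in available if is_box(f)}
        kept |= {f[1] for f in available if is_box(f)}
        kept.add(conclusion)
        return kept
    raise ValueError(logic)

# available formulae {box A, not box B, C}, goal box D:
avail = {box("A"), neg(box("B")), "C"}
print(epsilon(avail, box("D"), "S4"))   # {('box', 'A')}
print(epsilon(avail, box("D"), "S5"))   # adds ('not', ('box', 'B'))
print(epsilon(avail, box("D"), "GL"))   # adds 'A' and ('box', 'D')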
The most direct translation into AProS that will add the required functionalities is given here.
As before, AProS initializes a partial proof tree of occurrences with the goal. As explained in the
previous chapters, modal logic introduces a new unary connective, □ (◊ is defined in terms of
□). So, the main changes include adding inference rules for □, and modifying the proof
search to incorporate this.
Additions to the available rules:
1.
In S4/S5, □E is added as an elimination rule – it is used as any other elimination rule.
2.
GL has no □E rule.
3.
□I is added as an introduction rule, but to satisfy its restrictions the following changes are
needed: when the □I rule is applied to an occurrence 〈□F, S〉, the premise of that rule that is added as
an open question to the partial proof tree is the occurrence 〈F, S'〉, where S' corresponds to
ε(α∪β), i.e., for S4, S' is all boxed formulae that are in S or extractable from S (for S5, also the
negated boxed formulae); for GL, it is all the boxed formulae that are in S or extractable from S,
their immediate subformulae, and the conclusion of the □I.
Modifications to the proof search:
1.
Construction of S’ would involve listing all the boxed formulae that are extractable, i.e.,
an eager evaluation of the extractable formulae. AProS does not keep an explicit list of all the
formulae that are extracted, or that are extractable, but computes this in a lazy manner. So, the
proof search has to be modified to include eager evaluation. This would involve either (a) using
multiple partial-proof-trees or (b) modifying the search algorithm to deviate from depth first
search (if an open question fails, AProS backtracks and fails the rule that generated it, but if
AProS checks a formula and finds it unextractable, instead of backtracking it should move on to
the next potential formula in order to list all the extractable formulae.)
2.
Although conceptually simple, changing the scope of an occurrence so that it does not
contain the scope of the occurrence immediately below it would cause some implementational
changes – especially in printing the proofs.
3.3 Implementational details
The modifications to the proof search procedure listed above cannot be incorporated
easily into the existing implementation. So, we modify the algorithm presented above to use lazy
search so that while it preserves the semantics of the functionalities needed, it needs fewer
modifications to the existing proof search procedure. We compute the extractable boxed
formulae in a lazy manner that can be incorporated seamlessly into the lazy computation of
extractable formulae by the existing proof search. In this approach, instead of listing 〈ε(α∪β) ? P〉
and trying to prove P from it, we allow the existing proof search algorithm to try to prove P from
α. But, when the proof search algorithm tries to use an occurrence of a formula from α, we check
if it satisfies the restrictions, i.e., that the occurrence is indeed used to extract a formula that will
be in ε(α∪β), called an allowed-modal formula (boxed formulae in the case of S4, boxed and
negated boxed formulae in the case of S5, and boxed formulae, their unboxed versions and the
conclusion of the □I rule in the case of GL). In particular, all the extraction sequences have to
satisfy this restriction, i.e., contain a boxed formula. If not, we disallow the use of that
occurrence, i.e., if AProS attempts to extend the
subtree by adding a node that would result in violating the restrictions, this attempt fails, and the
algorithm backtracks in the usual depth first fashion. So, the only addition we have to the
existing algorithm is to reject a proof of P that does not meet the restrictions. (For GL, the
allowed-formulae contain □P, but it may not be extractable from α, so we explicitly add □P to
the scope of P before proceeding to attempt to prove P.)
The following are the functions that were implemented to carry out the proof search. The list
below gives the functionality of each function instead of the algorithm.
Allowed-modal formula:
This function checks whether a given formula is an allowed-modal formula in the modal logic
that is currently being used. This is used to check if an occurrence of the formula present in the
available assumptions can be used to extend the tree. For GL, an additional function checks if the
given formula is the conclusion of a □I rule (if so, the formula can be justified by Löb’s rule).
Depends upon:
This function returns the set of occurrences that a given occurrence depends upon. This set is
different from all the assumptions available to the given occurrence, as some assumptions may
not be used in the tree for proving the occurrence. This is also different from the set of top nodes
of the subtree defined by the occurrence, as some of the top nodes may be assumptions that are
introduced and discharged above the given occurrence.
Can use assumption:
This function checks whether using an occurrence of an available assumption to extend
the subtree violates the restrictions for the □-rules. This is done as follows:
For every application of □I (below the goal), the following are checked:
1. If the assumption is available to the conclusion of a □I-rule, it should be used to obtain an
allowed-modal formula through extraction.
2. If the assumption is not available to a □-rule21, then no allowed-modal formula that was
needed by another assumption should depend on this.
GL □E:
In GL, □E is allowed only on allowed-modal formulae that are extractable from the available
assumptions for some application of □I, such that it depends only on these assumptions. Note: the
rules allowed in extracting do not include □E. This function checks if the application of a
□E satisfies these restrictions.
21
This is possible since the assumption may be introduced after the □I rule was applied, and its addition may have
nothing to do with the □I.
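The first and third checks can be sketched as follows, under the same illustrative tuple encoding used above; the names are assumptions, and the GL case is simplified (the unboxed parts of boxed assumptions, which the full rule also allows, are omitted).

def is_box(f):
    return isinstance(f, tuple) and f[0] == "box"

def is_neg_box(f):
    return isinstance(f, tuple) and f[0] == "not" and is_box(f[1])

def allowed_modal(f, logic, box_conclusion=None):
    # is f an allowed-modal formula above a box-introduction with conclusion box_conclusion?
    if logic == "S4":
        return is_box(f)
    if logic == "S5":
        return is_box(f) or is_neg_box(f)
    if logic == "GL":                      # simplified: boxed formulae or the conclusion itself
        return is_box(f) or f == box_conclusion
    raise ValueError(logic)

def can_use_assumption(extraction_sequence, logic, box_conclusion=None):
    # an extraction sequence used above a box-introduction must pass through some
    # allowed-modal formula; otherwise the assumption at its start may not be used
    return any(allowed_modal(f, logic, box_conclusion) for f in extraction_sequence)

# Extracting B from box(A -> B) via A -> B passes the S4 check;
# extracting B from a bare A -> B does not.
seq_ok = [("box", ("->", "A", "B")), ("->", "A", "B"), "B"]
seq_bad = [("->", "A", "B"), "B"]
print(can_use_assumption(seq_ok, "S4"))    # True
print(can_use_assumption(seq_bad, "S4"))   # False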
3.4 Example:
Assertion in S4
Set of premises: {□(A→B), A, □A}
Goal: □B.
The following shows a proof in IC:
{□(A→B), □A} ; {(A→B), A} ? A (□E)
{□(A→B), □A} ; {(A→B), B} ? B
{□(A→B), □A} ; {(A→B)} ? A (→E)
{□(A→B), □A} ; {(A→B)} ? B (□E)
{□(A→B), □A} ; {} ? B (□I) (A is not boxed and so excluded)
{□(A→B), A, □A} ; {} ? □B
The following shows the tree as it evolves in AProS:
Tree 1:
In this step, a tree with just the root node is created and initialized with the given question as
occurrence (〈□F, S〉 where F is a formula and S, a set of formulae, is its scope). The current
goal is marked with an “*”.
〈□B, {□(A→B), A, □A}〉*
Tree 2:
Since the goal is not extractable, but can be obtained as a conclusion of an introduction rule, the
proof search attempts this. Since the proof search uses lazy evaluation of the extractable
formulae, all the premises are carried over.
〈B, {□(A→B), A, □A}〉*
□I
〈□B, {□(A→B), A, □A}〉
Tree 3:
The goal is extractable with the extraction sequence:
□(A→B), A→B, B
In the path from the assumption used to the premise of □I (B), there is a boxed formula, □(A→B). Hence the restriction is satisfied and the extraction sequence is allowed.
The proof search procedure then proceeds to add the last element of the sequence to the tree.
〈A, {□(A→B), A, □A}〉*
〈(A→B), {□(A→B), A, □A}〉
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
Tree 4:
The current goal is in its scope. However, this premise does not satisfy the restrictions of the □I rule.
Hence, this fails.
PREMISE - FAIL
〈A, {□(A→B), A, □A}〉
〈(A→B), {□(A→B), A, □A}〉
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
Tree 5:
Since the proof search procedure backtracks in a depth-first manner, it tries to prove the same
goal using other means. The goal is extractable by the extraction sequence □A, A.
This extraction sequence satisfies the restrictions, and so is tried.
〈□A, {□(A→B), A, □A}〉*
□E
〈A, {□(A→B), A, □A}〉
〈(A→B), {□(A→B), A, □A}〉
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
Tree 6:
The current goal is in the scope. Further, adding the premise does not violate the restrictions
of □I. So, this branch succeeds.
PREMISE - SUCCEED
〈□A, {□(A→B), A, □A}〉
□E
〈A, {□(A→B), A, □A}〉
〈(A→B), {□(A→B), A, □A}〉*
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
Tree 7:
This question was part of the original extraction sequence □(A→B), A→B, B.
So, the proof search procedure simply continues by adding the next element.
PREMISE - SUCCEED
〈□A, {□(A→B), A, □A}〉
〈□(A→B), {□(A→B), A, □A}〉*
□E
□E
〈A, {□(A→B), A, □A}〉
〈(A→B), {□(A→B), A, □A}〉
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
Tree 8:
The current goal is in the scope, and can be added without violating any restrictions.
PREMISE - SUCCEED
PREMISE - SUCCEED
〈□(A→B), {□(A→B), A, □A}〉
〈□A, {□(A→B), A, □A}〉
□E
□E
〈(A→B), {□(A→B), A, □A}〉
〈A, {□(A→B), A, □A}〉
→E
〈B, {□(A→B), A, □A}〉
□I
〈□B, {□(A→B), A, □A}〉
This completes the proof search which was successful.
This example illustrates how the lazy evaluation of the extractable boxed formulae works. Above
a □I rule, an extraction sequence is allowed only if it contains a boxed formula, and a premise is
allowed to be added as an initial rule only if a boxed formula is extracted from it (above the □I
rule). The example also shows how the proof search proceeds in a depth-first manner – i.e., when
one of the “proofs” of a question violates the restriction, the proof search backtracks and tries to
prove it again. This can be incorporated easily into the existing proof search since backtracking
is done as a matter of course (for instance, when one of the open questions corresponding to a
minor premise of an elimination rule cannot be proved).
Appendix
This section presents a list of theorems that AProS proves (or is unable to prove) in the modal
logics S4, S5 and GL. We include the proofs (IC-derivations) of a selected few22. The list of
modal logical statements was generated as follows:
1. Axioms of a logic are provable in it. Axioms of one logical system L1 that are not provable
in another logic L2 are listed under unprovable statements of L2.
2. Properties that the models of a logic should satisfy can be proved in it. Properties that the
models need not satisfy are unprovable in it.
3. Distinct modalities: A sequence of boxes and negations is called a modality. For a given
logic, there is a set DS of modalities such that every other modality is equivalent to some
modality in DS, and no two modalities in DS are equivalent to each other. DS is called the set
of distinct modalities. A logic cannot prove that any two elements of the set are equivalent.
When DS is finite, all such statements are shown to be unprovable. When DS is infinite, a few
of these are shown. Further, the other modalities, which are equivalent to ones in DS, reduce to
them by rules that follow a pattern, called the reduction rules. We prove the reduction rules and
give examples of applications of the rules (a small sketch of how the reduction rules act is given
after this list).
4. Finally, we list a set of miscellaneous examples obtained by modifying one of the
examples listed above to include various connectives.
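The following sketch, referred to in item 3 above, applies the S4 reduction rules proved later in this appendix (rules 10-13) to modality strings; 'b' stands for □ and 'd' for ◊, negations are left out since rules 14-23 reduce them to this case, and the encoding is an assumption made only for this illustration.

S4_REDUCTIONS = [
    ("bb", "b"),     # rule 10: box box P            <-> box P
    ("dd", "d"),     # rule 11: dia dia P            <-> dia P
    ("bdbd", "bd"),  # rule 12: box dia box dia P    <-> box dia P
    ("dbdb", "db"),  # rule 13: dia box dia box P    <-> dia box P
]

def reduce_modality(m):
    # rewrite with the reduction rules until none applies; the result is a distinct modality
    changed = True
    while changed:
        changed = False
        for lhs, rhs in S4_REDUCTIONS:
            if lhs in m:
                m = m.replace(lhs, rhs, 1)
                changed = True
    return m

print(reduce_modality("bbb"))      # 'b'  : box box box P reduces to box P
print(reduce_modality("bdbdbd"))   # 'bd' : box dia box dia box dia P reduces to box dia P
print(reduce_modality("dbdbdb"))   # 'db'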
1. Examples in S4:
Theorems:
The examples are organized as follows:
I. Axioms of S4
II. Properties of S4 models
III. Reduction rules
IV. Miscellaneous examples with other connectives
I. Axioms of S4.
Necessitation: For any tautology A, S4 ⊦ □A
1. S4 ⊦ {} ? □(A→A)
Y
{A} ? A
→I
{} ? (A→A)
□I
{} ? □(A→A)
22
Some branches in some derivations leave out unwanted or repeated assumptions for the sake of space.
2. S4 ⊦ {} ? □(A∨ ¬A)
3. S4 ⊦ {} ? □(P→(Q→P))
Distribution
4. S4 ⊦{□(R→S), □R} ? □S
Y
Y
{; RS,R,S} ? S
{; RS,R} ? R
EE
{; RS,R} ? S
□E
{□R; RS } ? S
□E
{□(RS),□R} ? S
□I
□(AA)
{□(RS),□R} ?
□S
□(AA)
5. S4 ⊦ {□A} ? A
Y
{;A} ? (A)
□E
{□A} ? A
6. S4 ⊦ {□P} ? □□P
Y
{□A}?□A
□I
{□A}?□□A
The axiom of S5 corresponding to symmetry and the Löb axiom are unprovable, and AProS shows this.
II. Properties of S4 models.
From definition of □
7. S4 ⊦{□(P & ¬P)} ? □Q
8. S4 ⊦{□P, ¬□¬¬P} ? Q
9. S4 ⊦{□¬P, ¬□¬¬¬P} ? Q
The proofs are direct and not shown here.
Reflexivity, transitivity are directly axioms.
Symmetry and well-foundedness are not required properties of the accessibility relation of
S4 models, and so are unprovable. AProS shows that they are unprovable.
III. Distinct Modalities
The set of distinct modalities for S4 are:
DS = {*P, P | * is one of □, ◊, ◊□, □◊, ◊□◊, □◊□, ¬□, ¬◊, ¬◊□, ¬□◊, ¬◊□◊, ¬□◊□, ¬}
All the other modalities reduce to one of these using the following reduction rules.
Reduction rules (proved in the next pages)
10. S4 ⊦{} ? □P ↔ □□P, (shown above)
11. S4 ⊦{} ? ¬□¬P ↔ ¬□¬¬□¬P i.e. {} ? ◊P ↔ ◊◊P
12. S4 ⊦{} ? □¬□¬□¬□¬P ↔ □¬□¬P i.e.{} ? □◊□◊P↔ □◊P
13. S4 ⊦{} ? ¬□¬□¬□¬□P ↔ ¬□¬□P i.e.{} ? ◊□◊□P ↔ ◊□P
Since the rules listed here are bi-implications, the negated versions of these rules are already
proved. However, AProS was made to prove them, since the logics differ in the way their box-rules handle negations, and these proofs verified this interaction. The proofs of these are not listed.
14. S4 ⊦{} ? ¬□P ↔ ¬□□P,
15. S4 ⊦{} ? ¬¬□¬P ↔ ¬¬□¬¬□¬P, i.e.{} ? ¬◊P ↔ ¬◊◊P
16. S4 ⊦{} ? ¬□¬□¬□¬□¬P ↔ ¬□¬□¬P i.e.{} ? ¬□◊□◊P ↔ ¬□◊P
17. S4 ⊦{} ? ¬¬□¬□¬□¬□P ↔ ¬¬□¬□P i.e.{} ? ¬◊□◊□P ↔ ¬◊□P
Modal logics are usually presented using the connectives □ and ◊. Because of this, the formulae ¬□P
and ◊¬P are syntactically different, and the reduction rules list them. Since we only use □, ◊¬P
translates to ¬□¬¬P. Proving these two equivalent is now simply a propositional proof. However,
AProS was made to prove these, since the interaction between negation rules and box rules has to be
verified. We do not give the proofs of these here.
18. S4 ⊦{} ? (□¬P ↔ ¬¬□¬P),
i.e. {} ? (□¬P ↔ ¬◊P)
19. S4 ⊦{} ? (¬□¬¬P ↔ ¬□P),
i.e.{} ? (◊¬P ↔ ¬□P)
20. S4 ⊦{} ? (□¬□¬¬P↔¬¬□¬□P), i.e.{} ? (□◊¬P↔¬◊□P)
21. S4 ⊦{} ? (¬□¬□¬P↔¬□¬□¬P), i.e.{} ? (¬□◊P↔¬□◊P) or {} ? (¬□◊P↔◊□¬P)
or {} ? (◊□¬P↔¬□◊P) or {} ? (◊□¬P↔◊□¬P)
22. S4 ⊦{} ? (□¬□¬□¬P↔¬¬□¬□¬□¬P), i.e. {} ? (□◊□¬P↔¬◊□◊P)
or {} ? (□◊□¬P↔¬¬□◊□¬P) or {} ? (□¬□◊P↔¬◊□◊P) or {} ? (□¬□◊P↔¬¬□◊□¬P)
23. S4 ⊦{} ? (¬□¬□¬□¬¬P↔¬□¬□¬□P), i.e.{} ? (◊□◊¬P↔◊□¬□P)
or {} ? (◊□◊¬P↔¬□◊□P) or {} ? (¬□◊□¬¬P↔◊□¬□P) or {} ? (¬□◊□¬¬P↔¬□◊□P)
Proofs:
11.(a) S4 ⊦{¬□¬P } ? ¬□¬¬□¬P
Y
{ ¬□¬P ; ¬¬□¬P} ? ¬□¬P
Y
□E
{□¬¬□¬P, ¬□¬P } ? ¬¬□¬P
{□¬¬□¬P, ¬□¬P } ? ¬□¬P
⊥I
{□¬¬□¬P, ¬□¬P } ? ⊥
¬I
{¬□¬P} ? ¬□¬¬□¬P
11.(b) S4 ⊦{ ¬□¬¬□¬P } ?  ¬□¬P
Y
Y
{□¬P, ¬□¬P } ? □¬P
{□¬P, ¬□¬P } ? ¬□¬P
⊥I
{□¬P, ¬□¬P } ? ⊥
¬I
{ □¬P } ? ¬¬□¬P
Y
□I
{¬□¬¬□¬P, □¬P } ? □¬¬□¬P
{¬□¬¬□¬P, □¬P } ? ¬□¬¬□¬P
⊥I
{¬□¬¬□¬P, □¬P } ? ⊥
¬I
{¬□¬¬□¬P} ? ¬□¬P
12. (a) S4 ⊦ {□¬□¬□¬□¬P } ? □¬□¬P
Y
{□¬P; ¬□¬P } ? ¬ □¬P
Y
□E
{□¬P,□¬□¬P } ? □¬P
{□¬P; □¬□¬P } ? ¬ □¬P
⊥I
{□¬P, □¬□¬P } ? ⊥
Y
¬I
{;¬□¬□¬□¬P} ? ¬□¬□¬□¬P
{□¬□¬□¬□¬P ,□¬P} ? ¬□¬□¬P
□I
□E
{□¬□¬□¬□¬
P, □¬P } ? ¬□¬□¬□¬P
□I
{□¬□¬□¬□¬P,□¬P} ? □¬□¬□¬P
{ □¬P,
□¬□¬□¬□¬P } ? ⊥
¬I
{□¬□¬□¬□¬P} ? ¬□¬P
□I
{□¬□¬□¬□¬P} ? □¬□¬P
⊥I
12. (b) S4 ⊦ {□¬□¬P } ? □¬□¬□¬□¬P
Y
{□¬□¬P; ¬□¬□¬P } ? ¬□¬□¬P
Y
□E
{□¬□¬P, □¬□¬□¬P } ? ¬□¬□¬P
{□¬□¬P, □¬□¬□¬P } ? □¬□¬P
⊥I
{□¬□¬P, □¬□¬□¬P} ? ⊥
¬I
{□¬□¬P}? ¬□¬□¬□¬P
□I
{□¬□¬P}? □¬□¬□¬□¬P
13. (a) S4 ⊦ {¬□¬□¬□¬□P } ? ¬□¬□P
Y
{□¬□P; ¬□¬□P } ? ¬□¬□P
Y
□E
{□¬□P, □¬□¬□P } ? ¬□¬□P
{□¬□P; □¬□¬□P } ? □¬□P
⊥I
{□¬□P, □¬□¬□P } ? ⊥
¬I
{□¬□P} ? ¬□¬□¬□P
Y
□I
{¬□¬□¬□¬□P,
□¬□P
□¬□P} ? □¬□¬□¬□P
{¬□¬□¬□¬□P, □¬□P } ? ¬□¬□¬□¬□P
⊥I
{¬□¬□¬□¬□P, □¬□P} ? ⊥
¬I
{¬□¬□¬□¬□P} ? ¬□¬□P
13. (b) S4 ⊦ {¬□¬□P } ? ¬□¬□¬□¬□P
Y
{□P;
Y
¬□P } ? ¬ □P
□E
{□P, □¬□P } ? □P
{□P, □ ¬□P } ? ¬ □P
⊥I
{□P, □¬□P} ? ⊥
Y
¬I
{; ¬□¬□¬□P} ? ¬□¬□¬□P
{□P} ? ¬□¬□P
□I
{ □P} ? □¬□¬□P
□E
{□¬□¬□¬□P, □¬P} ? ¬□¬□¬□P
⊥I
{□¬□¬□¬□P, □P} ? ⊥
¬I
Y
{□¬□¬□¬□P} ? ¬□P
□I
{¬□¬□P, □¬□¬□¬□P} ? □¬□P
{¬□¬□P, □¬□¬□¬□P} ? ¬□¬□P
⊥I
{¬□¬□P, □¬□¬□¬□P} ? ⊥
{¬□¬□P} ? ¬□¬□¬□¬□P
¬I
Application of the reduction rules:
The following is a list of applications of the reduction rules. The list contains one statement for each
of the distinct modalities. The proofs are very long and are not listed here.
24. S4 ⊦ {} ? □P↔□□□P
25. S4 ⊦ {} ? □P↔□□□□P
26. S4 ⊦ {} ? ¬□¬P↔¬□¬¬□¬¬□¬P
27. S4 ⊦ {} ? ¬□¬P↔¬□□□¬P
28. S4 ⊦ {} ? □□¬□¬□¬□¬¬□¬□□P↔□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□P
29. S4 ⊦ {} ? □□¬□¬□¬□¬¬□¬□□¬□¬P↔□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬□¬P
30. S4 ⊦ {} ? ¬□□¬□¬□¬□¬¬□¬□¬¬□¬P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬P
31. S4 ⊦ {} ? ¬□□¬□¬□¬□¬¬□P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□P
32. S4 ⊦ {} ? ¬□P↔¬□□□P
33. S4 ⊦ {} ? ¬¬□¬P↔¬¬□¬¬□¬¬□¬P
34. S4 ⊦ {} ? ¬□□¬□¬□¬□¬¬□¬□□P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□P
35. S4 ⊦ {} ? ¬□□¬□¬□¬□¬¬□¬□□¬□¬P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬□¬P
36. S4 ⊦ {} ? ¬¬□□¬□¬□¬□¬¬□¬□¬¬□¬P↔¬¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬P
37. S4 ⊦ {} ? ¬¬□□¬□¬□¬□¬¬□P↔¬¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□P
Some distinct modalities are not equivalent, but one may imply the other. The proofs of these are
given in the next pages.
S4 ⊦ {} ? (□P → P) (shown above)
38. S4 ⊦ {} ? (P → ¬□¬P) (shown above)
39. S4 ⊦ {} ? (□P → ¬□¬P) i.e. {} ? (□P → ◊P)
40. S4 ⊦ {} ? (□P → ¬□¬□P) i.e. {} ? (□P → ◊□P)
41. S4 ⊦ {} ? (□P → □¬□¬P) i.e. {} ? (□P → □◊P)
42. S4 ⊦ {} ? (□P → ¬□¬□¬□¬P) i.e. {} ? (□P→ ◊□◊P) or {} ? (□P → ¬□◊□¬P)
43. S4 ⊦ {} ? (□P → □¬□¬□P) i.e. {} ? (□P → □◊□P)
44. S4 ⊦ {} ? (¬□¬□P → ¬□¬P) i.e. {} ? (◊□P → ◊P)
45. S4 ⊦ {} ? (□¬□¬P → ¬□¬P) i.e. {} ? (□◊P → ◊P)
46. S4 ⊦ {} ? (¬□¬□¬□¬P → ¬□¬P) i.e. {} ? (◊□◊P → ◊P) or {} ? (¬□◊□¬P → ◊P)
47. S4 ⊦ {} ? (□¬□¬□P → ¬□¬P) i.e. {} ? (□◊□P → ◊P)
48. S4 ⊦ {} ? (¬□¬□P → ¬□¬□¬□¬P) i.e.{}?(◊□P → ◊□◊P)
or {} ? (◊□P → ¬□◊□¬P)
49. S4 ⊦ {} ? (□¬□¬□P → ¬□¬□P) i.e. {} ? (□◊□P → ◊□P)
50. S4 ⊦ {} ? (□¬□¬P → ¬□¬□¬□¬P) i.e.{}?(□◊P → ◊□◊P)
or {} ? (□◊P → ¬□◊□¬P)
51. S4 ⊦ {} ? (□¬□¬□P → □¬□¬P) i.e. {} ? (□◊□P → □◊P)
52. S4 ⊦ {} ? (□¬□¬□P → ¬□¬□¬□¬P) i.e.{}?(□◊□P→◊□◊P)
or {}?(□◊□P→¬□◊□¬P)
AProS was made to prove the contrapositives of the above, to verify the interaction between
negation and box rules. We do not list the proofs here.
53. S4 ⊦ {} ? (¬P → ¬□P)
54. S4 ⊦ {} ? (¬¬□¬P → ¬P) i.e. {} ? (¬◊P → ¬P)
55. S4 ⊦ {} ? (¬¬□¬P → ¬□P) i.e. {} ? (¬◊P → ¬□P)
56. S4 ⊦ {} ? (¬¬□¬□P → ¬□P) i.e. {} ? (¬◊□P → ¬□P)
57. S4 ⊦ {} ? (¬□¬□¬P → ¬□P) i.e. {} ? (◊□¬P → ¬□P) or {} ? (¬□◊P → ¬□P)
58. S4 ⊦ {} ? (¬¬□¬□¬□¬P → ¬□P) i.e.{} ? (¬◊□¬□¬P → ¬□P)
or {} ? (¬¬□◊□¬P → ¬□P) or {} ? (¬¬□¬□◊P → ¬□P)or {} ? (¬◊□◊P → ¬□P)
59. S4 ⊦ {} ? (¬□¬□¬□P → ¬□P) i.e.{} ? (¬□◊□P → ¬□P) or {} ? (◊□¬□P → ¬□P)
60. S4 ⊦ {} ? (¬¬□¬P → ¬¬□¬□P) i.e. {} ? (¬◊P → ¬◊□P)
61. S4 ⊦ {} ? (¬¬□¬P → ¬□¬□¬P) i.e.{} ? (¬◊P → ◊□¬P) or {} ? (¬◊P → ¬□◊P)
62. S4 ⊦ {} ? (¬¬□¬P → ¬¬□¬□¬□¬P) i.e.{} ? (¬◊P → ¬◊□¬□¬P)
or {} ? (¬◊P → ¬◊□◊P) or {} ? (¬◊P → ¬¬□◊□¬P)or {} ? (¬◊P → ¬¬□¬□◊P)
63. S4 ⊦ {} ? (¬¬□¬P → ¬□¬□¬□P) or {}? (¬◊P→ ◊□¬□P) or {} ?(¬◊P → ¬□◊□P)
64. S4 ⊦ {} ? (¬¬□¬□¬□¬P → ¬¬□¬□P) i.e. {} ? (¬◊□¬□¬P → ¬◊□P)
or {} ? (¬¬□◊□¬P → ¬◊□P)or {} ? (¬¬□¬□◊P → ¬◊□P) or {} ? (¬◊□◊P → ¬◊□P)
65. S4 ⊦ {} ? (¬¬□¬□P → ¬□¬□¬□P) i.e. {} ? (¬◊□P → ◊□¬□P)
or {}?(¬◊□P→¬□◊□P)
66. S4 ⊦ {} ? (¬¬□¬□¬□¬P → ¬□¬□¬P) i.e. {} ? (¬◊□¬□¬P → ◊□¬P)
or {} ? (¬◊□¬□¬P → ¬□◊P) or {} ? (¬¬□◊□¬P → ◊□¬P)
or {} ? (¬¬□◊□¬P → ¬□◊P) or {} ? (¬¬□¬□◊P → ◊□¬P)
or {} ? (¬¬□¬□◊P → ¬□◊P) or {} ? (¬◊□◊P → ◊□¬P) or {} ? (¬◊□◊P → ¬□◊P)
67. S4 ⊦ {} ? (¬□¬□¬P → ¬□¬□¬□P) i.e.{} ? (◊□¬P → ◊□¬□P)
or {} ? (◊□¬P → ¬□◊□P) or {} ? (¬□◊P → ◊□¬□P) or {} ? (¬□◊P → ¬□◊□P)
68. S4 ⊦ {} ? (¬¬□¬□¬□¬P → ¬□¬□¬□P)i.e. {} ? (¬◊□¬□¬P → ◊□¬□P)
or {} ? (¬◊□¬□¬P → ¬□◊□P) or {} ? (¬¬□◊□¬P → ◊□¬□P)
or {} ? (¬¬□◊□¬P → ¬□◊□P) or {} ? (¬¬□¬□◊P → ◊□¬□P)
or {} ? (¬¬□¬□◊P → ¬□◊□P) or {} ? (¬◊□◊P → ◊□¬□P)
or {} ? (¬◊□◊P → ¬□◊□P)
Proofs.
38. S4 ⊦ {P} ? (¬□¬P)
Y
{P; ¬P} ? ¬P
Y
□E
{P, □¬ P} ? P
{P, □¬P} ? ¬P
⊥I
{P,□¬ P} ? ⊥
¬I
{P}? ¬□¬ P
40. S4 ⊦{□P } ? (¬□¬□P)
Y
{□P; ¬□P} ? ¬□P
Y
{□P, □¬□P} ? □P
{□P, □¬□P} ? ¬□P
□E
⊥I
{□P, □¬□P} ? ⊥
¬I
{□P}? ¬□¬□ P
41. S4 ⊦{□P} ? (□¬□¬P)
Y
{□P; ¬□P }? ¬□P
Y
□E
{□P, □¬□P }? □P
{□P, □¬□P }? ¬□P
⊥I
{□P, □¬□P } ? ⊥
¬I
{□P}?¬□¬□P
42. S4 ⊦{□P } ? (¬□¬□¬□¬P)
Y
Y
{□P; ¬P} ? ¬P
{□¬ P; P } ? P
□E
{□P, □¬P} ? P
□E
{□P, □¬P} ? ¬P
⊥I
Y
{□P,□¬ P} ? ⊥
¬I
{□P} ? ¬□¬P
□I
{□P} ? □¬□¬P
{□P; ¬□¬□¬P} ? ¬□¬□¬P
□E
{□P, □¬□¬□¬P} ? ¬□¬□¬P
⊥I
{□P,□¬□¬□¬P} ? ⊥
¬I
{□P} ? ¬□¬□¬□¬P
43. S4 ⊦{□P } ? (□¬□¬□P)
Y
{□P; ¬□P }? ¬□P
□E
Y
{□P; □¬□P } ? ¬□P
{□P; ¬□P }? □P
⊥I
{□P, □¬□P } ? ⊥
¬I
{□P} ? ¬□¬□P
□I
{□P} ? □¬□¬□P
44. S4 ⊦{¬□¬□P } ? (¬□¬P)
Y
Y
{ □¬P; P }? P
{□P;¬P }? ¬P
□E
□E
{□¬P ,□P} ? P
{□P, □¬P } ? ¬P
⊥I
{□¬P ,□P} ? ⊥
¬I
{□¬P} ? ¬□P
Y
□I
{¬□¬□P, □¬P} ? ¬□¬□P
{¬□¬□P, □¬P} ? □¬□P
⊥I
{¬□¬□P, □¬P} ? ⊥
¬I
{¬□¬□P} ? ¬□¬P
45. S4 ⊦{□¬□¬P } ? (¬□¬P)
Y
{;¬□¬P} ? ¬□¬P
□E
{□¬□¬P} ? ¬□¬P
46. S4 ⊦{¬□¬□¬□¬P } ? (¬□¬P)
Y
{□¬P; ¬□¬P } ? ¬□¬P
Y
□E
{□¬P, □¬□¬P } ? ¬□¬P
{□¬P; ¬□¬P } ? □¬P
⊥I
{□¬P, □¬□¬P } ? ⊥
¬I
{□¬P} ? ¬□¬□¬P
Y
□I
{¬□¬□¬□¬P,□¬P} ?
□¬□¬□¬P
{¬□¬□¬□¬P, □¬P } ? ¬□¬□¬□¬P
⊥I
□¬□P
{¬□¬□¬□¬P, □¬P} ? ⊥
¬I
{¬□¬□¬□¬P} ? ¬□¬P
47. S4 ⊦ {□¬□¬□P } ? (¬□¬P)
Y
Y
{□P; ¬P } ? ¬P
{□¬P; P} ? P
□E
□E
{□¬P, □P} ? ¬P
{□¬P, □P} ? P
⊥I
{□¬P, □P} ? ⊥
Y
¬I
{□¬P; ¬□¬□P } ? ¬□¬□P
{□¬P} ? ¬□P
□E
□I
{□¬□¬□P, □¬P} ? ¬□¬□P
{¬□¬□P, □¬P} ? □¬□P
⊥I
{□¬□¬□P, □¬P} ? ⊥
¬I
{□¬□¬□P} ? ¬□¬P
48. S4 ⊦{¬□¬□P } ? (¬□¬□¬□¬P)
Y
Y
{□P; ¬P } ? ¬P
{□¬P; P} ? P
{ □P, □¬P} ? P
□E
□E
{□¬P, □P } ? ¬P
⊥I
{□P, □¬P} ? ⊥
¬I
{□P} ? ¬□¬P
Y
□I
{□¬□¬□¬P, □P} ? □¬□¬P
{□¬□¬□¬P, □¬P} ? ¬□¬□¬P
⊥I
{□¬□¬□¬P, □P} ? ⊥
¬I
{□¬□¬□¬P} ? ¬□P
Y
□I
{¬□¬□P, □¬□¬□¬P} ? □¬□P
{¬□¬□P, □¬□¬□¬P} ? ¬□¬□P
⊥I
{¬□¬□P, □¬□¬□¬P} ? ⊥
¬I
{¬□¬□P} ? ¬□¬□¬□¬P
49. S4 ⊦{□¬□¬□P } ? (¬□¬□P)
Y
{; ¬□¬□P} ? ¬□¬□P
□E
{□¬□¬□P} ? ¬□¬□P
50. S4 ⊦{□¬□¬P } ? (¬□¬□¬□¬P)
Y
{□¬□¬P; ¬□¬□¬P } ? ¬□¬□¬P
Y
□¬□¬P¬□¬P
□E
{□¬□¬P, □¬□¬□¬P } ? ¬□¬□¬P
{□¬□¬P, ¬□¬□¬P } ? □¬□¬P
⊥I
□¬□¬P¬□¬P
{□¬□¬P, □¬□¬□¬P } ? ⊥
¬I
{□¬□¬P} ? ¬□¬□¬□¬P
51. S4 ⊦ {□¬□¬□P } ? (□¬□¬P)
Y
Y
{□¬P; P } ? P
{P; □¬P } ? ¬P
□E
□E
{□P, □¬P } ? P
{□P, □¬P } ? ¬ P
⊥I
{□¬P, □P} ? ⊥
¬I
{□¬P} ? ¬□P
Y
□I
{¬□¬□P, □¬P} ? □¬□P
{¬□¬□P, □¬P} ? ¬□¬□P
⊥I
{¬□¬□P, □¬P} ? ⊥
¬I
{¬□¬□P} ? ¬□¬P
□I
{□¬□¬□P} ? □¬□¬P
52. S4 ⊦{□¬□¬□P } ? (¬□¬□¬□¬P)
Y
Y
{□P; ¬P }? ¬P
{□¬P; P }? P
□E
□E
{□P, □ ¬P }? ¬P
{□P, □ ¬P }? P
⊥I
{□¬P, □P} ? ⊥
¬I
Y
{□¬P}?¬□P
{□¬P; ¬□¬□P } ? ¬□¬□P
□I
{□¬□¬□P, □¬P} ? □¬□P
□E
{□¬□¬□P, □¬P} ? ¬□¬□P
⊥I
Y
{□¬□¬□P, □¬P} ? ⊥
¬I
{□¬□¬□P;
{□¬□¬□P} ? ¬□¬P
¬□¬□¬P } ? ¬□¬□¬P
□E
□I
{□¬□¬□P, □¬□¬□¬P } ? □¬□¬P
{□¬□¬□P,□¬□¬□¬P } ? ¬□¬□¬P
⊥I
{□¬□¬□P, □¬□¬□¬P } ? ⊥
¬I
{□¬□¬□P} ? ¬□¬□¬□¬P
Proofs of a few theorems that are propositional variants of the reduction rules.
69. S4 ⊦{¬□¬P} ? (¬□□¬P)
Y
{¬□¬P; □¬P } ? □¬P
Y
□E
{¬□¬P; □□¬P } ? □¬P
{¬□¬P; □¬P } ? ¬□¬P
⊥I
{¬□¬P, □□¬P } ? ⊥
¬I
{¬□¬P}?¬□□¬P
70. S4 ⊦ {¬□□¬P} ? (¬□¬P)
Y
{ □¬P } ? □¬P
Y
□I
{¬□□¬P, □¬P } ? ¬□□¬P
{¬□□¬P, □¬P } ? □□¬P
⊥I
{¬□□¬P, □¬P } ? ⊥
¬I
{¬□□¬P}?¬□¬P
{¬□□¬P} ? ¬□¬P
IV Miscellaneous
These examples show the interaction of box and negation rules with the other connectives.
71. S4 ⊦ {R, □S} ? □(R→S),
Y
{R; S} ? S
I
{;S} ? (RS)
□E
{□S} ? (RS)
□I
{R, □S} ? □(RS)
72. S4 ⊦ {□R, □S} ? □(RVS)
Y
{□R; S}? (RVS)
□E
{□R, □S}? (RVS)
□I
{□R, □S}? □(RVS)
Modification of distributivity
73. S4 ⊦{□(B&C)} ? (□B&□C)
Y
Y
{;(B&C), B}? B
{;(B&C), C}? C
&E
&E
{;(B&C)}? B
{;(B&C)}? C
□E
□E
)
{□(B&C)}? B
{□(B&C)}? C
□I
□I
{)□(B&C)}? □B
{□(B&C)}? □C
&I
{□(B&C)}? □B&□C
74. S4 ⊦ {□P} ? □¬□¬P
Y
Y
{□¬P; P} ? P
{□P; ¬P} ? ¬P
□E
□E
{□¬P, □P } ? P
{□P, □¬P} ? ¬P
⊥I
{□P,□¬ P} ? ⊥
¬I
{□P}? ¬□¬ P
□I
{□P}? □¬□¬ P
75.
S4 ⊦{□¬P} ? ¬□P
Y
Y
{□¬P; P }? P
{□P; ¬P }? ¬P
□E
□E
{□P, □ ¬P }? P
{□P, □ ¬P }? ¬P
⊥I
{□¬P ,□P} ? ⊥
¬I
{□¬P}?¬□P
76. S4 ⊦ {□(R→S), □R, R} ? □S
Y
Y
{;R→S,R,S}? S
{;R→S,R}? R
E
{;R→S,R}? S
□E
{□(R→S);R}?S
□E
□(AA)
{□(R→S),□R}?S
□I
(AA)
{□□(R
→S),□R,R}?□S
□(AA)
77. S4 ⊦ {□P, ¬P} ? Q
Y
{ ¬P, ¬□Q ; P }? P
Y
□E
{ ¬P, ¬□Q, □P }? P
{ ¬P, ¬□Q ; P }? ¬P
⊥I
□(AA)
{□P, ¬P, ¬□Q }? ⊥
¬E
{□P,
¬P} ? □Q
□(AA)
The following propositional variants were also proved.
78.
S4 ⊦{A, □(B→C), (A→□B)} ? □C
79.
S4 ⊦ {□P} ? □P,
80.
S4 ⊦ {□R & □S} ? □R,
81.
S4 ⊦{□R, □S} ? □(R→S),
82.
S4 ⊦ {□R, □S} ? □□(R→S),
83.
S4 ⊦{□R, □S} ? □(R&S),
84.
S4 ⊦ {□((R↔S)&Q), □(RVZ),□¬Z} ? □(SVP)
85.
S4 ⊦{¬□A→B, ¬□A→¬B} ? □A
86.
S4 ⊦ {¬□A→B, ¬□A→¬B, □(A→C)} ? □C
87.
S4
88.
S4 ⊦ {□P, ¬P} ? Q,
89.
S4 ⊦{□R, □S} ? □(R&S),
90.
S4 ⊦ □(□P→P)
91.
S4 ⊦ P→□(□P→P)
⊦ {P} ? ¬□¬P
Unprovable statements:
The distinct modalities are unboxed, □, ◊, ◊□, □◊, ◊□◊, □◊□ and their negations
P is distinct from other modalities
1. S4
⊬
2. S4
⊬ {} ? (P ↔ ¬□¬P),
3. S4
⊬
{} ? (P ↔ ¬□¬□P), i.e.{} ? (P ↔ ◊□P)
4. S4
⊬
{} ? (P ↔ □¬□¬P), i.e.{} ? (P ↔ □◊P)
5. S4
⊬ {} ? (P ↔ ¬□¬□¬□¬P), i.e. {} ? (P ↔ ◊□¬□¬P) or {} ? (P ↔ ¬□◊□¬P) or
{} ? (P ↔ □P),
i.e.{} ? (P ↔ ◊P),
{} ? (P ↔ ¬□¬□◊P) or {} ? (P ↔ ◊□◊P)
6. S4
⊬ {} ? (P ↔ □¬□¬□P), i.e. {} ? (P ↔ □◊□P)
7. S4
⊬ {} ? (P ↔ ¬P),
8. S4
⊬ {} ? (P ↔ ¬□P),
9. S4
⊬ {} ? (P ↔ ¬¬□¬P), i.e.{} ? (P ↔ ¬◊P)
10. S4
⊬ {} ? (P ↔ ¬¬□¬□P), i.e. {} ? (P ↔ ¬◊□P)
11. S4
⊬ {} ? (P ↔ ¬□¬□¬P), i.e. {} ? (P ↔ ¬□◊P) or {} ? (P ↔ ◊□¬P)
12. S4
⊬ {} ? (P ↔ ¬¬□¬□¬□¬P), i.e. {} ? (P ↔ ¬◊□¬□¬P) or {} ? (P ↔ ¬¬□◊□¬P) or
{} ? (P ↔ ¬¬□¬□◊P) or {} ? (P ↔ ¬◊□◊P)
13. S4
⊬ {} ? (P ↔ ¬□¬□¬□P), i.e. {} ? (P ↔ ◊□¬□P) or {} ? (P ↔ ¬□◊□P)
□P is distinct from other modalities
14. S4
⊬ {} ? (□P ↔ ¬□¬P), i.e.{} ? (□P ↔ ◊P)
15. S4
⊬ {} ? (□P ↔ ¬□¬□P), i.e. {} ? (□P ↔ ◊□P)
16. S4
⊬
17. S4
⊬ {} ? (□P ↔ ¬□¬□¬□¬P), i.e. {} ? (□P ↔ ◊□¬□¬P) or {} ? (□P ↔ ¬□◊□¬P) or
{} ? (□P ↔ □¬□¬P), i.e. {} ? (□P ↔ □◊P)
{} ? (□P ↔ ¬□¬□◊P) or {} ? (□P ↔ ◊□◊P)
18. S4
⊬ {} ? (□P ↔ □¬□¬□P), i.e. {} ? (□P ↔ □◊□P)
19. S4
⊬ {} ? (□P ↔ ¬P)
20. S4
⊬ {} ? (□P ↔ ¬□P)
21. S4
⊬ {} ? (□P ↔ ¬¬□¬P), i.e.{} ? (□P ↔ ¬◊P)
22. S4
⊬ {} ? (□P ↔ ¬¬□¬□P), i.e.{} ? (□P ↔ ¬¬◊□P)
23. S4
⊬ {} ? (□P ↔ ¬□¬□¬P), i.e. {} ? (□P ↔ ¬□◊P) or {} ? (□P ↔ ◊□¬P)
24. S4
⊬ {} ? (□P ↔ ¬¬□¬□¬□¬P), i.e.{} ? (□P ↔¬◊□¬□¬P) or {} ? (□P ↔ ¬¬□◊□¬P) or
{} ? (□P ↔ ¬¬□¬□◊P) or {} ? (□P ↔ ¬◊□◊P)
25. S4
⊬ {} ? (□P ↔ ¬□¬□¬□P), i.e.{} ? (□P ↔ ◊□¬□P) or {} ? (□P ↔ ¬□◊□P)
◊P is distinct from other modalities
26. S4
⊬ {} ? (¬□¬P ↔ ¬□¬□P), i.e.{} ? (◊P ↔ ◊□P)
27. S4
⊬ {} ? (¬□¬P ↔ □¬□¬P),
28. S4
⊬ {} ? (¬□¬ P↔ ¬□¬□¬□¬P), i.e. {} ? (◊P ↔ ◊□¬□¬P) or {} ? (◊P ↔ ¬□◊□¬P)
i.e.{} ? (◊P ↔ □◊P)
or {} ? (◊P ↔ ¬□¬□◊P) or {} ? (◊P ↔ ◊□◊P)
29. S4
⊬ {} ? (¬□¬P ↔ □¬□¬□P), i.e.{} ? (◊P ↔ □◊□P)
30. S4
⊬ {} ? (¬□¬P ↔ ¬P),
31. S4
⊬ {} ? (¬□¬P ↔ ¬□P), i.e.{} ? (◊P ↔ ¬□P)
32. S4
⊬
33. S4
⊬ {} ? (¬□¬P ↔ ¬¬□¬□P), i.e.{} ? (◊P ↔ ¬◊□P)
34. S4
⊬ {} ? (¬□¬P ↔ ¬□¬□¬P), i.e.{} ? (◊P ↔ ¬□◊P) or {} ? (◊P ↔ ◊□¬P)
35. S4
⊬ {} ? (¬□¬P ↔ ¬¬□¬□¬□¬P), i.e.{} ?( ◊P ↔¬◊□¬□¬P) or {} ?( ◊P↔¬¬□◊□¬P)
i.e.{} ? (◊P ↔ ¬P)
{} ? (¬□¬P ↔ ¬¬□¬P), i.e.{} ? (◊P ↔ ¬◊P)
or {} ? (◊P ↔ ¬¬□¬□◊P) or {} ? (◊P ↔ ¬◊□◊P)
36. S4
⊬ {} ? (¬□¬P ↔ ¬□¬□¬□P), i.e.{} ? (◊P ↔ ◊□¬□P) or {} ? (◊P ↔ ¬□◊□P)
□◊P is distinct from other modalities
37. S4
⊬ {} ?(¬□¬□P ↔ □¬□¬P), i.e.{} ?( ◊□P ↔ □◊P)
38. S4
⊬ {} ? (¬□¬□P ↔ ¬□¬□¬□¬P), i.e.{}?( ◊□P ↔ ◊□¬□¬P) or {}?( ◊□P↔¬□◊□¬P)
or {}?(◊□P ↔ ¬□¬□◊P) or {}? (◊□P↔ ◊□◊P)
39. S4
⊬ {} ?(¬□¬□P ↔ □¬□¬□P), i.e.{} ?( ◊□P ↔ □◊□P)
40. S4
⊬ {} ? (¬□¬□P ↔ ¬P),
41. S4
⊬ {} ? (¬□¬□P ↔ ¬□P), i.e.{} ? (◊□P ↔ ¬□P)
42. S4
⊬ {} ? (¬□¬□P ↔ ¬¬□¬P), i.e.{} ? (◊□P ↔ ¬◊P)
43. S4
⊬ {} (¬□¬□P ↔ ¬¬□¬□P), i.e.{} (◊□P ↔ ¬◊□P)
44. S4
⊬ {} ? (¬□¬□P ↔ ¬□¬□¬P), i.e.{} ? (◊□P ↔ ¬□◊P) or {} ? (◊□P ↔ ◊□¬P)
45. S4
⊬ {} ?(¬□¬□P ↔ ¬¬□¬□¬□¬P), i.e.{}?( ◊□P↔¬◊□¬□¬P) or {}?(◊□P↔¬¬□◊□¬P)
i.e.{} ? (◊□P ↔ ¬P)
or {}?(◊□P ↔ ¬¬□¬□◊P) or {}?(◊□P ↔ ¬◊□◊P)
46. S4
⊬ {} ? (¬□¬□P ↔ ¬□¬□¬□P), i.e.{} ? (◊□P ↔ ◊□¬□P) or {} ? (◊□P ↔ ¬□◊□P)
□◊P is distinct from other modalities
47. S4
⊬
{} ? (□¬□¬P ↔ ¬□¬□¬□¬P), i.e.{}?( □◊P ↔ ◊□¬□¬P) or {}?(□◊P↔¬□◊□¬P)
or {}?(□◊P ↔ ¬□¬□◊P) or {}? (□◊P↔ ◊□◊P)
48. S4
⊬ {} ? (□¬□¬P ↔ □¬□¬□P), i.e.{} ? (□◊P ↔ □◊□P)
49. S4
⊬ {} ? (□¬□¬P ↔ ¬P), i.e.{} ? (□◊P ↔ ¬P)
50. S4
⊬ {} ? (□¬□¬P ↔ ¬□P), i.e.{} ? (□◊P ↔ ¬□P)
51. S4
⊬ {} ? (□¬□¬P ↔ ¬¬□¬P), i.e.{} ? (□◊P ↔ ¬◊P)
52. S4
⊬ {} ? (□¬□¬P ↔ ¬¬□¬□P), i.e.{} ? (□◊P ↔ ¬◊□P)
53. S4
⊬ {} ? (□¬□¬P ↔ ¬□¬□¬P), i.e.{} ? (□◊P ↔ ¬□◊P) or {} ? (□◊P ↔ ◊□¬P)
54. S4
⊬ {} ? (□¬□¬P ↔ ¬¬□¬□¬□¬P), i.e.{}?(□◊P↔¬◊□¬□¬P) or {}?(□◊P↔¬¬□◊□¬P)
or {}?(□◊P ↔ ¬¬□¬□◊P) or {}?(□◊P ↔ ¬◊□◊P)
55. S4
⊬ {} ? (□¬□¬P ↔ ¬□¬□¬□P), i.e.{} ? (□◊P ↔ ◊□¬□P) or {} ? (□◊P ↔ ¬□◊□P)
◊□◊P is distinct from other modalities
56. S4
⊬
{} ? (¬□¬□¬□¬P ↔ □¬□¬□P), i.e.{}?(◊□◊P ↔ □◊□P) or {}?(◊□¬□¬P ↔ □◊□P)
or {} ? (¬□¬□◊P ↔ □◊□P) or {} ? (¬□◊□¬P ↔ □◊□P)
57. S4
⊬
{} ? (¬□¬□¬□¬P ↔ ¬P), i.e.{}?(◊□◊P ↔ ¬P) or {}?(◊□¬□¬P ↔ ¬P)
or {} ? (¬□¬□◊P ↔ ¬P) or {} ? (¬□◊□¬P ↔ ¬P)
58. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬□P), i.e.{}?(◊□◊P ↔ ¬□P) or {}?(◊□¬□¬P ↔ ¬□P)
or {} ? (¬□¬□◊P ↔ ¬□P) or {} ? (¬□◊□¬P ↔ ¬□P)
59. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬¬□¬P), i.e.{}?(◊□◊P ↔ ¬◊P) or {}?(◊□¬□¬P ↔ ¬◊P)
or {} ? (¬□¬□◊P ↔ ¬◊P) or {} ? (¬□◊□¬P ↔ ¬◊P)
60. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬¬□¬□P), i.e.{}?(◊□◊P ↔ ¬◊□P) or {}?(◊□¬□¬P ↔ ¬◊□P)
or {} ? (¬□¬□◊P ↔ ¬◊□P) or {} ? (¬□◊□¬P ↔ ¬◊□P)
61. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬□¬□¬P), i.e.{}?(◊□◊P ↔ ¬◊□P) or {}?(◊□¬□¬P ↔ ¬◊□P)
or {} ? (¬□¬□◊P ↔ ¬◊□P) or {} ? (¬□◊□¬P ↔ ¬◊□P)
62. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬¬□¬□¬□¬P), i.e.{}?(◊□◊P ↔ ¬◊□¬□¬P)
or {}?(◊□¬□¬P↔ ¬◊□¬□¬P) or {} ? (¬□¬□◊P ↔ ¬◊□¬□¬P) or {} ? (¬□◊□¬P ↔ ¬◊□¬□¬P)
or {}?(◊□◊P ↔ □◊□P) or {}?(◊□¬□¬P↔ ¬¬□◊□¬P) or {} ? (¬□¬□◊P ↔ ¬¬□◊□¬P)
or {} ? (¬□◊□¬P ↔ ¬¬□◊□¬P) or {}?(◊□◊P ↔ □◊□P) or {}?(◊□¬□¬P↔ ¬¬□¬□◊P)
or {} ? (¬□¬□◊P ↔ ¬¬□¬□◊P) or {} ? (¬□◊□¬P ↔ ¬¬□¬□◊P) or {}?(◊□◊P ↔ □◊□P)
or {}?(◊□¬□¬P↔ ¬◊□◊P) or {} ? (¬□¬□◊P ↔ ¬◊□◊P) or {} ? (¬□◊□¬P ↔ ¬◊□◊P)
63. S4
⊬ {} ? (¬□¬□¬□¬P ↔ ¬□¬□¬□P), i.e.{}?(◊□◊P ↔ ◊□¬□P)
or {}? (◊□¬□¬P ↔ ◊□¬□P) or {} ? (¬□¬□◊P ↔ ◊□¬□P) or {} ? (¬□◊□¬P ↔ ◊□¬□P)
or {}?(◊□◊P ↔ ¬□◊□P) or {}? (◊□¬□¬P ↔ ¬□◊□P) or {} ? (¬□¬□◊P ↔ ¬□◊□P)
or {} ? (¬□◊□¬P ↔ ◊□¬□P)
64. S4
⊬
65. S4
⊬ {} ? (¬P ↔ ¬¬□¬P), i.e.{} ? (¬P ↔ ¬◊P)
66. S4
⊬ {} ? (¬P ↔ ¬¬□¬□P), i.e.{} ? (¬P ↔ ¬◊□P)
67. S4
⊬ {} ? (¬P ↔ ¬□¬□¬P), i.e.{} ? (¬P ↔ ¬□◊P) or {} ? (¬P ↔ ◊□¬P)
68. S4
⊬ {} ? (¬P ↔ ¬¬□¬□¬□¬P), i.e.{}?(¬P↔¬◊□¬□¬P) or {}?(¬P↔¬¬□◊□¬P)
or {}?(¬P ↔ ¬¬□¬□◊P) or {}?(¬P ↔ ¬◊□◊P)
69. S4
⊬ {} ? (¬P ↔ ¬□¬□¬□P), i.e.{} ? (¬P ↔ ◊□¬□P) or {} ? (¬P ↔ ¬□◊□P)
¬□P is distinct from other modalities
70. S4
⊬ {} ? (¬□P ↔ ¬¬□¬P), i.e.{} ? (¬□P ↔ ¬◊P)
71. S4
⊬ {} ? (¬□P ↔ ¬¬□¬□P), i.e.{} ? (¬□P ↔ ¬◊□P)
72. S4
⊬ {} ? (¬□P ↔ ¬□¬□¬P), i.e.{} ? (¬□P ↔ ¬□◊P) or {} ? (¬□P ↔ ◊□¬P)
73. S4
⊬ {} ? (¬□P ↔ ¬¬□¬□¬□¬P), i.e.{}?(¬□P↔¬◊□¬□¬P) or {}?(¬□P↔¬¬□◊□¬P)
or {}?(¬□P ↔ ¬¬□¬□◊P) or {}?(¬□P ↔ ¬◊□◊P)
74. S4
⊬ {} ? (¬□P ↔ ¬□¬□¬□P), i.e.{} ? (¬□P ↔ ◊□¬□P) or {} ? (¬□P ↔ ¬□◊□P)
75. S4
⊬ {} ? (¬¬□¬P ↔ ¬¬□¬□P), i.e.{} ? (¬◊P ↔ ¬◊□P)
76. S4
⊬ {} ? (¬¬□¬P ↔ ¬□¬□¬P), i.e.{} ? (¬◊P ↔ ¬□◊P) or {} ? (¬◊P ↔ ◊□¬P)
77. S4
⊬ {} ? (¬¬□¬P ↔ ¬¬□¬□¬□¬P), i.e.{}?(¬◊P↔¬◊□¬□¬P) or {}?(¬◊P↔¬¬□◊□¬P)
or {}?(¬◊P ↔ ¬¬□¬□◊P) or {}?(¬◊P ↔ ¬◊□◊P)
78. S4
⊬
{} ? (¬¬□¬P ↔ ¬□¬□¬□P), i.e.{} ? (¬◊P ↔ ◊□¬□P) or {} ? (¬◊P ↔ ¬□◊□P)
¬◊□P is distinct from other modalities
79. S4
⊬ {} ? (¬¬□¬□P ↔ ¬□¬□¬P), i.e.{} ? (¬◊□P ↔ ¬□◊P) or {} ? (¬◊□P ↔ ◊□¬P)
80. S4
⊬ {} ? (¬¬□¬□P ↔ ¬¬□¬□¬□¬P), i.e.{}?(¬◊□P↔¬◊□¬□¬P)
or {}?(¬◊□P↔¬¬□◊□¬P) or {}?(¬◊□P ↔ ¬¬□¬□◊P) or {}?(¬◊□P ↔ ¬◊□◊P)
81. S4
⊬ {} ? (¬¬□¬□P ↔ ¬□¬□¬□P), i.e.{} ? (¬◊□P ↔ ◊□¬□P) or {} ? (¬◊□P ↔¬□◊□P)
82. S4
⊬ {} ? (¬□¬□¬P ↔ ¬¬□¬□¬□¬P), i.e.{}?(◊□¬P↔¬◊□¬□¬P)
or {}?(◊□¬P↔¬¬□◊□¬P) or {}?(◊□¬P ↔ ¬¬□¬□◊P) or {}?(◊□¬P ↔ ¬◊□◊P)
or {}?(¬□◊P↔¬◊□¬□¬P) or {}?(¬□◊P↔¬¬□◊□¬P) or {}?(¬□◊P ↔ ¬¬□¬□◊P)
or {}?(¬□◊P ↔ ¬◊□◊P)
83. S4
⊬ {} ? (¬□¬□¬P ↔ ¬□¬□¬□P), i.e.{} ? (◊□¬P ↔ ◊□¬□P) or {} ? (◊□¬P ↔¬□◊□P)
or {} ? (¬□◊P ↔ ◊□¬□P) or {} ? (¬□◊P ↔¬□◊□P)
84. S4
⊬ {} ? (¬¬□¬□¬□¬P ↔ ¬□¬□¬□P), i.e.{}?(¬◊□◊P ↔ ◊□¬□P)
or {}? (¬◊□¬□¬P ↔ ◊□¬□P) or {} ? (¬¬□¬□◊P ↔ ◊□¬□P)
or {} ? (¬¬□◊□¬P ↔ ◊□¬□P) or {}?(¬◊□◊P ↔ ¬□◊□P) or {}? (¬◊□¬□¬P ↔ ¬□◊□P)
or {} ? (¬¬□¬□◊P ↔ ¬□◊□P) or {} ? (¬¬□◊□¬P ↔ ¬□◊□P)
The following modalities are totally distinct. We have shown above that they are not equivalent; so here
we only need to show that the one-sided implications are unprovable.
85. S4
⊬ {} ? (P → ¬□¬□P), i.e.{} ? (P → ◊□P)
86. S4
⊬ {} ? (P → □¬□¬P), i.e.{} ? (P → □◊P)
87. S4
⊬ {} ? (P → ¬□¬□¬□¬P), i.e. {} ? (P → ◊□¬□¬P) or {} ? (P → ¬□◊□¬P) or
{} ? (P → ¬□¬□◊P) or {} ? (P → ◊□◊P)
88. S4
⊬ {} ? (P → □¬□¬□P), i.e.{} ? (P → □◊□P)
89. S4
⊬ {} ? (¬□¬□P → □¬□¬P), i.e.{} ? (◊□P → □◊P)
90. S4
⊬
91. S4
⊬ {} ? (¬¬□¬□P → □¬□¬P), i.e.{} ? (¬◊□P → □◊P)
92. S4
⊬ {} ? (¬¬□¬□P → □¬□¬P), i.e.{} ? (¬◊□P → □◊P)
93. S4
⊬ {} ? (P → ¬¬□¬□P), i.e.{} ? (P → ¬◊□P)
94. S4
⊬ {} ? (P → ¬□¬□¬P), i.e.{} ? (P → ◊□¬P) or {} ? (P → ¬□◊P)
95. S4
⊬ {} ? (P → ¬¬□¬□¬□¬P), i.e.{} ? (P → ¬◊□¬□¬P) or {} ? (P → ¬¬□◊□¬P) or
{} ? (P → ¬¬□¬□◊P) or {} ? (P → ¬◊□◊P)
96. S4
⊬ {} ? (P → ¬□¬□¬□P), i.e.{} ? (P → ◊□¬□P) or {} ? (P → ¬□◊□P)
97. S4
⊬ {} ? (¬□¬□P → ¬□¬□¬P), i.e.{} ? (◊□P → ◊□¬P) or {} ? (◊□P → ¬□◊P)
98. S4
⊬ {} ? (¬P → ¬¬□¬□P), i.e.{} ? (¬P → ¬◊□P)
99. S4
⊬ {} ? (¬P → ¬□¬□¬P), i.e.{} ? (¬P → ◊□¬P) or {} ? (¬P → ¬□◊P)
100. S4
⊬
{} ? (¬P → ¬¬□¬□¬□¬P), i.e.{} ? (¬P → ¬◊□¬□¬P) or {} ? (¬P → ¬¬□◊□¬P)
or {} ? (¬P → ¬¬□¬□◊P) or {} ? (¬P → ¬◊□◊P)
101. S4
⊬ {} ? (¬P → ¬□¬□¬□P), i.e.{} ? (¬P → ◊□¬□P) or {} ? (¬P → ¬□◊□P)
102. S4
⊬ {} ? (¬¬□¬□P → ¬□¬□¬P), i.e.{} ? (¬◊□P → ◊□¬P) or {} ? (¬◊□P → ¬□◊P)
103. S4
⊬ {} ? (¬¬□¬□P → ¬□¬□¬P), i.e.{} ? (¬◊□P → ◊□¬P) or {} ? (¬◊□P → ¬□◊P)
104. S4
⊬ {} ? (P → ¬P),
105. S4
⊬ {} ? (P → ¬□P),
106. S4
⊬ {} ? (P → ¬¬□¬P), i.e.{} ? (P → ¬◊P)
107. S4
⊬ {} ? (¬P → ¬¬□¬□P), i.e.{} ? (¬P → ¬◊□P)
108. S4
⊬ {} ? (¬P → ¬□¬□¬P), i.e.{} ? (¬P → ◊□¬P) or {} ? (¬P → ¬□◊P)
109. S4
⊬ {} ? (¬P → ¬¬□¬□¬□¬P), i.e.{} ? (¬P → ¬◊□¬□¬P) or {} ? (¬P → ¬¬□◊□¬P)
or {} ? (¬P → ¬¬□¬□◊P) or {} ? (¬P → ¬◊□◊P)
110. S4
⊬ {} ? (¬P → ¬□¬□¬□P), i.e.{} ? (¬P → ◊□¬□P) or {} ? (¬P → ¬□◊□P)
111. S4
⊬ {} ? (□P → ¬P),
112. S4
⊬ {} ? (□P → ¬□P),
113. S4
⊬ {} ? (□P → ¬¬□¬P), i.e.{} ? (□P → ¬◊P)
114. S4
⊬ {} ? (□P → ¬¬□¬□P), i.e.{} ? (□P → ¬◊□P)
115. S4
⊬ {} ? (□P → ¬□¬□¬P), i.e.{} ? (□P → ◊□¬P) or {} ? (□P → ¬□◊P)
116. S4
⊬ {} ? (□P → ¬¬□¬□¬□¬P), i.e.{} ? (□P → ¬◊□¬□¬P) or {} ? (□P → ¬¬□◊□¬P)
or {} ? (□P → ¬¬□¬□◊P) or {} ? (□P → ¬◊□◊P)
117. S4
⊬ {} ? (□P → ¬□¬□¬□P), i.e.{} ? (□P → ◊□¬□P) or {} ? (□P → ¬□◊□P)
118. S4
⊬ {} ? (¬□¬P → ¬P), i.e.{} ? (◊P → ¬P)
119. S4
⊬ {} ? (¬□¬P → ¬□P), i.e.{} ? (◊P → ¬□P)
120. S4
⊬ {} ? (¬□¬P → ¬¬□¬P), i.e.{} ? (◊P → ¬◊P)
121. S4
⊬ {} ? (¬□¬P → ¬¬□¬□P), i.e.{} ? (◊P → ¬◊□P)
122. S4
⊬ {} ? (¬□¬P → ¬□¬□¬P), i.e.{} ? (◊P → ◊□¬P) or {} ? (◊P → ¬□◊P)
123. S4
⊬ {} ? (¬□¬P → ¬¬□¬□¬□¬P), i.e.{} ? (◊P → ¬◊□¬□¬P) or {} ? (◊P → ¬¬□◊□¬P)
or {} ? (◊P → ¬¬□¬□◊P) or {} ? (◊P → ¬◊□◊P)
124. S4
⊬ {} ? (¬□¬P → ¬□¬□¬□P), i.e.{} ? (◊P → ◊□¬□P) or {} ? (◊P → ¬□◊□P)
125. S4
⊬ {} ? (¬□¬□P → ¬P), i.e.{} ? (◊□P → ¬P)
126. S4
⊬ {} ? (¬□¬□P→¬□P), i.e.{} ? (◊□P→¬□P)
127. S4
⊬ {} ? (¬□¬□P → ¬¬□¬P), i.e.{} ? (◊□P → ¬◊P)
128. S4
⊬ {} ?(¬□¬□P → ¬¬□¬□P), i.e.{} ?( ◊□P → ¬◊□P)
129. S4
⊬ {} ? (¬□¬□P → ¬□¬□¬P), i.e.{} ? (◊□P → ◊□¬P) or {} ? (◊□P → ¬□◊P)
130. S4
⊬ {} ? (¬□¬□P → ¬¬□¬□¬□¬P), i.e.{} ? (◊□P → ¬◊□¬□¬P) or {} ? (◊□P → ¬¬□◊□¬P)
or {} ? (◊□P → ¬¬□¬□◊P) or {} ? (◊□P → ¬◊□◊P)
131. S4
⊬ {} ? ( ¬□¬□P → ¬□¬□¬□P), i.e.{} ? (◊□P → ◊□¬□P) or {} ? (◊□P → ¬□◊□P)
132. S4
⊬ {} ? (□¬□¬P → ¬P), i.e.{} ? (□◊P → ¬P)
133. S4
⊬ {} ? (□¬□¬P → ¬□P), i.e.{} ? (□◊P → ¬□P)
134. S4
⊬ {} ? (□¬□¬P → ¬¬□¬P), i.e.{} ? (□◊P → ¬◊P)
135. S4
⊬ {} ? (□¬□¬P → ¬¬□¬□P), i.e.{} ? (□◊P → ¬◊□P)
136. S4
⊬ {} ? (□¬□¬P → ¬□¬□¬P), i.e.{} ? (□◊P → ¬□◊P) or {} ? (□◊P → ◊□¬P)
137. S4
⊬ {} ? (□¬□¬P → ¬¬□¬□¬□¬P), i.e.{} ? (□◊P → ¬◊□¬□¬P) or {} ? (□◊P → ¬¬□◊□¬P)
or {} ? (□◊P → ¬¬□¬□◊P) or {} ? (□◊P → ¬◊□◊P)
138. S4
⊬ {} ? (□¬□¬P → ¬□¬□¬□P), i.e.{} ? (□◊P → ◊□¬□P) or {} ? (□◊P → ¬□◊□P)
139. S4
⊬ {} ? (¬□¬□¬□¬P → ¬P), i.e.{} ? (◊□¬□¬P → ¬P) or {} ? (¬□◊□¬P → ¬P)
or {} ? (¬□¬□◊P → ¬P) or {} ? (◊□◊P → ¬P)
140. S4
⊬ {} ? (¬□¬□¬□¬P → ¬□P), i.e.{} ? (◊□¬□¬P → ¬□P) or {} ? (¬□◊□¬P → ¬□P)
or {} ? (¬□¬□◊P → ¬□P) or {} ? (◊□◊P → ¬□P)
141. S4
⊬ {} ? (¬□¬□¬□¬P → ¬¬□¬P), i.e.{} ? (◊□¬□¬P → ¬◊P) or {} ? (¬□◊□¬P → ¬◊P)
or {} ? (¬□¬□◊P → ¬◊P) or {} ? (◊□◊P → ¬◊P)
142. S4
⊬ {} ? (¬□¬□¬□¬P → ¬¬□¬□P), i.e.{} ? (◊□¬□¬P → ¬◊□P) or {} ? (¬□◊□¬P → ¬◊□P)
or {} ? (¬□¬□◊P → ¬◊□P) or {} ? (◊□◊P → ¬◊□P)
143. S4
⊬ {} ? (¬□¬□¬□¬P → ¬□¬□¬P), i.e.{} ? (◊□¬□¬P → ◊□¬P) or {} ? (¬□◊□¬P → ◊□¬P)
or {} ? (¬□¬□◊P → ◊□¬P) or {} ? (◊□◊P → ◊□¬P) or {} ? (◊□¬□¬P → ¬□◊P)
or {} ? (¬□◊□¬P → ¬□◊P) or {} ? (¬□¬□◊P → ¬□◊P) or {} ? (◊□◊P → ¬□◊P)
144. S4
⊬ {} ? (¬□¬□¬□¬P → ¬¬□¬□¬□¬P), i.e.{} ? (◊□¬□¬P → ¬◊□¬□¬P)
or {} ? (¬□◊□¬P → ¬◊□¬□¬P) or {} ? (¬□¬□◊P → ¬◊□¬□¬P) or {} ? (◊□◊P → ¬◊□¬□¬P)
or {} ? (◊□¬□¬P → ¬¬□◊□¬P) or {} ? (¬□◊□¬P → ¬¬□◊□¬P) or {} ? (¬□¬□◊P → ¬¬□◊□¬P) or {} ?
(◊□◊P → ¬¬□◊□¬P) or {} ? (◊□¬□¬P → ¬¬□¬□◊P) or {} ? (¬□◊□¬P → ¬¬□¬□◊P)
or {} ? (¬□¬□◊P → ¬¬□¬□◊P) or {} ? (◊□◊P → ¬¬□¬□◊P) or {} ? (◊□¬□¬P → ¬◊□◊P)
or {} ? (¬□◊□¬P → ¬◊□◊P) or {} ? (¬□¬□◊P → ¬◊□◊P) or {} ? (◊□◊P → ¬◊□◊P)
145. S4
⊬ {} ? (¬□¬□¬□¬P → ¬□¬□¬□P), i.e.{} ? (◊□¬□¬P → ◊□¬□P) or {} ? (¬□◊□¬P → ◊□¬□P)
or {} ? (¬□¬□◊P → ◊□¬□P) or {} ? (◊□◊P → ◊□¬□P) or {} ? (◊□¬□¬P → ¬□◊□P)
or {} ? (¬□◊□¬P → ¬□◊□P) or {} ? (¬□¬□◊P → ¬□◊□P) or {} ? (◊□◊P → ¬□◊□P)
None of the following modalities is provable.
146. S4
⊬ {} ? P,
147. S4
⊬ {} ? □P,
148. S4
⊬ {} ? (¬□¬P), i.e.{} ? (◊P)
149. S4
⊬ {} ? (¬□¬□P), i.e.{} ? (◊□P)
150. S4
⊬ {} ? (□¬□¬P), i.e.{} ? (□◊P)
151. S4
⊬ {} ? (¬□¬□¬□¬P), i.e.{} ? (◊□¬□¬P) or {} ? (¬□◊□¬P) or {} ? (¬□¬□◊P)
or {} ? (◊□◊P)
152. S4
⊬ {} ? (□¬□¬□P), i.e.{} ? (□◊□P)
153. S4
⊬ {} ? (¬P),
154. S4
⊬ {} ? (¬□P),
155. S4
⊬ {} ? (¬¬□¬P), i.e.{} ? (¬◊P)
156. S4
⊬ {} ? (¬¬□¬□P), i.e.{} ? (¬◊□P)
157. S4
⊬ {} ? (¬□¬□¬P), i.e.{} ? (◊□¬P) or {} ? (¬□◊P)
158. S4
⊬ {} ? (¬¬□¬□¬□¬P), i.e.{} ? (¬◊□¬□¬P) or {} ? (¬¬□◊□¬P) or {} ? (¬¬□¬□◊P)
or {} ? (¬◊□◊P)
159. S4
⊬ {} ? (¬□¬□¬□P), i.e.{} ? (◊□¬□P) or {} ? (¬□◊□P)
Properties not required by an S4 model
(symmetry: S4 ⊬ {} ? (¬□¬P → □¬□¬P) ({} ? (◊P → □◊P)), and S4 ⊬ {} ? (P → □¬□¬P) ({} ? (P → □◊P)),
and S4 ⊬ {} ? (¬□¬P → ¬□¬□¬P) ({} ? (◊P → ◊□¬P) or {} ? (◊P → ¬□◊P)) are shown above)
160. S4
⊬ {} ? (□(□P→P) → □P), i.e. Löb's axiom (converse well-foundedness)
Linear ordering of the worlds is not enough to model S4
161. S4
⊬ {¬□¬P, ¬□¬¬P} ? Q,
i.e. {◊P, ◊¬P} ? Q
162. S4
⊬ {} ? (□(□(P→□P)→P)→P), i.e. reflexivity + transitivity + converse well-foundedness
163. S4
⊬ {} ? (□(□A→B) v □(□B→A))
Miscellaneous
164. S4
⊬
{(R→S), R} ? □S,
165. S4
⊬
{(R→S), □R} ? □S,
166. S4
⊬ {□(R→S), R} ? □S,
167. S4
⊬ {□R, S} ?
□(R→S),
168. S4
⊬ {□R, S} ?
□(R&S),
169. S4
⊬ {R, □S} ? □(R&S),
170. S4
⊬ {R, S} ? □(RvS),
171. S4
⊬ {□(BvC)} ? (□Bv□C),
172. S4
⊬ {Q→□Y} ? □(X&□(P&Q) → □Y),
173. S4
⊬ {□((R↔S)&Q), □(RvZ), ¬Z} ? □(SvP)
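The unprovability results above were obtained by AProS's search terminating without a proof. Since S4 is sound and complete for reflexive, transitive Kripke models, each of these statements also fails in some finite such model, so they can be double-checked semantically. The following is a minimal sketch of such a check (my own illustration, not part of AProS; the model, valuation and function names are mine): the two-world chain below is reflexive and transitive but not symmetric, and it falsifies, for instance, statements 85 (P → ◊□P) and 86 (P → □◊P) at world 0.

# A minimal Kripke model checker (my own sketch, not part of AProS).
# Formulas are nested tuples: ('var','P'), ('not',f), ('imp',f,g),
# ('box',f), ('dia',f).  The two worlds are 0 and 1.

R = {0: {0, 1}, 1: {1}}      # reflexive and transitive, but not symmetric
VAL = {'P': {0}}             # P holds at world 0 only

def holds(w, f):
    op = f[0]
    if op == 'var':
        return w in VAL[f[1]]
    if op == 'not':
        return not holds(w, f[1])
    if op == 'imp':
        return (not holds(w, f[1])) or holds(w, f[2])
    if op == 'box':
        return all(holds(v, f[1]) for v in R[w])
    if op == 'dia':
        return any(holds(v, f[1]) for v in R[w])
    raise ValueError(op)

P = ('var', 'P')
stmt85 = ('imp', P, ('dia', ('box', P)))   # P -> <>[]P
stmt86 = ('imp', P, ('box', ('dia', P)))   # P -> []<>P
print(holds(0, stmt85), holds(0, stmt86))  # prints False False: a countermodel

Other entries in the list can be checked in the same way by varying the model and the valuation.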
2. Examples in S5:
All of the statements provable in S4 are provable in S5 as well; AProS proves in S5 all the statements
listed above as provable in S4.
Theorems:
The examples are organized as follows23:
I. Axioms of S5
II. Properties of S5 models
III. Distinct modalities of S5 (reduction rules) and miscellaneous examples with other connectives
I. Axioms of S5.
The axioms of necessitation, distributivity, reflexivity and transitivity are provable in S5, and
the proofs are exactly the same as they were for S4. S5 has an additional axiom that we prove
here.
 S5 ⊦□ → □□
 S5 ⊦□  □□
Y
□  □
□I
□  □□
Löb's axiom is unprovable in S5, as AProS shows.
II. Properties of S5 models
S5 models have an accessibility relation that is an equivalence relation. All of these properties are direct
axioms. Other properties, such as well-foundedness, give statements that are unprovable in S5 (and AProS shows this).
III. Distinct modalities of S5
The set of distinct modalities for S5 is:
DS = {*P, P | * is one of □, ◊, ¬□, ¬◊, ¬}
23 Only the statements that are provable only in S5 are numbered.
All the other modalities reduce to one of these using the following reduction rules.
Reduction rules (proved in the next pages)
S5 ⊦{} ? □P ↔ □□P, (shown for S4)
S5 ⊦{} ? ¬□¬P ↔ ¬□¬¬□¬P (shown for S4)
2. S5 ⊦{} ? □P ↔ ¬□¬□P i.e.{} ? □P↔ ◊□P
3. S5 ⊦{} ? ¬□¬P ↔ □¬□¬P i.e.{} ? ◊P ↔ □◊P
Since the rules listed here are bi-implications, the negated versions of these rules are already
proved. However, AProS was made to prove them as well, since the logics differ in the way their box-rules
handle negations, and this is verified here. The proofs of these are not listed.
S5 ⊦{} ? ¬□P↔¬□□P,
S5 ⊦{} ? ¬¬□¬P↔¬¬□¬¬□¬P, i.e.{} ? ¬◊P↔¬◊◊P
4. S5 ⊦{} ? ¬□P ↔ ¬¬□¬□P i.e.{} ? ¬□P↔ ¬◊□P
5. S5 ⊦{} ? ¬¬□¬P ↔ ¬□¬□¬P i.e.{} ? ¬◊P ↔ ¬□◊P
Modal logics are usually presented using the connectives □ and ◊. Because of this, formulae such as ¬□P
and ◊¬P are syntactically different, and the reduction rules list them. These are a subset of those for
S4.
S5 ⊦{} ? (□¬P↔¬¬□¬P),
i.e. {} ? (□¬P↔¬◊P)
S5 ⊦{} ? (¬□¬¬P↔¬□P),
i.e.{} ? (◊¬P↔¬□P)
S5 ⊦{} ? (□¬□¬¬P↔¬¬□¬□P), i.e.{} ? (□◊¬P↔¬◊□P)
S5 ⊦{} ? (¬□¬□¬P↔¬□¬□¬P), i.e.{} ? (¬□◊P↔¬□◊P) or {} ? (¬□◊P↔◊□¬P)
or {} ? (◊□¬P↔¬□◊P) or {} ? (◊□¬P↔◊□¬P)
2. S5 ⊦ {□P} ? (¬□¬□P)
□E
{□P, □¬□P} ? ¬□P
Y
{□P, □¬□P} ? □P
⊥I
{□P, □¬□P} ? ⊥
¬I
{□P} ? ¬□¬□P
3. S5 ⊦{¬□¬P } ? (□¬□¬P)
Y
{¬□¬P} ? ¬□¬P
□I
{¬□¬P} ? □¬□¬P
The following is a list of applications of the reduction rules. The list contains one statement for each
of the distinct modalities. The proofs are very long and are not listed here; a small sketch of the collapse the rules induce is given after the list.
6. S5 ⊦ {} ? □P↔□□□P
7. S5 ⊦ {} ? □P↔□□□□P
8. S5 ⊦ {} ? ¬□¬P↔¬□¬¬□¬¬□¬P
9. S5 ⊦ {} ? ¬□¬P↔¬□□□¬P
10. S5 ⊦ {} ? □□¬□¬□¬□¬¬□¬□□□□□□P↔□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□P
11. S5 ⊦ {} ? □□□□□□¬□¬□¬□¬¬□¬□□¬□¬P↔□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬□¬P
12. S5 ⊦ {} ? ¬□□□□□□¬□¬□¬□¬¬□¬□¬¬□¬P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬P
13. S5 ⊦ {} ? ¬□□¬□¬□¬□¬¬□P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□P
14. S5 ⊦ {} ? ¬□P↔¬□□□P
15. S5 ⊦ {} ? ¬¬□¬P↔¬¬□¬¬□¬¬□¬P
16. S5 ⊦ {} ? ¬□□¬□¬□¬□¬¬□¬□□P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□P
17. S5 ⊦ {} ? ¬□□¬□¬□¬□¬¬□¬□□¬□¬P↔¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬□¬P
18. S5 ⊦ {} ? ¬¬□□¬□¬□¬□¬¬□¬□¬¬□¬P↔¬¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□¬□¬P
19. S5 ⊦ {} ? ¬¬□□¬□¬□¬□¬¬□P↔¬¬□□□□¬□¬¬□¬¬□¬¬□¬□¬□P
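The collapses used in statements 6–19 all follow from the reduction rules above: in S5 only the innermost modal operator of a block of boxes and diamonds survives, and ¬□ / ¬◊ can be rewritten as ◊¬ / □¬. The following is a small sketch of that normalization (my own illustration, independent of AProS; the function name and the ASCII encoding '[]', '<>', '~' for □, ◊, ¬ are mine).

def s5_normal_form(prefix: str) -> str:
    # tokenize the prefix into '[]', '<>' and '~'
    tokens, i = [], 0
    while i < len(prefix):
        if prefix[i] == '~':
            tokens.append('~')
            i += 1
        else:
            tokens.append(prefix[i:i + 2])
            i += 2
    # push negations inward with ~[] = <>~ and ~<> = []~, and cancel ~~;
    # each rewrite deletes two tokens or moves a '~' to the right, so this stops
    changed = True
    while changed:
        changed = False
        for j in range(len(tokens) - 1):
            a, b = tokens[j], tokens[j + 1]
            if a == '~' and b == '~':
                tokens[j:j + 2] = []
            elif a == '~' and b == '[]':
                tokens[j:j + 2] = ['<>', '~']
            elif a == '~' and b == '<>':
                tokens[j:j + 2] = ['[]', '~']
            else:
                continue
            changed = True
            break
    neg = bool(tokens) and tokens[-1] == '~'
    modals = tokens[:-1] if neg else tokens
    # S5 collapse: [][] = [], <><> = <>, <>[] = [], []<> = <>,
    # i.e. only the innermost modal operator of the block survives
    collapsed = modals[-1] if modals else ''
    # present negated results as ~[]P / ~<>P, using []~P = ~<>P and <>~P = ~[]P
    if neg and collapsed:
        return '~' + {'[]': '<>', '<>': '[]'}[collapsed] + 'P'
    return collapsed + ('~' if neg else '') + 'P'

# e.g. s5_normal_form('[]<>[]') == '[]P' and s5_normal_form('~[]~') == '<>P',
# consistent with the reduction rules listed above.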
Some distinct modalities are not equivalent, but one may imply the other. The proofs of some of
these are given in the next pages, namely 20, 21, 22 and 30.
20. S5 ⊦ {} ? P → □¬□¬P
21. S5 ⊦ {} ? ¬□¬□P → ¬□¬P
S5 ⊦ {} ? □P → P (shown for S4)
S5 ⊦ {} ? P → ¬□¬P (shown for S4)
S5 ⊦ {} ? □P → ¬□¬P (shown for S4)
22. S5 ⊦ {} ? ¬□¬□P → □¬□¬P
23. S5 ⊦ {} ? ¬□¬□P → P
S5 ⊦ {} ? □P → □¬□¬P (shown for S4)
24. S5 ⊦ {} ? ¬□¬□¬P → ¬P
25. S5 ⊦ {} ? ¬□¬□¬P → ¬¬□¬□P
S5 ⊦ {} ? ¬P → ¬□P
S5 ⊦ {} ? ¬¬□¬P → ¬P
S5 ⊦ {} ? ¬¬□¬P → ¬□P
26. S5 ⊦ {} ? ¬¬□¬P → ¬¬□¬□P
27. S5 ⊦ {} ? ¬P → ¬¬□¬□P
28. S5 ⊦ {} ? ¬□¬□¬P → ¬□P
29. S5 ⊦ {} ? ¬□¬□P → ¬□¬P
30. S5 ⊦ {} ? ¬□¬□P → P
20. S5 ⊦ {} ? P → □¬□¬P
The proof works with the context {□P, ¬□¬□¬P, ¬□¬P}: the branches are closed by Y, □I and ⊥I as in the
earlier proofs, and two applications of ¬I then yield {P} ? □¬□¬P. The full tree is not listed here.
21. S5 ⊦ {} ? ¬□¬□P → ¬□¬P
The proof assumes □¬P together with ¬□¬□P: the inner branches on {¬□¬□P, □¬P, □P} are closed by Y and ⊥I,
giving ¬□P by ¬I and then □¬□P by □I; this contradicts ¬□¬□P, and a final ¬I yields {¬□¬□P} ? ¬□¬P.
The full tree is not listed here.
22. S5 ⊦ {} ? ¬□¬□P → □¬□¬P
The proof of this is simply □I applied to the proof above.
30. S5 ⊦ {} ? ¬□¬□P → P
The proof assumes ¬P together with ¬□¬□P: as before, the branches on the context {□P, ¬□¬□P, ¬P} are
closed by Y and ⊥I, the goals {¬□¬□P, ¬P} ? ¬□P and {¬□¬□P, ¬P} ? □P are then played off against each
other using □E and ⊥I, and the final negation step yields {¬□¬□P} ? P. The full tree is not listed here.
31. S5 ⊦ {}? (□(□A → B) v □(□B → A))
The proof is not included, but it can be seen that this follows from distributivity and □-elimination.
Unprovable Statements:
All of the statements listed below are unprovable in S4 as well (as AProS shows).
The distinct modalities are P, □P, ◊P, and their negations
P is distinct from other modalities
1. S5
⊬ {} ? (P ↔ □P),
2. S5
⊬ {} ? (P ↔ ¬□¬P), i.e.{} ? (P ↔ ◊P)
3. S5
⊬ {} ? (P ↔ ¬P),
4. S5
⊬ {} ? (P ↔ ¬¬□¬P), i.e.{} ? (P ↔ ¬◊P)
5. S5
⊬ {} ? (P ↔ ¬□P),
□P is distinct from other modalities
6. S5
⊬ {} ? (□P ↔ ¬□¬P), i.e.{} ? (□P ↔ ◊P)
7. S5
⊬ {} ? (□P ↔ ¬P),
8. S5
⊬ {} ? (□P ↔ ¬□P) ,
9. S5
⊬ {} ? (□P ↔ ¬¬□¬P), i.e.{} ? (□P ↔ ¬◊P)
◊P is distinct from other modalities
10. S5
⊬
{} ? (¬□¬P ↔ ¬¬□¬P), i.e.{} ? (◊P ↔ ¬◊P)
¬P is distinct from other modalities
11. S5
⊬ {} ? (¬P ↔ ¬□P),
12. S5
⊬ {} ? (¬P ↔ ¬¬□¬P), i.e.{} ? (¬P ↔ ¬◊P)
¬□P is distinct from other modalities
13. S5
⊬ {} ? (¬□P ↔ ¬¬□¬P), i.e.{} ? (¬□P ↔ ¬◊P)
One-sided inclusions of the distinct modalities
14. S5
⊬
{} ? (P → □P),
15. S5
⊬
{} ? (P → ¬P),
16. S5
⊬ {} ? (P → ¬□P),
17. S5
⊬ {} ? (P → ¬¬□¬P), i.e.{} ? (P → ¬◊P)
□P is distinct from other modalities
18. S5
⊬ {} ? (□P → ¬P),
19. S5
⊬ {} ? (□P → ¬□P) ,
20. S5
⊬ {} ? (□P → ¬¬□¬P), i.e.{} ? (□P → ¬◊P)
◊P is distinct from other modalities
21. S5
⊬
{} ? (¬□¬P → ¬¬□¬P), i.e.{} ? (◊P → ¬◊P)
¬P is distinct from other modalities
22. S5
⊬
{} ? (¬P → ¬¬□¬P), i.e.{} ? (¬P → ¬◊P)
¬□P is distinct from other modalities
23. S5
⊬ {} ? (¬□P → ¬¬□¬P), i.e.{} ? (¬□P → ¬◊P)
None of the modalities are provable outright
24. S5
⊬
{} ? P
25. S5
⊬
{} ? (□P),
26. S5
⊬
{} ? (¬□¬P), i.e.{} ? (◊P)
27. S5
⊬
{} ? ¬P
28. S5
⊬
{} ? (¬□P),
29. S5
⊬
{} ? (¬¬□¬P), i.e.{} ? (¬◊P)
Properties not required by an S5 model
Löb's axiom (converse well-foundedness): S5 ⊬ {} ? (□(□P→P) → □P)
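A one-world reflexive model, which is a legitimate S5 model since its relation is an equivalence, already witnesses this unprovability; the following check is my own illustration and not one of the AProS runs.
Take W = {w}, R = {(w,w)}, and let P be false at w. Then □P fails at w (w sees itself and P fails there), so □P→P holds at w because its antecedent fails, and hence □(□P→P) holds at w; but □P fails at w, so □(□P→P) → □P fails at w.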
Miscellaneous
30. S5
⊬
{(R→S), R } ? □S
31. S5
⊬
{(R→S), □R } ? □S,
32. S5
⊬
{□(R→S), R } ? □S
33. S5
⊬
{□R, S } ? □(R→S)
34. S5
⊬
{□R, S } ? □(R & S)
35 – 170. AProS was also made to show that the distinctness statements of S4 which still hold in S5,
together with their one-sided inclusions, are unprovable. Example: S5 ⊬ {} ? (¬□¬□¬□¬P → □P). We do not
list these here.
3. Examples in GL:
Theorems:
The examples are organized as follows:
I. Axioms of GL.
II. Gödel’s second incompleteness theorem, and variants (□¬□A proves anything).
III. There are an infinite number of distinct modalities, so we show a few of the reduction
rules.
IV. Miscellaneous examples with other connectives.
I. Axioms of GL.
Necessitation: For any tautology A, GL ⊦ □A (proof is the same as that of S4).
Distributivity: □(A→B) proves □A→□B (proof is the same as that of S4).
Löb’s axiom: □(□A→A) →□A
1. GL ⊦ {□(□A→A)} ? □A
Y
Y
{(□A→A), □A, A} ? □A
{(□A→A), □A, A} ? A
→E
{□(□A→A), (□A→A), □A} ? A
□I
{□(□A→A)} ? □A
Converse: GL ⊦ {} ? (□A → □(□A→A))
2. GL ⊦ {□A } ? □(□A→A)
Y
{□(□A A), □A, A } ? A
I
{□(□A A), □A, A } ? (□AA)
□I
{□A } ? □(□A A)
II.Gödel’s second incompleteness theorem, and variants
3.{} ? □¬□(A&¬A) → □(A&¬A)
3. GL ⊦ {□¬□(A&¬A)} ? □(A&¬A)
Y
Y
{□¬□(A&¬A), ¬□(A&¬A), □(A&¬A)} ? □(A&¬A)
{□¬□(A&¬A), ¬□(A&¬A), □(A&¬A)} ? ¬□(A&¬A)
⊥I
{□¬□(A&¬A), ¬□(A&¬A), □(A&¬A), ¬ (A&¬A)} ? ⊥
{□¬□(A&¬A), ¬□(A&¬A), □(A&¬A)} ? A&¬A
¬E
□I
{□¬□(A&¬A)} ? □(A&¬A)
4. GL ⊦ {□¬□P} ? □P
Y
Y
{□¬□P, ¬□P, □P} ? □P
{□¬□P, ¬□P, □P} ? ¬□P
⊥I
{□¬□P, ¬□P, □P, ¬P } ? ⊥
¬E
{□¬□P, ¬□P, □P} ? P
□I
{□¬□P} ? □P
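Under the provability reading of □ that motivates GL, statements 3 and 4 are formalized versions of Gödel's second incompleteness theorem; the gloss below is my own and is not part of the derivations above.
Reading □φ as Prov_PA(⌜φ⌝) and A&¬A as ⊥, the theorem □¬□(A&¬A) → □(A&¬A) says: if PA proves Con(PA), then PA proves ⊥. Contrapositively, a consistent PA cannot prove its own consistency.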
5. GL ⊦ {¬□P} ? ¬□¬□P
Y
Y
{□¬□P, ¬□P, □P, ¬P } ? ¬□P
{□¬□P, ¬□P, □P, ¬P } ? □P
⊥I
{□¬□P, ¬□P, □P, ¬P } ? ⊥
¬E
Y
{□¬□P, ¬□P, □P} ? P
□I
{□¬□P, ¬□P} ? ¬□P
{□¬□P, ¬□P} ? □P
⊥I
{¬□P, □¬□P} ? ⊥
¬I
{¬□P} ? ¬□¬□P
6. GL ⊦ {¬□¬P} ? ¬□¬□¬P
⊥I
¬I
Y □I
Y
{□¬□¬P, ¬□¬P, □¬P, P } ? ¬□¬P
{□¬□¬P, ¬□¬P, □¬P, P } ? □¬P
⊥I
{□¬□¬P, ¬□¬P, □¬P, P } ? ⊥
Y
{□¬□¬P, ¬□¬P } ? ¬□¬P
{□¬□¬P, ¬□¬P, □¬P} ? ¬P
{□¬□¬P, ¬□¬P} ? □¬P
{¬□¬P, □¬□¬P} ? ⊥
¬I
{¬□¬P} ? ¬□¬□¬P
7. GL ⊦ {□¬□P} ? □¬□Q
Y
Y
{□¬□P, ¬□P, □¬□Q, ¬□Q, □P, ¬P } ? ¬□P
{□¬□P, ¬□P, □¬□Q, ¬□Q, □P, ¬P } ? □P
⊥I
{□¬□P, ¬□P, □¬□Q, ¬□Q, □P, ¬P } ? ⊥
¬E
{□¬□P, ¬□P, □¬□Q, ¬□Q □P } ? P
Y
□I
{□¬□P, ¬□P, □¬□Q, ¬□Q } ? ¬□P
{□¬□P, ¬□P, □¬□Q, ¬□Q } ? □P
⊥I
{□¬□P, ¬□P, □¬□Q, ¬□Q } ? ⊥
¬I
{□¬□P, ¬□P, □¬□Q} ? ¬□Q
□I
{□¬□P} ? □¬□Q
8. GL ⊦ {□¬□P} ? □Q
Y
Y
{□¬□P, ¬□P, ¬Q, □Q, □P, ¬P } ? ¬□P
{□¬□P, ¬□P, ¬Q, □Q, □P, ¬P } ? □P
⊥I
{□¬□P, ¬□P, ¬Q, □Q, □P, ¬P } ? ⊥
¬E
{□¬□P, ¬□P, ¬Q, □Q □P } ? P
Y
□I
{□¬□P, ¬□P, □Q, ¬Q } ? ¬□P
{□¬□P, ¬□P, □Q, ¬Q } ? □P
⊥I
{□¬□P, ¬□P, ¬Q, □Q } ? ⊥
¬E
{□¬□P, ¬□P, □Q} ? Q
□I
{□¬□P} ? □Q
9. GL ⊦ {□¬□¬P} ? □¬P
Y
Y
{□¬□¬P, ¬□¬P, □¬P , P} ? □¬P
{□¬□¬P, ¬□¬P, □¬P , P} ? ¬□¬P
⊥I
{□¬□¬P, ¬□¬P, □¬P , P} ? ⊥
¬I
{□¬□¬P, ¬□¬P, □¬P } ? ¬P
□I
{□¬□¬P} ? □¬P
10. GL ⊦ {¬□¬P, □Q} ? ¬□¬Q
Y
Y
{□Q, □¬Q , Q, ¬Q, P} ? Q
{□Q, □¬Q , Q, ¬Q, P} ? ¬Q
⊥I
{□Q, □¬Q , Q, ¬Q, P} ? ⊥
¬I
{ □Q, □¬Q , Q, ¬Q } ? ¬P
Y
□I
{¬□¬P, □Q, □¬Q} ? □¬P
{¬□¬P, □Q, □¬Q} ? ¬□¬P
⊥I
{¬□¬P, □Q, □¬Q} ? ⊥
¬I
{¬□¬P, □Q} ? ¬□¬Q
11. GL ⊦ {¬□P, □Q} ? ¬□¬Q
¬I
Y
Y
{□¬P, ¬P, □Q, □¬Q, Q, ¬Q, P} ? Q
{□¬P, ¬P, □Q, □¬Q, Q, ¬Q, P} ? ¬Q
⊥I
{□¬P, ¬P, □Q, □¬Q, Q, ¬Q, P} ? ⊥
¬E
{□¬P, ¬P, □Q, □¬Q , Q, ¬Q } ? P
Y
□I
{¬□P, □Q, □¬Q } ? ¬□P
{□¬P, □Q, □¬Q } ? □P
⊥I
{¬□P, □Q, □¬Q} ? ⊥
¬I
{¬□P, □Q} ? ¬□¬Q
AProS also proves other variants of this theme such as:
12. GL ⊦ {¬□P, □(A &¬A)} ? Q
13. GL ⊦ {□(A &¬A)} ? □Q
14. GL ⊦ {¬□¬□¬P} ? ¬□¬□¬Q24
15. GL ⊦ {¬□¬□P} ? (¬□¬□¬□¬P)
16. GL ⊦ {□¬□¬□P} ? □¬□¬P
17. GL ⊦ { ¬□¬□¬□P} ? (¬□¬□¬□¬P)
AProS also proves modifications of distributivity and transitivity, and of the definition of the box, such
as the following. These are similar to the proofs in S4.
GL ⊦ {R, □S} ? □(RS)
GL ⊦ {□R, □S} ? □(RvS)
GL ⊦ {□(B&C)} ? □B&□C
24 I am not using diamonds here, since there is no standard format that uses them.
GL ⊦ {¬□¬¬□¬P} ? (¬□¬P)
GL ⊦ {□(□PP)} ? (□□P□P)
GL ⊦ {¬□¬P} ? (¬□¬□¬P)
GL ⊦ {□(P & ¬P)} ? (□Q)
GL ⊦ {□P,¬□¬¬P} ? Q
GL ⊦ {□¬P, ¬□¬¬¬P} ? Q
GL ⊦ {□P} ? □P
GL ⊦ {□R & □S} ? □R,
GL ⊦ {□(R & S)} ? □R
GL ⊦ {□(R & S)} ? □S
GL ⊦ {(□R) v (□S)} ? □(R v S)
GL ⊦ {□□B, □((A v ¬A) → □A → (□B → A))} ? □A
GL ⊦ {¬□A, A¬A} ? (¬□¬□A)
GL ⊦{¬□(□(A&¬A)  (A&¬A))} ? (¬□¬□(A&¬A))
GL ⊦{¬□(A&¬A)} ? (¬□(□(A&¬A)  (A&¬A)))
GL ⊦ {¬□(A&¬A)} ? (¬□¬□(A&¬A))
GL ⊦ {□¬□(A&¬A)} ? □R
GL ⊦ {□¬□(A&¬A)} ? □¬□¬R
GL ⊦ {□A} ? □(□AA)
GL ⊦ {□(A&¬A)} ? □(¬A)
GL ⊦ {□(A&¬A)} ? □¬□¬A
GL ⊦ {(A&¬A)} ? (¬□¬P)
GL ⊦ {□R, □S} ? □(RS)
GL ⊦ {R, □S} ? □(RS)
GL ⊦ {□R, □S} ? □□(RS)
GL ⊦ {□R, □S} ? □(R&S)
GL ⊦ {□R, □S} ? □(RvS)
GL ⊦ {□R, S } ? □(RvS)
GL ⊦ {R, □S} ? □(RvS)
There are infinitely many distinct modalities; the following lists a few. A small countermodel illustrating the non-collapse is given after the list.
GL ⊦ {□P} ? □□□P
GL ⊦ {¬□¬□P} ? ¬□¬□□□P
GL ⊦ {□¬□¬□P} ? □¬□¬□□□P
GL ⊦ {¬□□□¬□¬P} ? ¬□¬□¬Q
GL ⊦ {□¬□¬□P} ? □¬□¬Q
GL ⊦ {¬□¬□¬□P} ? ¬□¬□¬Q
GL ⊦ {□¬□¬□¬P} ? □¬□¬Q
GL ⊦ {¬□¬□¬□¬P} ? ¬□¬□¬Q
GL ⊦ {□¬□¬□¬□P} ? □¬□¬Q
GL ⊦ {¬□¬□¬□¬□P} ? ¬□¬□¬Q
GL ⊦ {□¬□¬□¬P} ? □¬□¬Q
GL ⊦ {¬□¬□¬□¬P} ? ¬□¬□¬Q
GL ⊦ {□¬□¬□□□□¬□¬□□¬□□□P} ? □□□¬□¬Q
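In contrast with S4 and S5, iterated boxes never collapse; for example, □□P does not imply □P in GL. A three-world strict chain, given here as my own illustration rather than an AProS run, witnesses this:
take W = {w0, w1, w2} with w0 R w1, w0 R w2, w1 R w2 (transitive, irreflexive, hence converse well-founded), and let P be true only at w2. Then □□P holds at w0 (□P holds at w1 and, vacuously, at w2), while □P fails at w0 because P fails at w1; so GL ⊬ □□P → □P.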
Unprovable statements:
The following list is based on properties of models.
Reflexivity
1. GL
⊬ {□P} ? P
2. GL
⊬ {P} ? ¬□¬P i.e. {P} ? ◊P
3. GL
⊬ {□P, ¬P} ? Q
4. GL
⊬ {□P} ? ¬□¬P i.e. {□P} ? ◊P
5. GL
⊬ {} ? □(□P→P)
6. GL
⊬ {} ? P→□(□P→P)
7. GL
⊬ {} ? □P→□¬□¬P
8. GL
⊬ {} ? □P→□¬□¬□P
9. GL
⊬ {} ? ¬□¬□P → ¬□¬P
10. GL
⊬ {} ? □P → ¬□¬□¬□¬P
11. GL
⊬ {} ? ¬□¬□¬□¬P → ¬□¬P
12. GL
⊬ {} ? □¬□¬□P → ¬□¬P
Symmetry
13. GL
⊬ {¬□¬P} ? □¬□¬P i.e. {◊P} ? □◊P
Multiple properties
14. GL
⊬ {} ? □(□(P□P) P) P
15. GL
⊬ {} ? □(□AB) v □(□BA)
16. GL
⊬ {¬□¬P, ¬□¬¬P} ? Q
Distinct modalities
P, ¬P, □P, □¬P, ¬□P, ¬□¬P, □¬□P … are all distinct (this is just a small subset)
17. GL
⊬ {} ? P↔¬P
18. GL
⊬ {} ? P↔□P
19. GL
⊬ {} ? P↔□¬P
20. GL
⊬ {} ? P↔¬□P
21. GL
⊬ {} ? P↔¬□¬P
22. GL
⊬ {} ? P↔□¬□P
23. GL
⊬ {} ? ¬P↔□P
24. GL
⊬ {} ? ¬P↔□¬P
25. GL
⊬ {} ? ¬P↔¬□P
26. GL
⊬ {} ? ¬P↔¬□¬P
27. GL
⊬ {} ? ¬P↔□¬□P
28. GL
⊬ {} ? □P↔□¬P
29. GL
⊬ {} ? □P↔¬□P
30. GL
⊬ {} ? □P↔¬□¬P
31. GL
⊬ {} ? □P↔□¬□P
32. GL
⊬ {} ? ¬□P↔¬□¬P
33. GL
⊬ {} ? ¬□P↔□¬□P
34. GL
⊬ {} ? ¬□¬P↔□¬□P
All the above are totally distinct.
35. GL
⊬ {} ? P→¬P
36. GL
⊬ {} ? P→□P
37. GL
⊬ {} ? P→□¬P
38. GL
⊬ {} ? P→¬□P
39. GL
⊬ {} ? P→¬□¬P
40. GL
⊬ {} ? P→□¬□P
41. GL
⊬ {} ? ¬P→□P
42. GL
⊬ {} ? ¬P→□¬P
43. GL
⊬ {} ? ¬P→¬□P
44. GL
⊬ {} ? ¬P→¬□¬P
45. GL
⊬ {} ? ¬P→□¬□P
46. GL
⊬ {} ? □P→□¬P
47. GL
⊬ {} ? □P→¬□P
48. GL
⊬ {} ? □P→¬□¬P
49. GL
⊬ {} ? □P→□¬□P
50. GL
⊬ {} ? ¬□P→¬□¬P
51. GL
⊬ {} ? ¬□P→□¬□P
52. GL
⊬ {} ? ¬□¬P→□¬□P
These modalities are not provable outright.
53. GL
⊬
{} ? P
54. GL
⊬ {} ? □P
55. GL
⊬ {} ? ¬□¬P
56. GL
⊬ {} ? ¬P
57. GL
⊬ {} ? ¬□P
58. GL
⊬ {} ? □¬P
59. GL
⊬ {□(A&¬A) } ? ¬□¬□(A&¬A)
60. GL
⊬ {(R→S), R} ? □S
61. GL
⊬ {(R→S), □R} ? □S
62. GL
⊬ {□(R→S), R} ? □S
63. GL
⊬ {□R, S} ? □(R→S)
64. GL
⊬ {□A→□B} ? □(A→B)
65. GL
⊬ {□R, S} ? □(R&S)
66. GL
⊬ {R, □S} ? □(R&S)
67. GL
⊬ {R, S} ? □(RvS)