Philosophy as Logical Analysis of Science: Carnap, Schlick, Gödel, Tarski, Church, Turing
In the 1920s, the Vienna Circle pioneered a new school of philosophy, logical empiricism, influenced by
Einsteinian physics, Russellian logic, and Wittgenstein’s Tractatus. Moritz Schlick, the early leader of
the Circle, found inspiration in the Tractatus. As a former physicist, he took space-time points and objects
or events occupying them to be fundamental physical entities. Following the Tractatus he also took the
statements of physics to be truth functions of atomic statements about these entities. But his devotion to
the Tractatus didn’t stop there. Since the pseudo-atomic statements of physics bear conceptual relations to
one another, they are not logically independent, which for Wittgenstein meant that they cannot be the
genuine atomic propositions on which all thought rests. Thus the pseudo-atomic statements of physics had
to be analyzed as truth functions of real atomic statements, the truth or falsity of which are independent of
one another. To achieve this, Schlick ended up taking their subject matter to be individual, momentary
sense impressions the apprehension of which is their verification. Although not all his comrades agreed,
this reasoning influenced even those who eventually moved away from it.
Rudolf Carnap’s Aufbau (The Logical Structure of the World) was the first great work of logical
empiricism. Its goal was to establish the possibility of uniting all science into a single system in which all
scientific concepts are defined from a small base of primitive logical and empirical concepts. Carnap
asserts the possibility of several different unifications involving different reductions. The two most
important reductions purport to reduce all scientific knowledge to knowledge of psychological facts. The
autopsychological reduction starts from short, perceptual experiences of a single self, involving any
combination of the senses. The only primitive concept applied to them is recollected similarity, from
which Carnap tries to extract phenomenal concept types, e.g., perceived colors, shapes, sizes. These were
to provide the basis for defining all objects of knowledge. The general psychological reduction is
similar except that its base consists of perceptual experiences, not of a single agent, but of all agents. In
both purported reductions the physical is reduced to the psychological, but in the autopsychological
reduction brains and bodies are first “defined” in terms of one’s own experiences. The experiences of
others, and those subjects themselves, are then defined in terms of their associated brains and bodies.
Then the rest of the physical is reduced to the psychological. No matter which form the reduction took,
Carnap imagined translating statements about the physical into statements about perceptual experiences
standing in the relation of recollected similarity to one another.
He believed these reductions possible because he thought that if the physical weren’t reducible to the
psychological, we wouldn’t have knowledge we in fact have. He assumed that we come to know of
physical things by coming to know about our sense experience. So, our evidence for claims about the
physical is our knowledge of our own mental states, while our evidence for claims about the mental states
of others is our knowledge of certain physical things. So, he thought, knowledge of our own mental states
provides all our evidence for any knowledge we may have about the world.
This, he eventually realized, is deeply wrong. Over the coming decades he realized (i) that reports of
observational properties of physical things, rather than reports of our perceptual experiences, provide the
epistemological basis of physical science, (ii) that definitions play a smaller role than he originally
thought in connecting theoretical claims to observations, (iii) that reduction of all statements to
observation statements isn’t necessary to relate theory to evidence, and (iv) that complete verification is
impossible and empirical confirmation comes in degrees. But these lessons lay in the future when he was
drawing the conclusions in the Aufbau. In the Aufbau, he concluded that all intelligible questions can be
scientifically answered because all meaningful scientific statements are, in principle, conclusively
verifiable, and hence capable of being known to be true, or false.
The search for a precise and acceptable statement of the verifiability criterion of meaning preoccupied the
logical empiricists for two decades. Since any criterion must count science as meaningful, it was soon
recognized that the meaningfulness of S doesn’t require S’s conclusive verifiability – defined as the
logical entailment of S by a finite, consistent set of observation statements. Nor did S’s meaningfulness
require its conclusive falsifiability. Many meaningful sentences in science were neither. When this was
realized, attention focused on the idea that empirically meaningful statements contribute to the
observational entailments of theories containing them. But this idea also failed to provide an acceptable
theory of meaning when Alonzo Church and Carl Hempel showed that even the most sophisticated
statements of the criterion based on this idea labeled obviously meaningless strings of words as
meaningful. The reason such criteria failed is that confirmation in science is holistic. We confirm a group of related
sentences without having a unique way of assigning distinct confirmatory evidence to individual members
of the group. Because of this, empirical meaning must also be holistic, if it is defined in terms of
confirmatory evidence. For a time W.V. Quine favored such a theory of meaning for empirical theories.
But it too failed when variations of the problems of non-holistic verificationism were reconstructed as
problems for it. Thus, the attempt to use a philosophically inspired theory of meaning to vindicate
science, while downgrading non-scientific discourse, was unsuccessful.
The logical empiricists’ linguistic theory of the a priori held that logical truths are knowable apriori,
because they are stipulated to be true by linguistic conventions. Quine observed that since proponents
recognize infinitely many such truths, agents can’t adopt a separate convention for each. Rather, there
must be finitely many conventions from which infinitely many a priori truths follow logically. However,
this explanation presupposes the apriority of logic rather than explaining it. Since apriority can’t be
explained by knowledge of truth by convention, he rejected it. He extended his attack to necessity, which,
he plausibly argued, can’t be explained by analyticity. Since he took these two notions to be
interdefinable, he rejected both. But the door was left open for those who came later who accepted both
while denying that they are interdefinable, or even coextensive.
During the same period, progress was made in understanding truth and connecting it to meaning. In 1935,
Alfred Tarski showed how to define truth for mathematical languages. Although his truth predicates were
not synonymous with natural-language truth predicates, the two were provably coextensive over the
mathematical languages in question, and so applied to the same sentences of those languages. His truth
predicates had two important virtues. They can’t be used to construct sentences that say of themselves
that they aren’t true, and they don’t generate paradoxes. Moreover, defining them never requires
introducing problematic concepts that aren’t already present in the language to which they apply; thus his
semantic concept of truth can’t be the source of philosophical problems. Realizing this, Tarski said:
[W]e may accept the semantic conception of truth without giving up any epistemological attitude we may have
had; we may remain naïve realists, critical realists or idealists, empiricists or metaphysicians – whatever we
were before. The semantic conception of truth is completely neutral toward all these issues.
After giving his definition of truth, Tarski showed how to use it to define logical truth and logical
consequence for the languages of logic and mathematics. This work was used by others to provide
interpretations for formal languages. To give such an interpretation is to identify a domain of objects for
the language to talk about, to assign each name an object, to assign each n-place function symbol an
n-place function from n-tuples of elements of the domain into the domain, to assign each n-place
predicate a subset of the set of n-tuples of elements of the domain, and so on for all non-logical vocabulary. The
interpretations of sentences are derived from the interpretation of the non-logical vocabulary plus clauses
that encode the meanings of the logical vocabulary in a Tarskian definition of truth in a model. These
derivations yield instances of the schema ‘S’ is a true sentence of L iff P, which, taken as a whole, give
the truth conditions of the sentences of the language.
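The model-theoretic picture just described can be sketched in code. The following toy evaluator is a minimal illustration in Python, assuming a home-made tuple encoding of formulas and a sample model of my own devising; it mirrors the clauses of a Tarskian truth definition but is not notation from the text.

```python
# A toy Tarskian "truth in a model" evaluator. The formula encoding
# (nested tuples) and the sample model are illustrative assumptions.
# An interpretation supplies a domain, an object for each name, and
# a set of tuples for each predicate.
def true_in(formula, model, assignment=None):
    assignment = assignment or {}
    op = formula[0]
    if op == 'pred':                          # e.g. ('pred', 'Less', 'a', 'x')
        _, name, *terms = formula
        values = tuple(assignment.get(t, model['names'].get(t)) for t in terms)
        return values in model['preds'][name]
    if op == 'not':
        return not true_in(formula[1], model, assignment)
    if op == 'and':
        return (true_in(formula[1], model, assignment)
                and true_in(formula[2], model, assignment))
    if op == 'exists':                        # ('exists', 'x', body)
        _, var, body = formula
        return any(true_in(body, model, {**assignment, var: d})
                   for d in model['domain'])
    raise ValueError('unknown operator: %r' % op)

# A model whose domain is {1, 2, 3}, with 'a' naming 1 and 'Less' as <.
model = {'domain': {1, 2, 3},
         'names': {'a': 1},
         'preds': {'Less': {(m, n) for m in (1, 2, 3)
                            for n in (1, 2, 3) if m < n}}}

# 'There is something a is less than' is true here, since 1 < 2.
print(true_in(('exists', 'x', ('pred', 'Less', 'a', 'x')), model))  # True
```

Each clause corresponds to a clause of the truth definition: atomic truth is membership of the tuple of designated objects in the set assigned to the predicate, and the quantifier clause runs through the domain.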
Carnap accepted Tarski’s theory and drew his own further conclusions in a lecture given in 1935 at the same
philosophical congress at which Tarski presented his theory of truth. Carnap’s main point – that truth and
confirmation must be sharply distinguished – was a much needed corrective to prevailing views. But his
argument for this suffered from the mistaken idea that for any declarative sentence S, the statement made
by using S is the same as the statement made by using the sentence ‘S’ is true, which, in turn, is the
same as the statement made by using ‘S’ is T (where ‘T’ is Tarski’s defined truth predicate). In fact,
these statements are not the same, even if we accept, for the sake of argument, Carnap’s idea that
necessarily and apriori equivalent sentences make the same statement. Under this assumption, uses of S
do make the same statement as uses of ‘S’ is T. But that statement is different from the statement made
by uses of ‘S’ is true. Let S be the sentence ‘the earth is round’. This sentence is used to make the
statement that the earth is round, which is something one may know, even if one doesn’t understand
English, and knows nothing about any English sentence. Thus the statement made by a use of the
sentence ‘The earth is round’ is not the same as the statement that that sentence is true.
Carnap’s failure to see this is ironic for a theorist who had come to believe that knowledge of truth
conditions gives one information about meaning. For surely, if ‘S’ is true were apriori equivalent to, or
made the same statement as, S, then ‘S’ is true iff S would be apriori equivalent to, or make the same
statement as S iff S. But then since knowledge that the earth is round iff the earth is round gives one no
information about the meaning of the sentence ‘the earth is round’, knowledge that ‘the earth is round’ is
true iff the earth is round wouldn’t either. Consequently, one who believes that knowledge of truth
conditions does provide information about meaning must not take the statements made by uses of S and
‘S’ is true to be necessary and apriori consequences of each other. By contrast, the statements made by
uses of S and ‘S’ is T, where ‘T’ is Tarski’s truth predicate, are such consequences of one another.
Thus, Tarski’s notion of truth can’t be the one we need for a truth-conditional theory of meaning. Tarski’s
notion has other important uses, but it can’t be used to provide information about meaning.
Carnap failed to see this in his 1942 Introduction to Semantics, where he developed two ideas: (i) that a
previously uninterpreted language can be given an interpretation by assigning designations to its
nonlogical vocabulary, and truth conditions to its sentences, and (ii) that the meanings of the sentences of
an already meaningful language can be described by identifying designations and specifying truth
conditions. He went wrong in characterizing Tarski-like rules for designation and truth as definitions of
truth and designation. But this error is easily corrected. If our ordinary notions of truth and designation are
legitimate and nonparadoxical, they can be used in Tarski-style rules to state truth and designation
conditions that provide some information about meaning. The requirement that for each sentence S of
language L, the rules of a theory entail ‘S’ is true in L iff P, where P means the same as S, ensures that
the theory assigns every sentence correct truth conditions, on the basis of an interpretation of its parts.
Although satisfying this requirement is not sufficient to identify meanings of sentences, or to understand
them, one can appreciate why it might be thought to be necessary for these tasks, and why one might have
been optimistic that a theory of truth conditions would be a part of an adequate theory of meaning.
This conception of meaning and interpretation was an achievement that was familiar to logicians and
philosophers from 1935 through the 1960s. Those were also years when logic and metamathematics were
transformed by Gödel, Tarski, Church, Turing and others. With model theory and recursive function
theory as mature disciplines, logic and metamathematics separated themselves from earlier, more
epistemological conceptions by focusing on rigorously defined scientific domains of study. This is
illustrated by several important achievements of the era. The first is the Gödel-Tarski Theorem that
arithmetical truth is not arithmetically definable. It is interesting because we know that every effectively
decidable relation on natural numbers is arithmetically definable, i.e. for every such k-place relation R
there is a formula of LA that is true of a sequence of k numbers iff they stand in relation R. We also
know that some sets for which there is no decision procedure determining their members are also
definable in arithmetic. The theorem tells us that the set of arithmetical truths isn’t one of them.
The proof assigns numerical codes to expressions of LA that allow us to treat formulas of LA – which
are officially about numbers – as making claims about LA. The indefinability theorem says there is no
formula of LA that is true of exactly the numbers that code the truths of LA. Call a formula with one free
variable a predicate. If P is a predicate, a self-ascription of P is the sentence we get by substituting the
numeral that names the code of P for free-variable occurrences in P. The relation that holds between n, m
iff m codes a predicate P and n is the code of a self-ascription of P is decidable. Thus, there is a formula
Self-Ascription(x2, x1) of LA that is true of n and m iff m is the code of a predicate P, and n is the code of
its self-ascription. Now suppose there is a formula T(x2) of LA that is true of n iff n is the code of a true
sentence of LA. The formula ∃x2(Self-Ascription(x2, x1) & ~T(x2)) is a predicate that is true of m iff m is
the code of a predicate that isn’t true of its own code. Let h* be the numeral that denotes the code of this
predicate. Then the sentence ∃x2(Self-Ascription(x2, h*) & ~T(x2)) says that a self-ascription of that
predicate isn’t true. Since this sentence is the self-ascription of the predicate, it says of itself that it isn’t
true. So we have been forced to conclude that there is a sentence of LA that is true iff it isn’t true. Since
this is impossible, the supposition that led to it – that there is a predicate of LA that is true of a number n
iff n codes a true sentence of LA – is false. Arithmetical truth is not arithmetically definable.
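The coding of expressions by numbers can be made concrete. Below is a small Python illustration of one standard scheme, prime-power coding of a sequence of symbol codes over a toy alphabet of my own choosing; it is a sketch of the general idea, not necessarily the scheme Gödel himself used.

```python
# Code a string as a product of prime powers: the i-th symbol with
# code c contributes the i-th prime raised to the power c. Because
# factoring recovers the exponents, properties of expressions become
# arithmetical properties of their codes. The alphabet is a toy one.
def primes():
    """Yield 2, 3, 5, 7, ... by trial division."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

SYMBOLS = ['0', 'S', '+', '*', '=', '(', ')', 'x', '~', 'E']  # toy alphabet

def encode(expr):
    """Gödel number of expr: p_1**c_1 * p_2**c_2 * ... for symbol codes c_i."""
    gen, code = primes(), 1
    for ch in expr:
        code *= next(gen) ** (SYMBOLS.index(ch) + 1)
    return code

def decode(code):
    """Recover an expression from its Gödel number by factoring."""
    gen, out = primes(), []
    while code > 1:
        p, e = next(gen), 0
        while code % p == 0:
            code //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return ''.join(out)

print(encode('S0=S0'))            # a single number coding the whole formula
print(decode(encode('S0=S0')))    # decoding inverts encoding
```

Since decoding is effective, relations among expressions – such as "n codes a self-ascription of the predicate coded by m" – correspond to decidable relations on their codes, which is what lets formulas of LA make claims about LA.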
This theorem is an application of Gödel’s first incompleteness theorem, which says that for every
consistent first-order system of proof in arithmetic, there are pairs of sentences S and ~S it can’t prove.
Since a proof is a finite sequence of formulas, we can assign a number to each proof. Now consider the
relation that holds between numbers n and m iff n is the code of a proof of a sentence with code m. Since
it is decidable, there is a formula, Proof(x1, x2), of LA that defines it. Next consider ∃x1 Proof(x1, x2),
which is true of codes of sentences provable in the system. Since this set is definable in LA, but the set of
arithmetical truths isn’t, the truths of LA aren’t the same as sentences in the system. So if all provable
sentences are true, some truths can’t be proved in the system. This is the simplest version of Gödel’s first
incompleteness theorem.
His method of proving it was a little different. The predicates in his proof were G1 and G2.
G1. ∃x2 (x2 is a self-ascription of x1 & ~∃x3 Proof(x3, x2))
G2. ∃x2 (x2 is a self-ascription of [k] & ~∃x3 Proof(x3, x2))
G1 is true of all and only codes of predicates the self-ascriptions of which are not provable; in short, it is
true of predicates that aren’t provable of themselves. Let k code G1. G2 says that a self-ascription of G1
is not provable. Since G2 is itself the self-ascription of G1, G2 says that G2 isn’t provable. It is either true
and unprovable, or false and provable. If no falsehoods are provable, it must be true and not provable.
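The diagonal move behind G1 and G2 – substituting a quoted copy of a predicate into the predicate itself – has a familiar programming analogue: a quine, a program whose output is its own source code. The Python sketch below is only an analogy to illustrate the self-ascription construction, not part of Gödel’s proof.

```python
# A quine: the template plays the role of G1, and %r splices a quoted
# copy of the template into itself, yielding its "self-ascription" --
# a program that talks about (indeed, reproduces) itself.
import io
import contextlib

template = 's = %r\nprint(s %% s)'
program = template % template      # the self-ascription of the template

# Run the self-ascribed program and capture what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)

print(buf.getvalue() == program + '\n')  # True: it reproduces its own source
```

Just as the quine achieves self-reference without any primitive "this program" construct, G2 says of itself that it is unprovable even though LA has no primitive device for self-reference.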
When we give Gödel’s result this way, we prove, using informal reasoning in the metalanguage, that a
sentence saying that it isn’t provable in a formal theory T, isn’t a theorem of T. Are there theorems of T
that say this? Yes. Gödel showed there are theorems of T – G ↔ ~Prov([G]) (where [G] denotes the
code of G) – that assert that G is true iff G is unprovable in T. From this it follows that G isn’t a theorem,
for if it were, both Prov([G]) and ~Prov([G]) would be theorems, and T would be inconsistent. Is ~G a
theorem? Since G isn’t, ~Proof(n, [G]) is a theorem for all natural numbers n. It might seem to follow that
Prov([G]) isn’t a theorem, so ~G can’t, on pain of inconsistency, be a theorem. In fact, it doesn’t follow
unless one uses a notion of consistency slightly stronger than logical consistency, which Gödel did. Later
Rosser showed that logical consistency is enough by selecting a different class of unprovable sentences.
Gödel’s second incompleteness theorem extends this result by showing that logically consistent first-order theories of arithmetic like Peano Arithmetic can’t prove their own consistency – they can’t prove
theorems that say that they don’t prove contradictory theorems. The idea is this. We know from the first
incompleteness theorem that if PA is a consistent formal theory, then G isn’t a theorem of PA, but G ↔
~Prov([G]) is. This is expressed in PA by the fact that ConPA → ~Prov([G]) and G ↔ ~Prov([G]) are theorems,
where ConPA states that the theorems of PA are logically consistent. If ConPA were also provable in PA,
then G and ~Prov([G]) would be theorems. Since Prov([G]) would also be a theorem, PA would be inconsistent.
Since we know that PA is consistent, this means that ConPA isn’t provable in PA.
There is a connection between effective computability and proof in a formal system. Proof is a decidable
notion and systematic searches of proofs are decision procedures for membership in decidable sets of
natural numbers. Gödel’s recursive functions are one formalization of effective computability. Church’s
λ-definable functions are another. Church also used Gödel’s first incompleteness theorem to prove there
is no decision procedure for first-order logical consequence. His proof is based on the self-ascription G of
~∃x Proof(x, y). Since Proof encodes a decidable relation, if two numbers stand in it, a theorem of T says
they do; if they don’t, a theorem says they don’t. This doesn’t guarantee that if a sentence isn’t provable
some theorem says it isn’t. If it did, G’s unprovability would guarantee that ~∃x Proof(x, [G]) and G were
provable. So when S isn’t a logical consequence of T’s axioms, T will sometimes fail to tell you. In short,
first-order consequence is undecidable. Alan Turing proved the same result using a different, but equivalent
formalization of a computable function–one computed by a Turing machine. Such a machine is imagined
to run on an infinite tape divided into squares, each blank or having one dot on it. The machine moves
along the tape, one square at a time, checking to see whether or not the square it is on is blank. It can
print a dot on a blank or erase a dot on a square that had one. The machine has a finite number of internal
states and its instructions tell it what to do, based on the state it is in. The machine is digital, operating on
zeros and ones, which model the two positions of an electric circuit (open and closed). Its instructions
can be encoded in the first-order predicate calculus. These two features allowed Turing to prove the
undecidability of logical consequence, while also providing the mathematical basis for electronic
computing. With this, the turn to logic and language in analytic philosophy ushered in the digital age.
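The kind of machine just described can be simulated in a few lines. The sketch below is a minimal Python illustration; its tape convention, state names, and sample machine are my own assumptions, not anything from the text.

```python
# A minimal Turing machine simulator: a two-symbol tape ('0' blank,
# '1' dot), a finite set of states, and a transition table telling the
# machine what to write, which way to move, and which state to enter.
def run(transitions, tape, state='start', max_steps=1000):
    """transitions maps (state, symbol) to (write, move, next_state);
    move is -1 (left), 0 (stay), or +1 (right); 'halt' stops the run."""
    cells = dict(enumerate(tape))      # sparse tape; unvisited cells blank
    pos = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        write, move, state = transitions[(state, cells.get(pos, '0'))]
        cells[pos] = write
        pos += move
    return ''.join(cells.get(i, '0') for i in range(min(cells), max(cells) + 1))

# Sample machine: unary successor -- scan right over a block of dots,
# print one more dot on the first blank square, then halt.
succ = {
    ('start', '1'): ('1', +1, 'start'),   # keep moving right over dots
    ('start', '0'): ('1', 0, 'halt'),     # first blank: print a dot, halt
}
print(run(succ, '111'))   # '1111'
```

Because a transition table like `succ` is itself a finite object, it can be coded as input to another machine or as a first-order formula; that coding is what let Turing derive the undecidability of logical consequence from the behavior of such machines.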