Fundamental Notions
In Semantics
T.-H. Jonah Lin
Graduate Institute of Linguistics
National Tsing Hua University
September 2006
1. Meaning and natural language semantics
1. Meaning and semantics
Questions regarding meaning are usually big ones. We wonder about the “meaning” of life, we ask
what the meaning of doing a certain thing is, and so on. These questions are serious, but they are
also vague. They cannot be investigated by scientific methods, and there is no guarantee that a
verifiable answer can be reached. This is the impression that the term “meaning” leaves
on us.
Linguistic meaning, however, is very different. Compared with those big questions, linguistic
meaning is much simpler and more straightforward. Suppose someone comes up to you in the
street and says the following to you:
(1)
Πού είναι ο σταθµός τραίνων?
You don’t know what it is. You ask: “What do you mean?” You know this guy is uttering a
certain linguistic expression, but you don’t understand the “meaning” of the linguistic expression.
Linguistic expressions have meanings. It is part of the definition of language that it has
meaning. Semantics - the study of linguistic meaning - is meant to investigate the meanings of
linguistic expressions. To put it simply, semantics asks the following questions and tries to make
them clear:
(2)
How do linguistic expressions like (1) come to carry the meaning that they express?
How does one understand the meaning of a linguistic expression such as (1)?
At this point we need to review briefly the nature of language. We follow the
Saussurean idea that language is a symbolic system consisting of a component of the signifier and
a component of the signified. The linking of the two components constitutes the system of
language. Put in modern terms, a language consists of form and meaning, such that a specific form
denotes a specific meaning.
(3)
Form 1   ↔   Meaning 1
Form 2   ↔   Meaning 2
Form 3   ↔   Meaning 3
   .              .
   .              .
   .              .
Form n   ↔   Meaning n
(Language)
The forms, however, are more than an arbitrary collection of things. There are rules and principles
responsible for generating the forms of a language. Remember that language is a recursive system:
it can build an infinite number of sentences from a limited set of lexical items with the aid of
certain syntactic principles and rules. An inference we can draw is that the meanings must form a
recursive system as well.
(4)
John cried.                                          → Meaning 1
John cried at the school.                            → Meaning 2
John cried at the school sadly.                      → Meaning 3
John cried at the school sadly after being blamed.   → Meaning 4
The above are four distinct sentences, and there are four distinct meanings corresponding to
them. But the four sentences are not totally separate from each other; they are interrelated.
Specifically, each later sentence is generated by adding some further modification to an earlier
one. We may then suppose that Meanings 1-4 are likewise not independent of each other; they
too are interrelated. We may further assume that there are semantic rules/principles that
generate, say, Meaning 4 from Meaning 3 by adding some pieces of meaning.
In conclusion, natural language syntax is a recursive system, and natural language semantics
must be a recursive system too. Natural language generates an infinite set of sentences from a
finite set of lexical items and principles. The same must hold for semantics. In other words, we
have an infinite number of meanings because we can deduce them from a finite set of “atomic
meanings” and semantic principles.
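The recursive character just described can be illustrated with a small sketch. The code below is a hypothetical toy (the base sentence and modifier list are taken from example (4); the function names are mine): one finite lexicon and one recursive rule generate an unbounded family of sentences, each of whose meanings is built from the previous one.

```python
# Toy illustration: a finite base plus one recursive rule generates
# unboundedly many sentences, each paired with a meaning that extends
# the meaning of the previous sentence.

BASE = "John cried"
MODIFIERS = ["at the school", "sadly", "after being blamed"]

def sentence(n):
    """Build the n-th sentence by recursively adding one modifier (cycling)."""
    if n == 0:
        return BASE
    return sentence(n - 1) + " " + MODIFIERS[(n - 1) % len(MODIFIERS)]

def meaning(n):
    """Meaning n modeled crudely as the set of words contributing to it;
    Meaning n is Meaning n-1 plus some pieces of meaning."""
    return set(sentence(n).split())

for i in range(4):
    print(sentence(i) + ".")
# John cried.
# John cried at the school.
# John cried at the school sadly.
# John cried at the school sadly after being blamed.
```

Nothing stops `sentence(n)` for arbitrarily large `n`: finitely many primitives, infinitely many sentence-meaning pairs.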
2. Meaning and truth condition
But we still have to know what meaning is. That is, suppose a linguistic expression denotes a
meaning. What is this meaning that is being denoted?
This question seems to have a simple answer if we are talking about nouns. The traditional
idea is that nouns denote objects in the world. For example, the word table denotes the object
table, and the noun tiger denotes the animal tiger. But this idea is in fact problematic. First, there
are nouns in our language that do not denote concrete things. What object in the world does the
noun love denote? And the noun sincerity? We cannot find any object in the world that is called
love or sincerity. Furthermore, there are nouns that do not denote any object at all. The word
unicorn doesn’t denote anything in the world, since unicorns don’t exist. What about dinosaur?
Dinosaurs have long been extinct, and you find no dinosaurs in the world. So the noun dinosaur
denotes nothing in the world. Another kind of noun that poses a similar problem is words like
someone, nobody, and anyone. When we say Nobody likes John, do we mean that there is an
object in the world called nobody, such that this object likes John? Of course not. Incidentally, this
is the source of jokes like the following:
(The princess has been captured by the Demon King...)
Princess: Help! Help!
Demon King: Scream your throat hoarse (“pò hóulóng”) all you like; Nobody will come to save you!
Princess: Pò Hóulóng! Pò Hóulóng!
Nobody: Princess, I have come to save you!
Demon King: ..................................
(Translated from the Chinese: the princess mistakes “scream your throat hoarse” for a name, Pò Hóulóng, and calls it for help; then Nobody shows up in person.)
There is a bigger problem with this. Suppose that the meaning of a noun is the object in the
world that the noun denotes. How then can we tell the difference between the two sentences
below?
(5)
Morning Star is Evening Star.   (That is, Venus; a scientific discovery)
Morning Star is Morning Star.   (A cliché; necessarily true)
If Morning Star is Evening Star (that is, Venus), then both of the sentences would be of the
semantic form X is X. Since any X is X itself, this semantic form is necessarily true - in other
words, it is a cliché. But this is contrary to our linguistic intuition. According to our intuition, the
first sentence is obviously very meaningful - in fact it records a scientific discovery in the history
of astronomy. This is in sharp contrast with the second sentence, which is indeed a cliché. But if
we insist that the meaning of a noun is the object in the world that it denotes, we will conflate
these two sentences, contrary to our linguistic intuition.
In conclusion, the meaning of a linguistic expression cannot be as simple as the object it
denotes in the world. There must be something more. What is it, really?
To be straightforward, we can say that the meaning of a linguistic expression is the way it is
used. So, to understand the meaning of a linguistic expression, one needs to understand under
what circumstances one can use it. We will come back to nouns and other lexical items later, and
look at sentences first.
To understand the meaning of a sentence, we need to understand under what circumstances the
sentence can be used. Obviously, a declarative sentence can be used to make a true statement only
when it is true.
(6)
Suppose that John went to a movie last night with Mary -
Speaker A: John went to a movie last night with Mary.   ⇒ A true statement
Speaker B: John went to a movie last night with Jane.   ⇒ A false statement
We know the meaning of a sentence because we know how to judge whether it is true or false in
specific situations. In theoretical terms, to understand the meaning of a sentence is to
understand its truth condition, namely to understand in what situations the sentence is true.
(7)
‘John went to a movie last night with Jane’ is true if and only if John went to a movie last
night with Jane.
(And if John actually went out to a movie last night with Mary, ‘John went to a movie last
night with Mary’ is false.)
We usually express the truth condition of a sentence in the following format:
(8)
‘φ’ is true if and only if φ.
For example:
(9)
‘Snow is white’ is true if and only if snow is white.
This looks like a cliché, as in X is X. But in fact it is not. Notice that we quote the first φ, namely
‘φ’. This means that the first φ is an object under scientific investigation; the language it belongs
to is called the object language. When we investigate the semantics of English, English is the
object language subject to scientific investigation; and when we investigate the semantics of
Greek, Greek is the object language. To investigate something, we need a language in which to
describe our investigation; the language that we use is called the meta-language, namely the
scientific language. (9) looks stupid because we happen to use English as both the object
language and the meta-language. Things become much less stupid if the meta-language is
different from the object language. For example:
(10)
‘το χιόνι είναι άσπρο’ is true if and only if snow is white.
Here we want to know what the Greek sentence ‘το χιόνι είναι άσπρο’ means, and our formula tells
you that this Greek sentence is true if and only if snow is white. Alternatively, we can use a
different meta-language, for example:
(11)
|| Snow is white || = t iff white’(snow’)
(The value of the linguistic expression ‘Snow is white’ is TRUE if and only if the entity
that the linguistic expression ‘snow’ denotes has the property that the linguistic expression
‘white’ denotes.)
Here we use a logical language as the meta-language, as science in general does. In any case, it is
important to remember that to understand the meaning of a sentence, we need to understand its
truth condition, in the format expressed in (8).
3. Compositionality
Since linguistic expressions are infinite in number, there are an infinite number of “meanings.” We
need a way to capture this infinity of meaning. We cannot memorize all “meanings”: first, it is
practically impossible; second, it goes against the creativity of language; and third, linguistic
meaning doesn’t seem to have an independent status, as it depends crucially on syntax.
Obviously we know the meaning of a linguistic expression because we know the meanings of
its constituent parts and we know the way these parts are put together.
(9)
John likes Mary.
This sentence is “meaningful” to us because we know the meanings of the words John, Mary, and
likes. And also, we know what it “means” to put these words in such an order. This is different
from, say, a sentence with exactly the same words but with a different word order.
(10)
Mary likes John.
This principle, namely that the meaning of a linguistic expression crucially depends on its
composing parts and the way those parts are put together, is known as the principle of
compositionality of natural language semantics.
Now we have learned two things. First, to know the meaning of a linguistic expression, we need to
know its truth condition, namely under what situation the linguistic expression is true. Second, to
know the meaning of a linguistic expression we need to know its composing parts and the way
those parts compose the expression. These two things jointly lead us to a very important
conclusion: what natural language semantics does is to analyze each part of a linguistic
expression from a truth-conditional perspective, and then calculate the final truth condition of the
expression from those parts. How can we achieve this? The common practice is to employ set
theory.
4. A set-theoretic approach to compositionality
Before going on to the details, we need to clarify one question. It seems intuitively okay to say that
to understand the meaning of a linguistic expression one needs to know its truth condition, namely
under what situation the linguistic expression is true. But there is a problem: not every linguistic
expression can be said to be true or false. Sentences can be true or false, but nouns, adjectives,
prepositions (and others) cannot. What can we do about them?
Remember what we said about the meanings of linguistic expressions: we know the meaning
of a linguistic expression because we know the way to use it. Just as with sentences, for which we
make judgments as to whether they are true or false, we make judgments about nouns and other
lexical items. For example, if you see the thing John sits on in the classroom, you have no
problem identifying it as a chair. But suppose you and a bunch of friends go out to a field for a
picnic, and you see a stone on which people can sit. You say: “This is like a chair.” But other
people may say: “No, it doesn’t look like a chair.” Thus there is disagreement as to whether this
stone looks like a chair or not. In other words, you and those people make different judgments
with regard to the identification of the stone. The use of particular lexical items involves
judgment. There is reason to believe that this is the core concept of natural language semantics.
We may implement this idea in set terms. That is, we find it very helpful to define lexical
items as denoting sets.
(11)
a. John = {j}   (or {x | x is j})
b. Mary = {m}   (or {x | x is m})
c. boy = {x | x is a boy}
d. boys = {x | x is a collection of boys}
e. love = {x | x is love}
f. sincerity = {x | x is sincerity}

(12)
a. run = {x | x runs}
b. be red = {x | x is red}
c. like = {<x, y> | x likes y}
These, again, look like clichés. But in fact they are not. When someone says X is a table, all it
means is that this person makes a judgment and identifies X as a member of the set table in
his/her mental grammar. Different people may have different judgments about the identification of
something, and that means that people may have different set-memberships in their mental
grammars. We don’t need to care about what boy, chair, or sincerity denotes in the world; it
doesn’t matter. What matters is the judgment that a speaker makes regarding the identification of
the thing in question with regard to set-membership in the speaker’s mental grammar.
Now we can see how these notions yield the desired results.
(15)
John runs.
a. John = j   (the individual labeled John)
b. run = {x | x runs}   (the set of all things that run)
c. John runs is true iff j is a member of the set {x | x runs}, false otherwise.

(16)
John likes Mary.
a. John = j; Mary = m
b. like = {<x, y> | x likes y}   (the set of all pairs such that the first element of the pair
   likes the second element of the pair)
c. John likes Mary is true iff the pair <j, m> is a member of the set like.

(17)
John likes black clothes.
a. John = j; like = {<x, y> | x likes y}; clothes = {x | x is a piece of clothing}
b. black = {x | x is black}   (the set of all things that are black)
c. black clothes = {x | x is black} ∩ {x | x is a piece of clothing}
   = {x | x is black and x is a piece of clothing}
   (the intersection of the set of black things and the set of clothing)
d. John likes black clothes is true iff the pairs <j, y>, for y in black clothes, are members
   of the set like, namely iff John and things that are black clothes stand in the relation of
   liking.
In the above examples we use a logical language as the meta-language and specify the truth
conditions of the different sentences. We can of course use other meta-languages. But the point
is that this is what we are going to do in natural language semantics - specify the truth
conditions of sentences in a recursive way, namely building up truth conditions from a finite set
of smaller elements and principles.
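The set-theoretic calculations above can be mirrored directly in code. The sketch below is an illustration only: the little model of the world (who runs, who likes what) is invented, but the mechanics follow the examples exactly: lexical items denote sets, black clothes is computed by intersection, and truth is membership.

```python
# Denotations as sets (a hypothetical model of the world).
j, m = "John", "Mary"
run     = {"John"}                        # {x | x runs}
like    = {("John", "Mary"),              # {<x, y> | x likes y}
           ("John", "coat1")}
black   = {"coat1", "stone1"}             # {x | x is black}
clothes = {"coat1", "shirt1"}             # {x | x is a piece of clothing}

# 'John runs' is true iff j is a member of the set denoted by 'run'.
john_runs = j in run

# 'John likes Mary' is true iff the pair <j, m> is a member of 'like'.
john_likes_mary = (j, m) in like

# 'black clothes' denotes the intersection of the two sets.
black_clothes = black & clothes

# 'John likes black clothes': John stands in the liking relation to
# something that is in the set black clothes.
john_likes_black_clothes = any((j, y) in like for y in black_clothes)

print(john_runs, john_likes_mary, john_likes_black_clothes)  # True True True
```

Change the model (say, remove `("John", "coat1")` from `like`) and the computed truth values change with it; the truth conditions themselves stay fixed.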
2. Logic and natural language semantics
1. What is logic?
Logic is the science of reasoning. We have different intuitions about different kinds of statements.
For example, when we say Morning Star is Evening Star, the statement can be true or false, and its
truth can be verified empirically, namely by checking the state of affairs of the world. On the
other hand, when we say Morning Star is Morning Star, we are not making a statement with
empirical content - this statement is true no matter what the state of affairs in the world is.
Statements like Morning Star is Morning Star are called tautologies, that is, necessarily true
statements. The necessary truth of a tautology has nothing to do with the empirical world; it is
necessarily true because of its form, X is X. Logic is concerned with the forms of statements and
arguments.
(1)
“Logic is concerned with arguments, good and bad.” – Kalish, Montague, and Mar.
What does it mean to say that logic is the science of reasoning? Basically, what logic does is
show that a given statement or argument is valid (or not) based on certain principles and rules.
See the following example.
(2)
John is a man. Every man loves BMW. So John loves BMW.
(2) is an argument consisting of three statements. The last statement is the conclusion based on
the previous two statements. This argument is intuitively correct. Can we prove it? Yes we can.
First we formalize the elements:
(3)
Formalization:
John   ⇒   j             (the individual John)
man    ⇒   Man (x)       (all individuals with the property of being a man)
love   ⇒   Love (x, y)   (the property of any <x, y> such that x loves y)
BMW    ⇒   BMW           (the entity BMW)
And then we reformulate the argument in terms of the formalized elements, as follows:
(4)
Premise 1:
Man (j)
Premise 2:
∀x [Man (x) → Love (x, BMW)]
∴
Love (j, BMW)
Now we want to show that the conclusion is valid based on the two premises. The proof goes as
follows.
(5)
Show: Love (j, BMW)
1. Man (j)                           Premise 1
2. ∀x [Man (x) → Love (x, BMW)]      Premise 2
3. Man (j) → Love (j, BMW)           2, Universal Instantiation
4. Love (j, BMW)                     1, 3, Modus Ponens
Q.E.D.
Each of the steps in (5) is intuitively correct. (They can also be proved correct.) First we state
what we want to show, that Love (j, BMW) is true (hence the assertion Show). Then we bring the
two premises down into the proof, as in lines 1 and 2. Next we apply a rule to line 2 and derive
line 3. This step is called Universal Instantiation. This rule says that if all elements in the universe
(technically, “the domain of discourse”) have the property P, then it is safe to say that any
particular element in the universe - let’s say John - has the property P as well. So if all men love
BMW, it is necessarily true that John, as a member of “all men,” loves BMW. This is why we get
line 3. Now we find that lines 1 and 3 feed the rule of Modus Ponens. This rule says that if p→q,
and we have p, then we get q. Thus we get the conclusion in line 4, namely Love (j, BMW), as
desired. So we’ve successfully shown what we intended to show.
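The argument in (5) can also be checked model-theoretically. In the sketch below (the finite model is invented for illustration), Universal Instantiation corresponds to picking one element of the domain, and truth preservation means: in any model where both premises hold, the conclusion must hold.

```python
# A small finite model (hypothetical): individuals, the set Man, the
# relation Love. The proof in (5) guarantees that whenever both
# premises are true in a model, the conclusion Love(j, BMW) is too.

domain = {"john", "bill", "mary", "bmw"}
man    = {"john", "bill"}                       # Man(x)
love   = {("john", "bmw"), ("bill", "bmw"),     # Love(x, y)
          ("mary", "bill")}

premise1   = "john" in man                            # Man(j)
premise2   = all((x, "bmw") in love for x in man)     # ∀x [Man(x) → Love(x, BMW)]
conclusion = ("john", "bmw") in love                  # Love(j, BMW)

# Truth preservation: premises true implies conclusion true.
valid_here = (not (premise1 and premise2)) or conclusion
print(premise1, premise2, conclusion, valid_here)  # True True True True
```

A syntactic proof like (5) is stronger than this single check: it shows the implication holds in every model, not just the one we happened to write down.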
Here is another example. The proof below involves two special derivations: the conditional
derivation and the indirect derivation.
(6)
If it is not the case that John is male, then it is not the case that John is male and Mary is
female.

Formalization:
John is male     ⇒   P
Mary is female   ⇒   Q
⇒   ∼P → ∼(P ∧ Q)

Show: ∼P → ∼(P ∧ Q)
1. ∼P                Assumption, Conditional Derivation
2. Show: ∼(P ∧ Q)    Assertion
3. P ∧ Q             Assumption, Indirect Derivation
4. P                 3, Simplification
5. ∼P                1, Repetition
Q.E.D.
What is conditional proof? A conditional proof works like this: if you plan to prove a conditional
statement p→q, you first assume p, and then try to derive q. If you successfully derive q, then you
have proved p→q. Indirect proof (also known as reductio ad absurdum) works like this: if you
plan to prove p, you first assume ∼p, and then try to derive a contradiction, such as q and ∼q. If
you can successfully do so, you have proved p. The logic of the derivation in (6), then, is this. We
try to prove ∼P → ∼(P ∧ Q), so we first assume that ∼P is true (line 1), and then try to show that
∼(P ∧ Q) is true (line 2). To show that ∼(P ∧ Q) is true, we use indirect proof, assuming that P∧Q
is true (line 3). If P∧Q is true, then P must be true; this rule is called Simplification. (Since the
definition of conjunction is that all the conjuncts are true, if P∧Q is true, P must be true.) So we
have P (line 4). But we have already assumed that ∼P is true (line 1), and this leads to a
contradiction: P is true in line 4, and ∼P is true in line 1. Since a statement cannot be true and
false at the same time, we have arrived at a contradiction. This indicates that the assumption P∧Q
is false. Thus its negation, ∼(P ∧ Q), must be true (since any statement is either true or false). In
this way we successfully prove that ∼(P ∧ Q) is true. And the conditional proof initiated at line 1
is thereby also completed.
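The result of this derivation can be cross-checked by brute force over truth values. The sketch below is a generic tautology check (not part of the original handout): it verifies that ∼P → ∼(P ∧ Q) comes out true on every one of the four assignments to P and Q.

```python
from itertools import product

def impl(p, q):
    """Material conditional p → q."""
    return (not p) or q

# Evaluate ~P -> ~(P ∧ Q) on all assignments to P and Q.
rows = [(p, q, impl(not p, not (p and q)))
        for p, q in product([True, False], repeat=2)]
for p, q, v in rows:
    print(p, q, v)

is_tautology = all(v for _, _, v in rows)
print(is_tautology)  # True
```

This is the semantic counterpart of the syntactic proof in (6): the derivation shows the formula is a theorem; the table shows it is true under every valuation.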
We can also prove very abstract and complex things. We have seen that the argument in (2) is
valid; the proof is given in (5). We can go one step further and show that all arguments of the
form of (2) are valid. This is shown below.
(7)
Show: ∀P∀Q [[P(a) ∧ ∀x [P(x) → Q(x, b)]] → Q(a, b)]
1. Show: [P(a) ∧ ∀x [P(x) → Q(x, b)]] → Q(a, b)    Universal Derivation
2. P(a) ∧ ∀x [P(x) → Q(x, b)]                      Assumption CD
3. Show: Q(a, b)                                   Assertion
4. ∀x [P(x) → Q(x, b)]                             2, S
5. P(a) → Q(a, b)                                  4, UI
6. P(a)                                            2, S
7. Q(a, b)                                         5, 6, MP
Q.E.D.
You may spend some time convincing yourself that the formula proved in (7) is indeed a more
general form of the argument in (2). It says that for any property P (e.g., being a man) and for any
property Q (e.g., love), if the thing a (e.g., John) has the property P (John is a man), and for every
x, if x has the property P then x has the property Q relative to the thing b (e.g., BMW) (all men
love BMW), then the thing a has the property Q relative to the thing b (John loves BMW). Thus if
we replace P with is a student and Q with hate, and designate a as Mary and b as exams, we
obtain a valid argument like:
(8)
Mary is a student. All students hate exams. So Mary hates exams.
One special technique used in (7) is what we call universal derivation. Basically it says that
when you have a universal statement to prove (a statement like ∀xPx), you may choose an
arbitrary thing in the place of x and show that the result is true.
Conclusion: Logical thinking and reasoning is the foundation of science. It is a specific way of
thinking; it focuses on truth preservation. That is, in a series of derivational steps, the truth of each
step must be guaranteed by earlier steps or the premises. “Jumping around” among steps is
strictly forbidden, as it leads to logical fallacy. Because of this characteristic of truth preservation,
logic is a very good language to use in scientific investigation. This is why it is used as the
meta-language for most scientific research.
2. Human thinking and logic
Logic is a very precise language. This is why it is the meta-language for science. However, human
beings don’t always think logically. People tend to think they are rational, though in many cases
they are not. Sometimes when people think they are reasoning, they are in fact making use of the
mode of thinking called heuristics. In this mode of thinking, we tend to take prototypes as
representatives and base our judgments on them, or we count on things that we are familiar with
and take them as the basis for decision-making. But these habits can yield tremendous errors.
(9)
Steve is very shy and withdrawn, invariably helpful, but with little interest in people, or in
the world of reality. A meek and tidy soul, he has a need for order and structure, and a
passion for detail.
Question: What kind of person do you think Steve is – a farmer, a salesman, an airline
pilot, a librarian, or a physician?
(10)
I have a week for vacation and I am planning for the vacation. It occurs to me that,
obviously, it is much safer to drive to Kaohsiung to visit a friend than to spend 5 days as a
tourist visiting the holy places and biblical historical sites in Israel.
Sometimes we feel we are making a sound judgment, which turns out to be wrong because we fail
to take all relevant factors into consideration.
(6) The state of New Mexico has a very high mortality rate from pulmonary tuberculosis. It is
therefore apparent that New Mexico is a bad place to live, since many of its population are
afflicted with pulmonary tuberculosis.
Sometimes we are simply misled by language.
(7) A story from Russia's Lost Literature of the Absurd
There was once a red-haired man who had no eyes and no ears. He also had no hair, so
he was called red-haired only in a manner of speaking. He wasn't able to talk, because he
didn't have a mouth. He had no nose, either.
He didn't have any arms or legs. He also didn't have a stomach, and he didn't have a back,
and he didn't have a spine, and he also didn't have any other insides. He didn't have anything.
So it's hard to understand whom we are talking about.
So we'd better not talk about him any more.
(8) Paradoxes of Zeno of Elea (490-425 BC)
1. The Dichotomy: Motion cannot exist because before that which is in motion can reach
its destination, it must reach the midpoint of its course; but before it can reach the midpoint, it
must reach the quarter-point; but before it reaches the quarter-point, it must first reach the
eighth-point, etc. Hence, motion can never start.
2. The Achilles: The running Achilles can never catch a crawling tortoise ahead of him,
because he must first reach the place where the tortoise started. However, when he reaches
there, the tortoise has moved ahead, and Achilles must now run to the new position, which the
tortoise has again left by the time he reaches it, etc. Hence the tortoise will always be ahead.
3. The Arrow: Time is made up of instants, which are the smallest measure and indivisible.
An arrow is either in motion or at rest. An arrow cannot move, because for motion to occur,
the arrow would have to be in one position at the start of an instant and at another at the end of
the instant. However, this means that the instant is divisible, which is impossible because, by
definition, instants are indivisible. Hence, the arrow is always at rest.
The mini-story in (7) is absurd because at the beginning it posits something which seems to have
concrete reference and properties, only to strip this thing of all its properties in the end. This
indicates that what looks concrete and real may eventually be just an illusion created by our
language. The paradoxes in (8) are paradoxical simply because they take a recursive rule
(…before moving a distance X one must move half the distance, X/2…) and keep applying it,
disregarding reality. These are all rules in language; they have nothing to do with the real world.
Conclusion: Logic is a special way of thinking. It is strictly systematic – in the sense that it
has primitives and rules that govern the combination of these primitives and the generation of
strings (or structures) out of these primitives. In other words, logic is a language.
3. The nature of logic
To be precise, what does logic do? What logic does is formal reasoning, namely reasoning based
on the form of the argument. Given a set of premises P1, P2, … Pn, we employ inference rules and
generate a set of theorems T1, T2, … Tm. If the premises are true, then the theorems are also true,
since the inference rules of logic preserve truth. A different way to put it is this: the theorems T1,
T2, … Tm are already “contained” in the premises P1, P2, … Pn; we simply “discover” them. In
such a situation the theorems T1, T2, … Tm are said to be analytically true.
In logic (and semantics) it is important to make a distinction between synthetic truth and
analytic truth. Synthetic truth is truth by empirical verification. Analytic truth is truth by
definition. A very good example is the contrast between Morning Star is Evening Star and
Morning Star is Morning Star. The former statement is a synthetically true statement, since to
identify two things (two names, in fact), we need empirical evidence. On the other hand, the latter
statement is analytically true, because its truth has nothing to do with the empirical states of the
world; it is true simply by its form - X is X. Truth in logic is of the latter kind, namely analytic
truth.
How can formal reasoning be studied? On this question we need to say something about the
ontology of logic. We say that logical reasoning preserves truth with absolute necessity. How is
such absolute necessity possible? There have been two proposals. The first is called Realism.
According to this proposal, there is a mathematical world out there, existing independently, and it
contains mathematical entities. The job of mathematicians is to discover them. The second
proposal is called Intuitionism. According to this proposal, mathematical entities are mental
constructions in the human mind and therefore are not independent of the human mind. They
exist because the mathematical intuitions in human mental structure support them. They are not
real. These two proposals have very different consequences for our view of the nature of the
universe. If Realism is correct, then for each mathematical expression there is a corresponding
reality out there in the universe; that is, mathematics represents the true picture of the universe.
On the other hand, if Intuitionism is correct, then what we think - and science in general - may
have no connection with the reality of the universe. We would not know what is going on in the
universe, since mathematics and science would just be inventions of human minds with no
bearing on the truth of the universe. Which of the two proposals is correct is a question debated
among philosophers. There are advantages and weaknesses to each of the two proposals, though.
4. Logic and natural language
Why bother studying logic? This question has three answers.
 To have your brain exercised!
 Logic is a scientific language and may serve as a tool to express human language, particularly
in the area of natural language semantics.
 The system of logic is itself a language; its axiomatization provides a model for the
axiomatization of natural language.
Logic is an axiomatic system. An axiomatic system is a system that can generate a large number
of strings or structures from a small number of primitives and rules/principles. We have seen that
natural language is such a system. In other words, natural language is an axiomatic system, and
perhaps the most complicated one in our universe. Logic is an axiomatic system too; therefore
logic is also a language in the technical sense.
The system of logic has only a small number of primitives, e.g. the logical symbols ∧, ∨, →, ∀,
and so on. With these symbols (and the help of some ancillary symbols) we can generate an
infinite number of logical expressions:
(9)
P→Q
(P→Q)→([Q→R]→[P→R])
(~P→~Q)→(Q→P)
P∧(Q∨R)↔(P∧Q)∨(P∧R)
(P→[Q↔R])↔([P→Q]↔[P→R])
~(P↔Q)↔(P↔~Q)
(P↔Q)↔([P↔R]↔[Q↔R])
(P↔R)∧(Q↔S)→([P→Q]↔[R→S])
(and infinitely many more …)
The logical expressions in (9) are all well-formed; that is, they look “good” in form. On the other
hand, the “logical expressions” in (10) look bad in form. They are not well-formed; they are
ill-formed.
(10)
→PQ∀
∧P (Q∨↔R) (Q P∧)∨∧ (PR)
PQPP→→→
This tells us that logic has a syntax. The construction of the logical expressions must follow the
syntax of logic to be well-formed. Furthermore, logic has a semantics. This means that each
logical expression has a “meaning” - true or false. For instance:
(11)
P→(Q→P)   (Necessarily true; a theorem)
P→(Q→R)   (Can be true or false, depending on P, Q, R)
~P∧P      (Necessarily false)
A necessarily true logical expression is true regardless of the truth of its atomic elements; thus
P→(Q→P) is true regardless of the truth or falsity of P and Q. Such logical expressions are
theorems. Below is a proof of its validity.
(12)
Show: P→(Q→P)     Assertion
1. P              Assumption CD
2. Show: Q→P      Assertion
3. Q              Assumption CD
4. P              1, Repetition
Q.E.D.
P→(Q→R) is not a theorem; it can be true or false depending on the values of P, Q, and R. (For
example, if P and Q are true and R is false, then P→(Q→R) is false; and if P is false, with Q still true
and R still false, then P→(Q→R) becomes true.) ~P∧P is a necessarily false logical expression;
it is also called a contradiction.
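These classifications can be checked mechanically by enumerating every assignment of truth values to the atomic propositions. The Python sketch below does exactly that by brute force; the helper names (implies, is_theorem, is_contradiction) are my own, not part of the text.

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_theorem(formula, n_atoms):
    # A theorem (tautology) is true under every assignment of truth values.
    return all(formula(*vals) for vals in product([True, False], repeat=n_atoms))

def is_contradiction(formula, n_atoms):
    # A contradiction is false under every assignment.
    return not any(formula(*vals) for vals in product([True, False], repeat=n_atoms))

# The three cases of (11):
print(is_theorem(lambda p, q: implies(p, implies(q, p)), 2))      # P→(Q→P): True
print(is_theorem(lambda p, q, r: implies(p, implies(q, r)), 3))   # P→(Q→R): False (contingent)
print(is_contradiction(lambda p: (not p) and p, 1))               # ~P∧P: True
```

Since a formula with n atoms has only 2^n assignments, this exhaustive check is exactly the truth-table method in executable form.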
Since logic has a syntax and a semantics, it resembles natural language. Thus the study of
logic as a language may serve as a model for the study of natural language. Since logic itself is a
language with syntax and semantics, it must have a grammar. Below is a sketch of the grammar of
the logical system that we are familiar with.
(13)
The grammar of the language of propositional logic L
I. Syntax
(a) Propositions (A): {P, Q, R, S…}
(b) The operators (O):
    →  Conditional
    ∨  Disjunction
    ∧  Conjunction
    ∼  Negation
    ↔  Bi-conditional
    (  Left bracket
    )  Right bracket
(c) Rules of syntax:
    i.  Any φ ∈ A is a well-formed sentence in L.
    ii. If φ and ψ are well-formed sentences in L, then
        1. (φ) is a well-formed sentence in L;
        2. ∼φ is a well-formed sentence in L;
        3. φ→ψ is a well-formed sentence in L;
        4. φ↔ψ is a well-formed sentence in L;
        5. φ∨ψ is a well-formed sentence in L;
        6. φ∧ψ is a well-formed sentence in L.
    iii. Nothing else is a sentence in L.
II. Semantics
(a) F (written as ||…||) is an interpretation function from sentences in L to {0, 1}.
    (0 = false, 1 = true)
(b) Values of sentences in L:
    i.   || φ || = F(φ).
    ii.  || ∼φ || = 1 iff || φ || = 0; 0 otherwise.
    iii. || φ→ψ || = 1 iff || φ || = 0 or || ψ || = 1; 0 otherwise.
    iv.  || φ↔ψ || = 1 iff || φ || = || ψ ||; 0 otherwise.
    v.   || φ∨ψ || = 1 iff || φ || = 1 or || ψ || = 1; 0 otherwise.
    vi.  || φ∧ψ || = 1 iff || φ || = 1 and || ψ || = 1; 0 otherwise.
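The semantic clauses in (13-II) can be implemented directly as a recursive evaluator. Below is a minimal Python sketch; the tuple-based formula encoding and the function name interpret are my own choices, not part of the text.

```python
# A minimal evaluator for the language L of (13). Formulas are nested
# tuples, e.g. ('->', 'P', 'Q'); F is a dict assigning 0/1 to atoms.
def interpret(phi, F):
    if isinstance(phi, str):          # clause (b-i): atomic sentence
        return F[phi]
    op = phi[0]
    if op == '~':                     # || ~φ || = 1 iff || φ || = 0
        return 1 - interpret(phi[1], F)
    a, b = interpret(phi[1], F), interpret(phi[2], F)
    if op == '->':                    # 1 iff || φ || = 0 or || ψ || = 1
        return 1 if a == 0 or b == 1 else 0
    if op == '<->':                   # 1 iff || φ || = || ψ ||
        return 1 if a == b else 0
    if op == 'v':                     # 1 iff at least one disjunct is 1
        return 1 if a == 1 or b == 1 else 0
    if op == '^':                     # 1 iff both conjuncts are 1
        return 1 if a == 1 and b == 1 else 0
    raise ValueError("unknown operator: " + op)

F = {'P': 1, 'Q': 0}
print(interpret(('->', 'P', 'Q'), F))   # 0: true antecedent, false consequent
print(interpret(('->', 'Q', 'P'), F))   # 1: false antecedent
```

Each branch of the function corresponds one-to-one to a clause of (13-II-b), which is what makes the semantics compositional: the value of a complex formula is computed from the values of its parts.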
We can make the syntax and semantics of logic clear by means of tables, called truth tables.
(14)
P  ∼P
T  F
F  T

P  Q  P→Q
T  T  T
T  F  F
F  T  T
F  F  T

P  Q  P∧Q
T  T  T
T  F  F
F  T  F
F  F  F

P  Q  P∨Q
T  T  T
T  F  T
F  T  T
F  F  F

P  Q  P↔Q
T  T  T
T  F  F
F  T  F
F  F  T
By investigating the grammar of logic, which is pure and clean, we hope to get some insight into
the grammar of natural language, which is much more complicated and messy.
An important use of logic is to serve as the meta-language for scientific statements. This is
why scientific writings are full of logical and mathematical symbols. Linguistics also employs
logic as a meta-language, in particular natural language semantics. Consider the following
sentence.
(15)
Every man loves BMW.
We can express our linguistic knowledge about this sentence in the following form:
(16)
‘Every man loves BMW’ is true if and only if every man loves BMW.
In (16) we use ordinary English as the meta-language to describe the truth condition of the sentence
in (15). But we can also use logic for the same purpose, with a higher level of precision. (The
symbol man’ stands for the translation of the word ‘man’ into the logical language.)
(17)
|| ‘Every man loves BMW’ || = 1 ↔ ∀x (man’(x) → love’(x, BMW’))
And the sentence in (18) can be interpreted as in (19).
(18)
Some man loves BMW.
(19)
|| ‘Some man loves BMW’ || = 1 ↔ ∃x (man’(x) ∧ love’(x, BMW’))
20
In conclusion, the study of logic serves the study of natural language both as a model and as a
useful tool. This is why we had better learn some logic before going on to a detailed discussion of
natural language semantics.
3. Sets, Functions, and Intension/Extension
With logic as a tool at hand, we can now move on to the formal mechanism by which semanticists
construct semantic analyses. In earlier discussion we have seen that it is advantageous to consider
lexical items in natural language as denoting sets. We will begin this section by looking further
into the notion of set.
1. Set and related notions
What is a set? To put it simply, a set is an abstraction over a collection of things. If you put
a number of things together, you do not automatically form a set. A set is more abstract than that.
To understand this, consider the following example. Suppose there is a collection of things C as
follows:
a, a, d, f, t, h, h, j, j, j, l, v, q
We may ask the following questions, and get the answers.
• How many elements are included in C? Answer: 13.
• Can we say that C is itself a set with 13 elements? Answer: No.
Suppose we have a collection of things C as specified above. We can form a set S out of C by
abstracting away those things that are identical.
S: {a, d, f, t, h, j, l, v, q}
Now we ask again: How many elements are included in the set S? Answer: 9. So a set is a special,
abstract way of representing things; it is not simply a collection of things.
Any arbitrary things can compose a set. Example: a set that consists of 1 balloon, a pair of
shoes, and the natural number 5. But typically we are not interested in sets that consist of arbitrary
things. We are more interested in sets whose members share certain properties or characteristics.
Example: S1 = {x | 0 ≤ x ≤ 100}; S2 = {x | x is a person and x lives in New York}. A set defined by a
shared property in this way corresponds to a characteristic function: the function that maps a thing
to true just in case it has the property, and to false otherwise.
Below are some notions fundamental to the notion of set.
• Subset: A is a subset of B (A ⊆ B) iff all members of A are also members of B.
• Superset: B is a superset of A iff all members of A are also members of B.
• Intersection: A is the intersection of B and C iff
  A = {x | x ∈ B and x ∈ C}
• Union: A is the union of B and C iff
  A = {x | x ∈ B or x ∈ C}
• Power set: The power set of A is the set that consists of all subsets of A as its members.
  P(A) = {x | x ⊆ A}
  Example: The power set of the set {a, b, c} is the set
  {{a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}, Ø}
• Empty set: Ø is a subset of every set. This is so because Ø vacuously satisfies the definition
  of subset; that is, all members of Ø (of which there are none) are also members of any given set A.
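Python's built-in set type implements these notions directly, so the definitions above can be checked concretely. A small sketch (the power_set helper is my own name, not a built-in):

```python
from itertools import combinations

A = {'a', 'b', 'c'}
B = {'a', 'b', 'c', 'd'}

print(A <= B)                  # subset: True
print(B >= A)                  # superset: True
print(A & {'b', 'c', 'q'})     # intersection: {'b', 'c'}
print(A | {'q'})               # union: {'a', 'b', 'c', 'q'}

def power_set(s):
    # All subsets of s, of every size, including Ø and s itself.
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

print(len(power_set(A)))       # 2**3 = 8 subsets, matching the example above
print(set() <= A)              # Ø is a subset of every set: True
```

Note that the power set of a 3-element set has 2³ = 8 members, which is why the example in the text lists eight subsets including Ø.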
We may digress a little and look at a famous paradox, known as Russell’s Paradox. Suppose there
is a set A which contains all sets that are not elements of themselves; in other words, A = {x | ~(x
∈ x)}. Now we ask a question: Is A an element of itself, i.e. A ∈ A? There are two possibilities:
• If the answer is yes, then A is an element of itself. But then it does not meet the characteristic
  function of A, so it is not an element of itself. Contradiction!
• If the answer is no, then A is not an element of itself. But then it meets the characteristic
  function of A, so it is an element of itself. Contradiction!
We see that both possibilities yield contradictory, and hence unacceptable, results. This teaches
us a lesson: though we say that any arbitrary things may constitute a set, you cannot simply assume
an arbitrary property and suppose that a set will come out automatically. A technical implication of
this lesson is that there is no set which contains (literally) “all sets”; in other words, the
mathematical world is “open-ended.”
2. Relations and functions
We said earlier that the denotation of boy is the set {x | x is a boy}, and the denotation of the
predicate be red is the set {x | x is red}. These are easy to understand. We also said that the
denotation of the predicate like is the set {<x, y> | x likes y}, namely the set of pairs such that the
first member of the pair holds the relation of liking to the second member of the pair. But what
is a pair such as <x, y> in this example? Such a pair is called an ordered pair, and the set of all
ordered pairs formed from two sets is called their Cartesian product. Given two sets A and B, we can
form the Cartesian product of A and B:
(1)
A×B = {<a, b> | a ∈ A and b ∈ B}
Given sets S1, S2, … Sn, the Cartesian product of all these sets, namely S1×S2×…×Sn, is the set of
ordered n-tuples:
(2)
{<a1, a2, … an> | a1 ∈ S1, a2 ∈ S2, … an ∈ Sn}
Predicates such as like, semantically speaking, are relations. Given two sets A and B, a relation
can be defined as a subset of the Cartesian product of A and B, i.e. a subset of A×B. For example:
(3)
A: {a, b, c}
B: {b, f, h, q}
R: {<a, f>, <b, q>, <c, b>, <b, b>, <a, q>}
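The example in (3) can be checked in Python: itertools.product builds the Cartesian product A×B, and a relation is simply a subset of it. A sketch, with variable names mirroring the example:

```python
from itertools import product

A = {'a', 'b', 'c'}
B = {'b', 'f', 'h', 'q'}

# A×B: the set of all ordered pairs <a, b> with a ∈ A and b ∈ B.
cartesian = set(product(A, B))

# The relation R of (3), as a set of pairs.
R = {('a', 'f'), ('b', 'q'), ('c', 'b'), ('b', 'b'), ('a', 'q')}

print(len(cartesian))    # |A| × |B| = 3 × 4 = 12 pairs
print(R <= cartesian)    # a relation is a subset of A×B: True
```

The subset test `R <= cartesian` is exactly the definition in the text: any set of pairs drawn from A×B counts as a relation between A and B.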
We have a natural language analogy for relations. Given a domain of discourse D which consists of
5 individuals {John, Mary, Jane, Max, Bill}, we can define the relation of liking as
(4)
R: || like || ⊆ D×D = { <John, Bill>,
<Mary, Max>,
<Bill, Jane>,
<John, Jane>,
<Max, John> }
Thus, in this mini-world, John likes Bill, Mary likes Max, Bill likes Jane, John likes Jane, and Max
likes John. Other liking relations do not exist. For instance, Bill doesn’t like John, nor does Jane
like herself.
Some natural language predicates are more specific than relations. They are functions. A
function is a relation subject to the following restriction:
(5)
F is a function ↔ F is a relation & for all <a, b> and <a, c> ∈ F, b = c.
An example of natural language analogy is the following:
(6)
Given a domain of discourse D which consists of 5 individuals
{John, Mary, Jane, Max, Bill}
F: || is the spouse of || ⊆ D×D = { <John, Jane>,
<Mary, Bill> }
If F is as specified, then <John, Mary> cannot be a possible element of F, though <Max, Jane> is a
possible element of F. This is so because F is a function: John’s spouse is Jane, so John cannot
also have a different spouse, Mary.
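The restriction in (5) can be checked mechanically: a relation fails to be a function as soon as one first member is paired with two distinct second members. A Python sketch using the liking relation of (4) and the spouse relation of (6):

```python
def is_function(rel):
    # A relation is a function iff no first member is paired with two
    # distinct second members: <a, b>, <a, c> ∈ F implies b = c.
    seen = {}
    for a, b in rel:
        if a in seen and seen[a] != b:
            return False
        seen[a] = b
    return True

likes = {('John', 'Bill'), ('Mary', 'Max'), ('Bill', 'Jane'),
         ('John', 'Jane'), ('Max', 'John')}
spouse = {('John', 'Jane'), ('Mary', 'Bill')}

print(is_function(likes))    # False: John is paired with both Bill and Jane
print(is_function(spouse))   # True: each person has at most one spouse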
Given two sets A and B, a function F from A to B is said to map A to B.
(7)
{x | <x, y> ∈ F}: the domain of the function F;
{y | <x, y> ∈ F}: the range of the function F.
Some important notions:
• Reflexivity. Given a set A and a binary relation R in A:
  R is reflexive ↔ ∀x ∈ A [<x, x> ∈ R]  (has the same birthday as)
  R is non-reflexive ↔ ∼∀x ∈ A [<x, x> ∈ R]  (is a financial supporter of)
  R is irreflexive ↔ ∀x ∈ A ∼[<x, x> ∈ R]  (is taller than)
• Symmetry. Given a set A and a binary relation R in A:
  R is symmetric ↔ ∀<x, y> ∈ R [<y, x> ∈ R]  (is a cousin of)
  R is non-symmetric ↔ ∼∀<x, y> ∈ R [<y, x> ∈ R]  (is a sister of)
  R is asymmetric ↔ ∀<x, y> ∈ R ∼[<y, x> ∈ R]  (is older than)
  R is antisymmetric ↔ ∀x, y ∈ A [<x, y> ∈ R ∧ <y, x> ∈ R → x = y]  (is a subset of)
• Transitivity. Given a set A and a binary relation R in A:
  R is transitive ↔ ∀x, y, z ∈ A [<x, y> ∈ R ∧ <y, z> ∈ R → <x, z> ∈ R]  (is an ancestor of)
  R is non-transitive ↔ ∼∀x, y, z ∈ A [<x, y> ∈ R ∧ <y, z> ∈ R → <x, z> ∈ R]  (is a friend of)
  R is intransitive ↔ ∀x, y, z ∈ A [<x, y> ∈ R ∧ <y, z> ∈ R → ∼[<x, z> ∈ R]]  (is the mother of)
• Connectedness. Given a set A and a binary relation R in A:
  R is connected ↔ ∀x, y ∈ A [x ≠ y → <x, y> ∈ R ∨ <y, x> ∈ R]  (is greater than)
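These properties can be tested by direct quantification over a finite relation. A Python sketch; the relation "is less than or equal to" on a small numeric domain is my own example (it is reflexive and transitive but not symmetric):

```python
def is_reflexive(rel, domain):
    # ∀x ∈ A [<x, x> ∈ R]
    return all((x, x) in rel for x in domain)

def is_symmetric(rel):
    # ∀<x, y> ∈ R [<y, x> ∈ R]
    return all((y, x) in rel for (x, y) in rel)

def is_transitive(rel):
    # <x, y> ∈ R and <y, z> ∈ R imply <x, z> ∈ R
    return all((x, z) in rel
               for (x, y) in rel for (y2, z) in rel if y == y2)

D = {1, 2, 3}
leq = {(x, y) for x in D for y in D if x <= y}   # 'is less than or equal to'

print(is_reflexive(leq, D))   # True: every x satisfies x <= x
print(is_symmetric(leq))      # False: 1 <= 2 but not 2 <= 1
print(is_transitive(leq))     # True
```

The `all(...)` generator expressions are the finite analogue of the universal quantifiers in the definitions above.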
3. Sense and reference, intension and extension
People generally consider the German mathematician and logician Gottlob Frege (1848–1925) the
founder of modern semantics. Frege wrote a famous paper discussing the distinction between sense
and reference. Consider the following examples, which we have already seen:
(8)
Hesperus is Phosphorus. (The Evening Star is the Morning Star.)
(9)
Hesperus is Hesperus.
(The Evening Star is the Evening Star.)
(9) is an analytically - and trivially - true sentence. It is not informative at all. Now, suppose that
the science of astronomy proves that Hesperus is in fact Phosphorus; that is, the two terms
Hesperus and Phosphorus in fact denote one and the same individual (= Venus). If the semantic
knowledge of human beings depends crucially on the references of linguistic expressions, as
commonly assumed, (8) would turn out to be as analytically - and trivially - true as (9). However,
our semantic knowledge tells us that this is actually not the case. While (9) is anything but
informative, (8) is fully informative. What goes wrong here?
Frege’s answer is this. What goes wrong is the idea that reference is the only source of the
semantic knowledge of human beings. A linguistic expression E has a reference; in addition, it
has a sense. The reference of E is the individual in the world that E denotes, and the sense of E
is the mode of presentation of that individual, the way the referent is given to us. As a result, (8)
and (9) are referentially identical, but in terms of sense they are totally different statements. In (8)
an equational relation is established between two distinct terms, and it may or may not be true.
For (9) there is no chance of being false; the equational relation is established by logical law.
Analytically true sentences (= tautologies) denote the truth value TRUE. Conversely,
analytically false sentences (= contradictions) denote the truth value FALSE. Thus, since the truth
of (9) is established by logical analyticity, (9) (trivially) denotes the truth value TRUE.
Referentially (8) is identical to (9); thus (8) also denotes the truth value TRUE as its reference, as
long as it is a true statement. A theorem thus comes out: the references of sentences are truth
values, namely truth or falsity (alternatively, 1 or 0). A consequence is that all true sentences have
the same reference, namely truth, and all false sentences have the same reference too, namely falsity.
But what about sense? How are reference and sense to be characterized in semantics?
This has to do with the distinction between extension and intension. Extension is the
reference of a linguistic expression. Based on what we know about the denotation of linguistic
expressions, we may define extension in set-theoretic terms.
(10)
|| Mary || = that individual labeled ‘Mary’.
|| is red || = {x | x is red} (All things that are red in the world (= domain))
|| walk || = {x | x walks} (All things that walk in the world)
|| loves Mary || = {x | x loves Mary} (All things that love Mary in the world)
|| John loves Mary || = Truth or Falsity
Thus the extension of a linguistic expression is the set that the linguistic expression denotes. This
is what we call reference. Sense, on the other hand, is quite different. The term corresponding to
sense in modern semantics is intension. Suppose that there are many possible worlds, possibly
infinite in number. These possible worlds may or may not resemble the one that we live in, namely
the real world. Each world embodies a set of extensions – that is, the value (= set) of || Mary ||, the
value of || red ||, the value of || loves Mary ||… etc. Let each possible world be represented by an
index i.
(11)
The intension of Mary
= {|| Mary ||i, || Mary ||i’, || Mary ||i”…}
= the set of extensions of Mary in various worlds.
The intension of is red
= {|| is red ||i, || is red ||i’, || is red ||i”…}
= the set of extensions of is red in various worlds.
The intension of loves Mary
= {|| loves Mary ||i, || loves Mary ||i’, || loves Mary ||i”…}
= the set of extensions of loves Mary in various worlds.
Let’s look at a concrete example. Suppose that we have a domain of discourse consisting of only
five elements, {a, b, c, d, e}. And the extension (namely the reference, the denotation) of the
predicate is red is {a, c}. This is the extension of the predicate is red in our real world. Call it w0.
But the predicate is red can have different extensions in different possible worlds. Specifically:
(12)
In w1: {a, d, e}
In w2: {b, c}
In w3: {d}
In w4: {a, e}
In w5: {b, c, e, d}
…
Now the intension of the predicate is red is the set that includes all the different extensions of the
predicate is red in different possible worlds. That is:
(13)
The intension of is red:
{ {a, c},
{a, d, e},
{b, c},
{d},
{a, e},
{b, c, e, d} … }
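The intension in (13) can equally be modeled as a mapping from world indices to extensions, so that applying the intension to a world yields the extension at that world (a common alternative to the set-of-extensions view in the text). A Python sketch using the worlds of (12):

```python
# Intension of 'is red' over the domain {a, b, c, d, e}, modeled as a
# mapping from world indices to extensions, following (12).
is_red = {
    'w0': {'a', 'c'},            # the real world
    'w1': {'a', 'd', 'e'},
    'w2': {'b', 'c'},
    'w3': {'d'},
    'w4': {'a', 'e'},
    'w5': {'b', 'c', 'e', 'd'},
}

# The extension at a world is obtained by applying the intension to it.
print(is_red['w0'])          # {'a', 'c'}: the extension in the real world
print('d' in is_red['w3'])   # True: d is red in w3
print('d' in is_red['w0'])   # False: d is not red in the real world
```

The same individual can thus be in the extension of is red at one world and outside it at another, which is exactly what makes the intension more informative than any single extension.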
Now coming back to (8) and (9): since (9) denotes truth by logical analyticity, it is true in all
possible worlds. This is why it is not informative at all. On the other hand, though (8) is true in our
world, it can be false in some other possible worlds. It happens that (8) is true in the world we are
living in, a discovery that adds to our scientific knowledge. The possibility for (8) to be false
makes it an informative statement.
We suggested earlier that to know the meaning of a sentence is to know its truth conditions.
We know (8) is true because it is true of a state of affairs in the real world. The states of affairs in
the world could be otherwise, though. The semantic knowledge of human beings exploits world
knowledge to determine how the world must be for a sentence to be true.
4. Compositional semantics and functional application
It was also Frege who pointed out that the meaning of a linguistic expression is determined by the
meanings of its parts and the way they are combined. This is what we called compositionality in
earlier discussion. Without the principle of compositionality, it is difficult to explain why human
beings can understand novel sentences which they have never encountered before. Linguistic
expressions would be holistic in nature, and human beings would have to rely on memorization to
track the meaning of each linguistic expression. But this is practically impossible: human beings
are finite, yet linguistic expressions are infinite in number.
Exactly like the way we calculate the truth values of a logical expression, we calculate the
meaning of a linguistic expression in a step-by-step fashion. Functions and their saturation are the
basis for meaning calculation of linguistic expressions. Functions are things like the following one:
(14)
f(x) = x2+2x+1 = y
< 0, 1 >
< 1, 4 >
< 2, 9 >
< 3, 16 >
< 4, 25 > …
This function needs an input, x, so as to yield an output, y. The natural numbers 0, 1, 2, 3, 4 …
saturate the variable x and yield the values 1, 4, 9, 16, 25 … for y. Now we suggest that the
predicates in natural language are also functions. They have variables to be saturated, yielding
values.
(15)
|| Mary || = Mary  (Saturated; no variable in need of filling)
|| is red || = {x | x is red}  (Unsaturated; x in need of filling)
|| walk || = {x | x walks}  (Unsaturated)
|| loves Mary || = {x | x loves Mary}  (Unsaturated)
|| John loves Mary || = Truth or Falsity  (Saturated)
Consider the following functions:
(16)
f(x) = [x is red] = the set of things that are red
f(x) = [x walks] = the set of things that walk
f(x) = [x loves Mary] = the set of things that love Mary
Suppose we fill the variable x for the predicate loves Mary with the terms John, Bill, Max, Joe …
We obtain sentences John loves Mary, Bill loves Mary, Max loves Mary, Joe loves Mary, and so on.
These sentences can be true or false. Thus we input terms such as John and Joe and obtain outputs
such as True and False, exactly like the mathematical functions mentioned above.
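This input-output behavior can be mimicked with a characteristic function in Python; the domain and the extension of loves Mary below are stipulated toy data, not given in the text.

```python
# A predicate modeled as a characteristic function from individuals
# to truth values, relative to a small stipulated domain.
domain = {'John', 'Bill', 'Max', 'Joe', 'Mary'}
loves_mary_ext = {'John', 'Max'}        # stipulated extension

def loves_mary(x):
    # Unsaturated predicate: supplying an individual for x yields
    # a truth value, just like the sentences in the text.
    return x in loves_mary_ext

print(loves_mary('John'))   # True:  'John loves Mary' comes out true
print(loves_mary('Bill'))   # False: 'Bill loves Mary' comes out false
```

Filling the variable x with a term corresponds exactly to calling the function on an individual: term in, truth value out.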
At this juncture we need to introduce an important notion, λ-abstraction. This notion involves a
predicate operator, the λ (lambda) operator. If a logical expression is preceded by the operator λ,
this means that the logical expression is a predicate in need of saturation.
(17)
λx [x is red]
= The predicate of [x is red]
= The set of {x | x is red}
λx [x walks]
= The predicate of [x walks]
= The set of {x | x walks}
λx [x loves Mary]
= The predicate of [x loves Mary] = The set of {x | x loves Mary}
To saturate a λ-predicate, we do something called λ-conversion. For example, the sentences below
all involve a subject argument saturating a predicate: the predicate is a λ-predicate, and the
subject argument is substituted for the x variable.
(18)
That flower is red.
John and Mary walk.
Bill loves Mary.
(19)
λx [x is red] (that flower)
⇒ [that flower is red]
λx [x walk] (John and Mary)
⇒ [John and Mary walk]
λx [x loves Mary] (Bill)
⇒ [Bill loves Mary]
We can also reverse the process; this is called λ-abstraction.
(20)
That flower is red
⇒ λx [x is red] (that flower)
John and Mary walk ⇒ λx [x walk] (John and Mary)
Bill loves Mary
⇒ λx [x loves Mary] (Bill)
The reason we need to know about λ-abstraction and λ-conversion is that they are frequently
used in semantic calculation. With the help of λ-abstraction/conversion, we can decompose
linguistic expressions into smaller elements, essentially predicates and arguments, and also build up
sentences. This in fact reflects the principle of compositionality that we mentioned earlier. A
principle related to this is called the principle of functional application, which states that adjacent
elements in a linguistic expression combine either as Argument-Predicate or as Predicate-Argument.
(21)
Functional Application
If [γ α β] is a constituent, then || γ || = || α ||(|| β ||) or || β ||(|| α ||).
We can do a sample analysis to illustrate all these.
(22)
[S [NP John (j)] [VP [V likes (λyλx[x likes y])] [NP Mary (m)]]]
(The tree: S branches into the subject NP John and the VP; the VP branches into the verb likes
and the object NP Mary.)
The semantic computation of (22):
• Step 1: the predicate likes combines with the argument Mary.
  λyλx [x likes y] (m) ⇒ λx [x likes m]
• Step 2: the predicate likes Mary combines with the argument John.
  λx [x likes m] (j) ⇒ [j likes m]
• Conclusion: ‘John likes Mary’ is true iff [j likes m], namely iff the pair <j, m> is in the
  extension of the predicate likes.
  || John likes Mary || = 1 iff <j, m> ∈ {<x, y> | x likes y}
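The two-step computation above can be mirrored with curried functions, where λyλx[x likes y] becomes a function that returns a function. A Python sketch; the extension of likes is stipulated so that <j, m> is its only member:

```python
# The extension of 'likes': a set of ordered pairs, stipulated here.
likes_ext = {('j', 'm')}

# λyλx[x likes y] as a curried function: feed it y, get back a
# predicate still waiting for x.
likes = lambda y: lambda x: (x, y) in likes_ext

likes_mary = likes('m')    # Step 1: saturate y with Mary (m)
result = likes_mary('j')   # Step 2: saturate x with John (j)

print(result)              # True: <j, m> is in the extension of 'likes'
print(likes('m')('Bill'))  # False: <Bill, m> is not in the extension
```

Each application of the curried function is a λ-conversion: the argument replaces the bound variable, and after two conversions a truth value comes out, exactly as in Steps 1 and 2.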
5. Predicates of predicates
The above discussion covers arguments and predicates. These are the most basic elements in
natural language. But there are still other elements in natural language sentences. For example,
we have adverbials like quickly, carefully, and forcefully; we have prepositional phrases like in the
room, to the school, and on the table; we have wh-phrases like who, what, and where; we also have
sentences that don’t seem to have truth values, such as imperatives and questions. Though
we cannot cover all of these here, we will show that at least some of these elements can be
understood by way of function application.
Consider the following sentences.
(23)
John walks slowly.
(24)
John walks to Taipei.
The part John walks in these sentences is easy; it can be treated in the way sketched above. But
what about slowly and to Taipei? These modifiers ascribe further properties to the main predicate
walk. Thus they are predicates of predicates. We can write the logical formulae for them as
follows.
(25)
|| slowly || = λP[slowly(P)]
|| to || = λxλP[to(x, P)]
See how the two sentences above are analyzed:
(26)
1. || slowly || = λP[slowly(P)]
2. || walks || = λx[walk(x)]
3. || walks slowly || = λP[slowly(P)](λx[walk(x)]) = λx[slowly(walk(x))]
4. || John walks slowly || = λx[slowly(walk(x))](J) = [slowly(walk(J))]
(27)
1. || to || = λxλP[to(x, P)]
2. || to Taipei || = λxλP[to(x, P)](Taipei) = λP[to(Taipei, P)]
3. || walks to Taipei || = λP[to(Taipei, P)](λx[walk(x)]) = λx[to(Taipei, walk(x))]
4. || John walks to Taipei || = λx[to(Taipei, walk(x))](J) = [to(Taipei, walk(J))]
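The idea that slowly takes a predicate and returns a new predicate can be sketched with higher-order functions in Python; the extensions below (who walks, who does things slowly) are stipulated for illustration, not from the text.

```python
# Stipulated toy extensions.
walk_ext = {'John', 'Mary'}      # the walkers
slow_walkers = {'John'}          # those whose walking is slow

walk = lambda x: x in walk_ext   # the predicate 'walk'

def slowly(P):
    # A predicate of predicates: takes the predicate P and returns a
    # new predicate true of x iff x P's, and does so slowly.
    return lambda x: P(x) and x in slow_walkers

walks_slowly = slowly(walk)      # || walks slowly ||
print(walks_slowly('John'))      # True
print(walks_slowly('Mary'))      # False: Mary walks, but not slowly
```

Because slowly returns something of the same type as its input (a one-place predicate), the result can saturate a subject argument just like a bare verb, mirroring step 4 of (26).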
4. Intensional Semantics
1. Intension and possible worlds
Let’s once again review what intension is. Intension is opposed to extension. The reference (or
denotation) of a linguistic expression (e.g. a noun, a sentence, etc.) is its extension. In set-theoretic
terms, the extension of a linguistic expression Exp is the set of all things that meet the characteristic
function specified by Exp.
(1)
The extension of is red:
{x | x is red}
The extension of boy:
{x | x is a boy}
The extension of boys:
{x | x is a collection of boys}
The extension of John ate a burger:
{x | x is a truth value}
(True or False)
What is intension, then? In set-theoretic terms, the intension of a linguistic expression Exp is the
set of the extensions of Exp in all possible worlds. Suppose that a linguistic expression E has
extensions Ext0 in possible world w0 (the real world), Ext1 in possible world w1, Ext2 in w2, Ext3 in
w3 … and so on. The intension of E is the set that includes all extensions in all possible worlds.
(2)
The intension of E: {Ext0, Ext1, Ext2, Ext3 … } (possibly infinite)
As a matter of fact, whenever you see the term “intensional,” you should immediately think of
possible worlds. The notion of possible worlds is not only used to characterize the sense (as
opposed to the reference) of a linguistic expression; it is also useful in dealing with some
“displacement” properties of natural language.
2. Modals
Modals in natural language are good examples of the application of intensional semantics. Natural
language has several kinds of modals. The most basic distinction is one between epistemic modals
and root modals. (Root modals are sometimes called deontic modals.) Epistemic modals are about
the possibility or necessity of propositions; root modals are about the obligation, ability, permission,
and volition involved in propositions.
(3)
John may have come to the party.
(Epistemic, possibility)
John must have come to the party.
(Epistemic, necessity)
John will come to the party
(Epistemic, future)
John should come to the party.
(Root, obligation)
John can come to the party.
(Root, ability)
John may come to the party.
(Root, permission)
John is willing to come to the party.
(Root, volition)
Let’s think about this question: if John can come to the party is true, does it mean that John came /
comes to the party? The answer, of course, is no. Thus a modal sentence doesn’t entail that its
proposition is true in the real world. It means that the proposition is true in some possible worlds,
namely worlds that realize John’s ability.
(4)
John can come to the party is true if and only if
∃w [w ∈ WAbility & John comes to the party in w]
(WAbility: The set of possible worlds in which John’s ability is fully realized.)
(5)
John must have come to the party is true if and only if
∀w [w ∈ WNecessity → John comes to the party in w]
(WNecessity: The set of possible worlds in which all things that must have happened in w0
happen.)
(6)
John will come to the party is true if and only if
∀w [w ∈ WFuture → John comes to the party in w]
(WFuture: The set of possible worlds that follow the real world w0 in time.)
Conclusion: Modals are operators (∃ or ∀) over possible worlds of various sorts.
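The conclusion that modals are operators over possible worlds translates directly into existential (any) and universal (all) quantification over a set of worlds. A Python sketch with stipulated toy worlds; the representation of facts as sets of sentences is my own simplification:

```python
# Each world is represented by the set of propositions true in it.
ability_worlds = {
    'w1': {'John comes to the party'},
    'w2': set(),
    'w3': {'John comes to the party', 'John sings'},
}

def can(prop, worlds):
    # Existential modal: ∃w [w ∈ W & prop is true in w]
    return any(prop in facts for facts in worlds.values())

def must(prop, worlds):
    # Universal modal: ∀w [w ∈ W → prop is true in w]
    return all(prop in facts for facts in worlds.values())

print(can('John comes to the party', ability_worlds))    # True: holds in w1, w3
print(must('John comes to the party', ability_worlds))   # False: fails in w2
```

Swapping the set of worlds (ability worlds, necessity worlds, future worlds) while keeping the same quantifier is what distinguishes the different modal readings in (4)-(6).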
3. Conditional
Conditionals are also sentences with modal force. Consider the sentence If John comes to the party,
Mary will leave. If this sentence is true, does it entail that John came, or will come, to the party?
No. Thus this sentence is not talking about something that happens in the real world.
It talks about something that happens in certain possible worlds.
(7)
If John comes to the party, Mary will leave is true if and only if
∀w [w ∈ {wJ | John comes to the party in wJ} → Mary leaves in w]
This sentence can be paraphrased in this way: In all possible worlds w in which John comes to the
party, Mary leaves in w.
A special type of conditional, the counterfactual conditional, is particularly suitable for an
account based on possible world semantics.
(8)
John didn’t buy the book. But if John had bought the book, Mary would be happy.
(9)
If John had bought the book, Mary would be happy is true if and only if
∀w [w ∈ {w’ | w’ differs minimally from w0 in that John bought the book in w’} →
Mary is happy in w]
Conditionals are about entailment relationships across possible worlds. The consequent clause of a
conditional is itself a modal sentence; the antecedent clause serves to restrict the possible worlds
quantified over.
4. Belief contexts
Suppose that Superman is Clark Kent. This is a secret, however, and only a few people know it.
Now suppose that we have a sentence like this:
(10)
John said that Superman saved the US president.
Can this sentence mean John said that Clark Kent saved the US president? It is perhaps okay. But
now consider the following two sentences:
(11)
John believes that Superman saved the US president.
(12)
John believes that Clark Kent saved the US president.
Can the first sentence entail the second sentence? No, it cannot. The reason is that John may not
know that Superman is Clark Kent. That is, in John’s belief worlds, Superman and Clark Kent may
not be identified.
So we say that belief contexts are intensional, in the sense that belief contexts involve possible
worlds.
(13)
John believes that Superman saved the US president is true if and only if
in w0 John believes that [in wBelief Superman saved the US president]
(14)
John believes that Clark Kent saved the US president is true if and only if
in w0 John believes that [in wBelief Clark Kent saved the US president]
Verbs like say don’t seem to involve any belief and thus don’t need to resort to possible worlds
for their semantics. They yield extensional contexts, as opposed to the intensional contexts that
verbs like believe evoke.
5. Tense
The treatment of tense has a direct bearing on possible worlds too. We may arrange possible
worlds in many different ways. For example, the possible worlds involved in John should go to the
party and those involved in John should win the contest are different, so different groupings of
possible worlds are necessary for an appropriate understanding of different kinds of modality. In
addition, we can put possible worlds into a certain order. For example, what makes the following
two sentences different?
(15)
John is happy.
(16)
John was happy.
The time dimension itself can give rise to different possible worlds. The above two sentences can
be analyzed in such a way that:
(17)
(15) is true iff John is happy in w0; (16) is true iff John is happy in some w1, where w1 precedes
w0 in time.
Thus we may set up a collection of possible worlds and order them along the time dimension. This is
one way to treat the semantics of tense.
There are also other ways. Some semanticists simply take tense itself as a primitive in semantic
analysis, doing away with possible worlds. This would give you something like this:
(18)
(15) is true iff John is happy at t0; (16) is true iff John is happy at some t1, where t1 precedes t0.
But subjunctives seem to be a better candidate for the possible-world analysis. The subjunctive is a
special kind of tense, and it usually denotes some sort of future event. For subjunctives it is not
enough to posit tense only; subjunctives usually involve a certain modal force and therefore need to
take the notion of possible worlds into consideration.
(19)
John requests that Mary buy the book.
(20)
In w0 John requests that [in wR Mary buys the book], where w0 precedes wR.
(wR: a possible world in which John’s request is satisfied.)
6. Conclusion
Intensional semantics is a very important part of semantics and the philosophy of language. It is not
just a play of symbols and abstract concepts; it is crucial for understanding linguistic expressions
that do not denote things or propositions that exist or happen in the real world. This
“displacement” property of human language is special in the natural world, unparalleled by any
other symbolic system, man-made or natural.
5. Lexical Semantics
1. Decomposition of lexical meaning
What we have been looking at is called truth-conditional semantics. But sometimes we need
different approaches to understand the meaning of linguistic expressions. This is particularly clear
with lexical items. For example, consider the following Mandarin words:
(1)
殺 ‘kill’, 兇殺 ‘violent killing’, 謀殺 ‘murder’, 暗殺 ‘assassinate’, 刺殺 ‘assassinate’
We know these words are somehow related; they have similar meanings. But their meanings are
just similar, not identical. They can be used in different contexts denoting different situations.
When we say 張三殺李四, we simply mean that Zhangsan is committed to the attempt or action of
killing Lisi. But if we say 張三謀殺李四, then the situation is more serious: Zhangsan is
committed to the attempt or action of killing Lisi, and, in addition, Zhangsan must have planned the
attempt or action. In other words, Zhangsan must have a plan and put the plan into an attempt or
action. This is what 謀殺 means. 謀殺 can be used as a noun, and as a noun it means basically the
same as its verbal counterpart. 兇殺, on the other hand, can only be used as noun; it is
ungrammatical to say *張三兇殺李四. 謀殺 and 兇殺 are different in meaning, too. Like 謀殺,
兇殺 is a crime, but 兇殺 doesn’t entail a pre-planning. As long as there is violence and someone
is committed to the death of another one through this violence, there is a 兇殺. There is no 兇殺
that involves no violence. But 謀殺 doesn’t need violence; if Zhangsan poisoned Lisi to death by
some secret means, Zhangsan has committed 謀殺 but not 兇殺. Next we turn to 暗殺 and 刺殺.
暗殺 entails 殺, but 殺 doesn’t entail 暗殺. If 張三暗殺李四 is true, then Zhangsan must have killed
Lisi and, furthermore, there must be a political purpose in such deed, and Lisi must be someone
with an important political standing. In fact the meaning of 暗殺 is quite narrow. If Lisi is the
president of a big company and Zhangsan plans to kill him so as to wipe out a competitor in a big
business deal, can we say 張三暗殺李四? No, because the whole thing is not political in essence.
But 刺殺 is somewhat looser in meaning than 暗殺. If 張三刺殺李四 is true, then definitely 張三
謀殺李四 is true; but more than that, 張三 must have a purpose that involves some bigger deal.
Suppose, for instance, that Zhangsan hates Lisi and plans to kill him: we can say 張三謀殺李四,
but we cannot say 張三刺殺李四. On the other hand, in the scenario depicted above, namely
Zhangsan kills Lisi to
eliminate a business competitor, 張三刺殺李四 may be okay. Of course, if Zhangsan kills Lisi
with a political purpose and Lisi is a big shot, then 張三刺殺李四 is fine too. Consider the passive
sentence 李四被刺; this sentence can only be understood in a way similar to 暗殺.
The above discussion tells us one thing: sometimes it is necessary to look into the fine-grained
semantics of lexical items to grasp their meanings. The analyses of the words in (1) indicate the
following:
(2)
a.
殺
A causes B dead by some means directly associated with A’s ability.
b.
兇殺
A causes B dead by some brutal force.
c.
謀殺
A causes B dead with a prior plan to make B dead.
d.
暗殺
A causes B dead with a political agenda and B is a political big shot.
e.
刺殺
A causes B dead with a purpose to enhance certain benefit associated with B.
It thus appears that human beings may encode different cognitive notions in lexical forms and,
by using them, express very specific concepts. As a result we sometimes need to look into the
ingredients that compose lexical items in order to grasp their meanings. This is called the
decompositional approach to lexical meaning. This approach is important in the study of lexical
semantics.
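The decompositional idea can be sketched in code. The component labels below (CAUSE-DEAD, PLANNED, and so on) are illustrative assumptions of mine, not an analysis the text commits to; entailment between two words is modeled simply as inclusion between their component sets.

```python
# A toy sketch of decompositional lexical semantics (not a real analysis):
# each verb of the Mandarin "kill" family is represented as a set of
# hypothetical meaning components, and entailment between two words is
# modeled as inclusion between their component sets.
COMPONENTS = {
    "sha":      {"CAUSE-DEAD"},                               # 殺
    "xiongsha": {"CAUSE-DEAD", "VIOLENCE"},                   # 兇殺
    "mousha":   {"CAUSE-DEAD", "PLANNED"},                    # 謀殺
    "ansha":    {"CAUSE-DEAD", "PLANNED", "POLITICAL"},       # 暗殺
    "cisha":    {"CAUSE-DEAD", "PLANNED", "LARGER-PURPOSE"},  # 刺殺
}

def entails(word_a, word_b):
    """word_a entails word_b iff every component of word_b is among word_a's."""
    return COMPONENTS[word_b] <= COMPONENTS[word_a]
```

On this sketch, 暗殺 entails 殺 but not vice versa, mirroring the entailment pattern discussed above.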
2. Semantic features
Sometimes it is helpful to use features to characterize the meanings of words and their relations
with other words.
(3)
Man
[+animate], [+human], [+adult], [+male]
Woman
[+animate], [+human], [+adult], [-male]
Boy
[+animate], [+human], [-adult], [+male]
Girl
[+animate], [+human], [-adult], [-male]
(4)
Tiger
[+animate], [-human], [±adult], [+male]
Tigress
[+animate], [-human], [±adult], [-male]
Cow
[+animate], [-human], [+adult], [±male]
Calf
[+animate], [-human], [-adult], [±male]
We can use the features [±animate], [±human], [±adult], and [±male] to cross-classify the words in
(3) and (4). What are the benefits of doing this? One benefit is that these feature decompositions
help us to establish analytic truths in natural languages.
(5)
a.
If John is a boy, then John is a human.
(Legitimate)
b.
If John is a boy, then John is a tiger.
(Illegitimate)
c.
If Jane is not a human, then Jane is a tiger.
(Illegitimate)
d.
If Jane is a calf, then Jane is not an adult.
(Legitimate)
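The analytic truths in (5) can be checked mechanically from feature matrices like (3) and (4). The sketch below is only illustrative; the feature values follow the tables above, with unspecified (±) features simply left out of an entry.

```python
# Feature matrices following (3)-(4); unspecified (±) features are omitted.
FEATURES = {
    "man":   {"animate": +1, "human": +1, "adult": +1, "male": +1},
    "woman": {"animate": +1, "human": +1, "adult": +1, "male": -1},
    "boy":   {"animate": +1, "human": +1, "adult": -1, "male": +1},
    "girl":  {"animate": +1, "human": +1, "adult": -1, "male": -1},
    "tiger": {"animate": +1, "human": -1, "male": +1},   # [±adult] unspecified
    "calf":  {"animate": +1, "human": -1, "adult": -1},  # [±male] unspecified
}

def compatible(word, feature, value):
    """True unless the word's entry specifies the opposite value."""
    return FEATURES[word].get(feature, value) == value
```

For instance, (5a) holds because a boy is specified [+human], while (5d) holds because a calf is specified [-adult]; a tiger, unspecified for [±adult], is compatible with either value.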
Furthermore, such features as [±animate] and [±human] have empirical motivations in natural
language. Look at the following English sentences:
(6)
a.
Mary killed John.
b.
The tiger killed John.
c.
The car accident killed John.
d.
The storm killed John.
Now compare (6) with the following Mandarin sentences:
(7)
a.
張三殺死了李四
b.
*那隻老虎殺死了李四
c.
*那場車禍殺死了李四
d.
*那場暴風雨殺死了李四
In English, many different things can be associated with the verb kill - a person, an animal, a
natural force, an event, etc. In Mandarin, however, the verb shasi ‘kill’ can only be used with a
human being as the subject. That is, shasi can only take a [+human] noun as the external argument.
This shows that the feature [±human] has a grammatical effect on the semantics of Mandarin.
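The selectional contrast between (6) and (7) can be stated as a feature check on the external argument. The fragment below is a hypothetical toy grammar, not a claim about any real parser: shasi demands a [+human] subject, while English kill imposes no such restriction.

```python
# Hypothetical subject entries with their semantic features.
SUBJECT_FEATURES = {
    "Zhangsan":         {"animate", "human"},
    "that tiger":       {"animate"},
    "the car accident": set(),
    "the storm":        set(),
}

def shasi_ok(subject):
    """Mandarin shasi 'kill' selects a [+human] external argument."""
    return "human" in SUBJECT_FEATURES[subject]

def kill_ok(subject):
    """English kill accepts any of these subjects."""
    return subject in SUBJECT_FEATURES
```

This reproduces the pattern in (6)-(7): all four subjects are fine with kill, but only Zhangsan passes the check for shasi.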
3. Decomposition of verbs
Verbs can be decomposed as well. Before decomposing verbs, we need some understanding of
the classification of verbs. There are many ways to classify verbs; we usually
classify verbs along their event properties or aspectual properties. A classification of verbs that is
often referred to is the following:
(8)
Four types of verbs in English
a.
States: verbs that denote static situations that are not affected by time.
Examples: like, resemble, be red, etc.
b.
Activities: verbs that denote actions which can continue in time.
Examples: run, hit, push, study etc.
c.
Accomplishments: verbs that denote actions that develop in time until an end point
is reached.
Examples: eat, bake, learn, etc.
d.
Achievements: verbs that denote actions that involve an abrupt change as the end of
the action.
Examples: die, arrive, win, etc.
Different types of verbs have different aspectual properties. For example, states are “atemporal”,
so they cannot occur with progressive aspect.
(9)
a.
*John is liking Mary.
b.
John is pushing a cart.
c.
John is eating a burger.
d.
John is winning.
Here is another example. Accomplishments and achievements both involve some sort of
culmination, namely a change of state resulting from the action. As such they are holistic in a
sense and cannot be cut into smaller units. Activities, on the other hand, denote actions that
continue in time, and therefore they can be cut into smaller pieces that are still the same activities.
States are atemporal, so they cannot be “cut” in the first place. Consider the following examples:
(10)
a.
*If John likes Mary from 8:00 am to 10:00 am, then John likes Mary at 9:00 am.
b.
If John pushed a cart from 8:00 am to 10:00 am, then John pushed a cart at 9:00 am.
c.
*If John baked a cake from 8:00 am to 10:00 am, then John baked a cake at 9:00 am.
d.
*If John won from 8:00 am to 10:00 am, then John won at 9:00 am.
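The pattern in (10) reflects what is often called homogeneity, or the subinterval property: activities hold of subintervals of any interval at which they hold, while accomplishments and achievements do not. The sketch below simply records this property per verb class; the flags restate the judgments in (10), they are not derived from anything.

```python
# Homogeneity flags per verb class, restating the judgments in (10):
# an activity holding over an interval holds over its subintervals,
# whereas the other classes do not license that inference.
HOMOGENEOUS = {
    "state": False,          # states are atemporal and cannot be "cut"
    "activity": True,        # pushed a cart 8-10  =>  pushed a cart at 9
    "accomplishment": False, # baked a cake 8-10 does not give: baked a cake at 9
    "achievement": False,    # won 8-10 does not give: won at 9
}

def subinterval_inference_valid(verb_class):
    return HOMOGENEOUS[verb_class]
```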
We can take the four types of verbs as results of the clustering of different event predicates.
Essentially, a state verb is a state, an activity verb is a DO predicate, an achievement verb is a
BECOME predicate, and an accomplishment verb is a CAUSE predicate.
(10)
State            S                      A state holds.
Activity         DO (S)                 Someone holds onto a state.
Achievement      BECOME (S)             A state comes into existence.
Accomplishment   CAUSE to BECOME (S)    Someone causes a state into existence.
Here are some concrete examples:
(11)
John likes Mary
(John is in the state of liking Mary)
That flower is red
(That flower is in the state of being red)
(12)
John ran fast
(John held onto the action of running and the running was fast)
John cried
(John held onto the action of crying)
(13)
John died
(John came into the state of death)
John lost his wallet
(John came into the state of having lost his wallet)
(14)
John baked a cake
(John caused a cake to come into existence by way of baking)
John built a house
(John caused a house to come into existence by way of building)
A classic example of decomposition of verbs is kill:
(15)
John killed Mary = John CAUSE Mary to BECOME NOT alive.
A piece of evidence for the functions of these event predicates is the transitivity alternation in
English.
(16)
a.
The window broke.
b.
John broke the window.
c.
John [CAUSE the window broke]
(17)
a.
The ball rolls down the hill.
b.
John rolls the ball down the hill.
c.
John [CAUSE the ball rolls down the hill]
(18)
a.
The ship sank.
b.
The navy sank the ship.
c.
The navy [CAUSE the ship sank]
Also, in English we have a phenomenon called denominalization, namely nouns used as verbs:
(19)
a.
The books are on the shelf.
b.
John shelves the books.
c.
John [CAUSE the books to BECOME on the shelf]
(20)
a.
The saddle is on the horse.
b.
John saddles the horse.
c.
John [CAUSE the horse to BECOME have a saddle]
These phenomena show that event predicates can be a good tool in characterizing the meanings of
verbs.
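The event-predicate templates and the transitivity alternation in (16)-(18) can be sketched as structure-building functions. The tuple encoding below is an arbitrary illustrative choice; the point is only that the causative member of the alternation is the inchoative representation wrapped in CAUSE.

```python
# Event-predicate templates as simple tuple structures (illustrative encoding).
def become(state):
    """BECOME (S): a state comes into existence, e.g. 'The window broke.'"""
    return ("BECOME", state)

def cause(agent, event):
    """CAUSE: someone brings an event about, e.g. 'John broke the window.'"""
    return ("CAUSE", agent, event)

# The causative member of the alternation is the inchoative plus CAUSE:
intransitive = become(("broken", "the window"))      # (16a)
transitive = cause("John", intransitive)             # (16b)/(16c)
```

The same two functions generate the representations for (17)-(20); only the state and the participants change.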
6. Event Semantics in English and Chinese
1. What is event semantics?
The above discussion of the semantics of sentences makes use of standard logic and set theory as
the meta-language. The language of logic and set theory includes a very limited vocabulary, such as
the operators ∧, ∨, ∀, ∈, and the propositional and predicate symbols. An implication of this is that
the content of natural language semantics is limited to what these operators and symbols can
express. But this may not be correct, since, in the previous chapter, we saw that it is sometimes
necessary to decompose lexical items into smaller elements that express components of human
cognition. So natural language is in fact richer than what the language of logic and set theory can
express.
One thing that has not been included in standard logic and set theory is the notion of event. What
is an event? The philosopher Donald Davidson observed the following sentence.
(1)
John flew the spaceship to Mars.
Look at the PP to Mars. Our semantic intuition tells us that it is a goal phrase. But what is it a goal
of? Is it the goal of John, or is it the goal of the spaceship? No, it is neither the goal of John nor of
the spaceship. To be precise, the PP to Mars is the goal of the action of flying; John is only the
performer of the action, and the spaceship the vessel that carries out the action. We need to
incorporate this intuition into the semantics of the sentence (1). If we follow what we have learned
in the previous discussion, then we might have (2) as the semantic representation of (1):
(2)
to_Mars (flew(John, the_spaceship))
This representation only says that John and the spaceship are the arguments of the two-place
predicate flew, and that the PP to Mars is a predicate of the predicate flew. This doesn’t capture the
intuition that we mentioned above. According to Donald Davidson, (1) should be represented as in
(3):
(3)
There is an event such that the event is a flying event (which occurred in some past), and
the agent of the event is John, and the theme of the event is the spaceship, and the goal of
the event is Mars.
The semanticist Terry Parsons formalizes this idea and suggests that expressions such as (3) can
be reformulated as:
(4)
∃e (Flying(e) ∧ Agent(e) = John ∧ Theme(e) = the spaceship ∧ Goal(e) = to Mars)
This captures the intuition that to Mars is the goal of the action (event) of flying, rather than of
John or the spaceship.
Terry Parsons provides further evidence for the event argument in natural language. Consider
the following inference.
(5)
Premise 1:
Burning consumes oxygen.
Premise 2:
Mary burned wood.
Inference:
Oxygen was consumed.
This inference looks intuitively correct. But the problem with this inference is that it doesn’t really
fit the format of Modus Ponens – burning in premise 1 is different from burned in premise 2;
they are not identical in form. No matter how you apply the rules of inference in standard logic,
you just cannot get the conclusion, even though the conclusion is intuitively correct. The solution
to this problem, notes Terry Parsons, is that the notion of event is at play here.
(5′)
Premise 1:
∀e (Burning(e) → ∃e′ (Consuming(e′) ∧ Theme(e′) = oxygen))
Premise 2:
∃e (Burning(e) ∧ Agent(e) = Mary ∧ Patient(e) = wood)
From these two premises we can easily infer that oxygen was consumed in the event of Mary’s
burning wood. From premise 2 we infer that there is a burning event, and from premise 1 we know
that any burning event consumes oxygen. Therefore oxygen was consumed in the event of Mary’s
burning wood.
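Once events are made explicit, the Parsons-style inference can be run mechanically. In the sketch below an event is just a dictionary of thematic roles, a deliberately crude stand-in for the logical representation, and premise 1 becomes a rule mapping any burning event to a consuming event.

```python
# Events as dictionaries of thematic roles (a crude stand-in for the logic).
def burning_rule(event):
    """Premise 1: every burning event comes with an oxygen-consuming event."""
    if event.get("kind") == "burning":
        return {"kind": "consuming", "Theme": "oxygen"}
    return None

# Premise 2: there is a burning event with Mary as Agent and wood as Patient.
premise2 = {"kind": "burning", "Agent": "Mary", "Patient": "wood"}

# Applying the rule yields a witness for "oxygen was consumed".
conclusion = burning_rule(premise2)
```

The inference goes through precisely because both premises quantify over the same kind of object, an event, rather than over superficially different predicates burning and burned.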
2. Advantages of event semantics
There are many advantages to look at natural language semantics in terms of the notion event. First,
we saw that adverbials and PPs are predicates of predicates. For example, John walks slowly and
John walks to Taipei are represented in the following way:
(6)
|| John walks slowly || = [slowly(walk(J))]
|| John walks to Taipei || = [to(Taipei, walk(J))]
But what these representations mean is not intuitively clear. Employing event semantics, the
meanings of the two sentences become very clear.
(7)
||John walks slowly|| is true ↔ ∃e [walking(e) & Agent(e) = John & slow(e)]
(There is an event e such that e is walking and the agent of e is John and e is slow)
||John walks to Taipei|| is true ↔ ∃e [walking(e) & Agent(e) = John & Goal(e) = Taipei]
(There is an event e such that e is walking and the agent of e is John and the goal of e is
Taipei)
Thus event semantics makes much more sense than the purely logical language.
A second advantage of event semantics is that, in fact, we find event arguments everywhere in
natural language. Look at the following examples.
(8)
A beautiful dancer
A fast driver
These two expressions are ambiguous. A beautiful dancer can be a dancer who is beautiful; it can
also be a dancer who dances beautifully (though the person himself or herself is not beautiful).
Likewise, a fast driver can be a driver who is fast (in running, let’s say), and it can also be a driver
who drives fast. In other words, the adjectives in (8), beautiful and fast, can modify the individual
or the action represented by the noun (dancing and driving). The former reading is the ordinary
modification reading, and the latter reading is the reading in which the event argument contained in
the noun is modified. So nouns like dancer and driver must have an event argument as part of their
lexical properties, and, furthermore, this event argument can be modified by syntactic modifiers.
Another advantage of event semantics is that it helps us to clarify what is meant by verbs like
kill. We saw that many different kinds of nouns can be the subject of the verb kill:
(9)
a.
Mary killed John.
b.
The accident killed John.
c.
This gun killed John.
These three sentences have the same syntactic form, namely Subject-killed-Object, but the
meanings of the subjects are clearly different. How can we clarify the differences? Event
semantics provides a good way to do this. The semantic representation of these three sentences, in
terms of event semantics, will be as follows.
(10)
a.
∃e (Killing(e) & Patient(e) = John & Agent(e) = Mary)
b.
∃e (Killing(e) & Patient(e) = John & ∃e’ (Accident(e’) & e’ CAUSE e))
c.
∃e (Killing(e) & Patient(e) = John & Instrument(e) = this gun)
In each of these representations, the contribution of the subject is spelled out explicitly. With the
use of event arguments, the semantic relations between the subject and the predicate in these
sentences can be made very clear.
A side note here. The event argument is not realized in the same way in different languages.
Consider the following Mandarin examples.
(11)
很漂亮的舞者
This NP has only one reading, namely the individual-modification reading. It doesn’t have the
event-modification reading. In other words, (11) only denotes a dancer who is beautiful; it doesn’t
denote a dancer who dances beautifully. Notice that this has nothing to do with the modifier
很漂亮, because this modifier can be used to modify dancing:
(12)
他跳舞跳得很漂亮
To express the event-modification reading, we need to spell out the verb, as in the following:
(13)
[ 跳舞跳得很漂亮 ] 的舞者
It is likely that nouns in Chinese do not have an event argument. This doesn’t mean that event
semantics doesn’t hold in Chinese; it just means that the event argument in Chinese needs to be
realized by verbs. This yields the following conclusion: the event argument is a lexical property in
English, but a syntactic property in Chinese. One piece of evidence for this guess is the following
contrast. In English we can say things like:
(14)
John began a book.
This sentence can be understood as ‘John began [to write / read] a book’, with the verb omitted.
How is this possible? First of all, the verb begin selects an event as complement. See the
following examples:
(15)
a.
John began [to read the book]
b.
John began [the discussion]
c.
John began [the game]
d.
*John began [the bicycle]
e.
*John began [the sincerity]
Discussion and game are event nouns; that is, they denote events. An infinitival clause denotes an
event too. On the other hand, bicycle and sincerity are difficult to understand as denoting events,
so they are bad as complements of begin. Now why is book okay in the complement position of
begin? That’s because it is a common thing in our society that people read or write books. So,
suppose that book has an event argument in it which denotes the action of writing or reading. (14)
is okay because the verb begin “looks into” the event argument in the noun book and is satisfied
by it. This is why (14) can be understood as ‘John began [to write / read] a book’.
Now look at the corresponding examples in Mandarin.
(16)
*張三開始那本書
This sentence is ungrammatical. To express the intended meanings, we need to spell out the full
verbs:
(17)
張三開始 [ 讀 / 寫 ] 那本書
The ungrammaticality of (16) can be explained if we assume that nouns in Chinese don’t have an
inherent event argument. This is in line with our suggestion above.
Notice that if a noun inherently denotes an event, it should be able to serve as the complement of
the verb 開始. This is indeed the case, as the following examples show.
(18)
a.
張三開始那場比賽
b.
美國開始對伊拉克的戰爭
Thus the verb 開始, like the English verb begin, takes an event as its complement. The only
difference between the two languages is that nouns in English can have an event argument, but
nouns in Chinese don’t (except those that inherently denote events).