3.1 Basic Concepts in Deductive Reasoning
As noted in Chapter 2, at the broadest level there are two types of arguments: deductive and inductive.
The difference between these types is largely a matter of the strength of the connection between
premises and conclusion. Inductive arguments are defined and discussed in Chapter 5; this chapter
focuses on deductive arguments. In this section we will learn about three central concepts: validity,
soundness, and deduction.
Validity
Deductive arguments aim to achieve validity, which is an extremely strong connection between the
premises and the conclusion. In logic, the word valid is only applied to arguments; therefore, when the
concept of validity is discussed in this text, it is solely in reference to arguments, and not to claims,
points, or positions. The word valid may have other uses in other fields, but in logic, validity is a strict
notion that has to do with the strength of the connection between an argument’s premises and
conclusion.
To reiterate, an argument is a collection of sentences, one of which (the conclusion) is supposed to
follow from the others (the premises). A valid argument is one in which the truth of the premises
absolutely guarantees the truth of the conclusion; in other words, it is an argument in which it is
impossible for the premises to be true while the conclusion is false. Notice that the definition of valid
does not say anything about whether the premises are actually true, just whether the conclusion could
be false if the premises were true. As an example, here is a silly but valid argument:
Everything made of cheese is tasty.
The moon is made of cheese.
Therefore, the moon is tasty.
No one, we hope, actually thinks that the moon is made of cheese. You may or may not agree that
everything made of cheese is tasty. But you can see that if everything made of cheese were tasty, and if
the moon were made of cheese, then the moon would have to be tasty. The truth of that conclusion
simply logically follows from the truth of the premises.
Here is another way to better understand the strictness of the concept of validity: You have probably
seen some far-fetched movies or read some bizarre books at some point. Books and movies have magic,
weird science fiction, hallucinations, and dream sequences—almost anything can happen. Imagine that
you were writing a weird, bizarre novel, a novel as far removed from reality as possible. You certainly
could write a novel in which the moon was made of cheese. You could write a novel in which everything
made of cheese was tasty. But you could not write a novel in which both of these premises were true,
but in which the moon turned out not to be tasty. If the moon were made of cheese but was not tasty,
then there would be at least one thing that was made of cheese and was not tasty, making the first
premise false.
Therefore, if we assume, even hypothetically, that the premises are true (even in strange hypothetical
scenarios), it logically follows that the conclusion must be as well. Therefore, the argument is valid. So
when thinking about whether an argument is valid, think about whether it would be possible to have a
movie in which all the premises were true but the conclusion was false. If it is not possible, then the
argument is valid.
Here is another, more realistic, example:
All whales are mammals.
All mammals breathe air.
Therefore, all whales breathe air.
Is it possible for the premises to be true and the conclusion false? Well, imagine that the conclusion is
false. In that case there must be at least one whale that does not breathe air. Let us call that whale Fred.
Is Fred a mammal? If he is, then there is at least one mammal that does not breathe air, so the second
premise would be false. If he isn’t, then there is at least one whale that is not a mammal, so the first
premise would be false. Again, we see that it is impossible for the conclusion to be false and still have all
the premises be true. Therefore, the argument is valid.
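The "no possible counterexample" test can be made concrete with a short sketch (the book itself contains no code; Python, the numbered individuals, and the tiny three-element universe are all illustrative assumptions). For a simple categorical form, we can model each category as a set and exhaustively search every assignment over a small universe for a case in which the premises are true and the conclusion is false:

```python
from itertools import product

def subsets(universe):
    """Every subset of a finite universe, as frozensets."""
    return [frozenset(x for x, b in zip(universe, bits) if b)
            for bits in product([False, True], repeat=len(universe))]

def all_are(xs, ys):
    """'All X are Y' is true when X is a subset of Y."""
    return xs <= ys

# Try every way of choosing the three categories over a 3-element universe
# and collect any assignment that makes both premises true while the
# conclusion is false.
universe = [0, 1, 2]
counterexamples = [
    (whales, mammals, air_breathers)
    for whales in subsets(universe)
    for mammals in subsets(universe)
    for air_breathers in subsets(universe)
    if all_are(whales, mammals)              # All whales are mammals.
    and all_are(mammals, air_breathers)      # All mammals breathe air.
    and not all_are(whales, air_breathers)   # ...yet not all whales breathe air?
]
print(counterexamples)  # [] -- no counterexample found
```

Because a subset of a subset is always a subset, the search finds nothing: no way of assigning the categories makes both premises true and the conclusion false, which is exactly what the Fred-the-whale reasoning showed.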
Here is an example of an invalid argument:
All whales are mammals.
No whales live on land.
Therefore, no mammals live on land.
In this case we can tell that the truth of the conclusion is not guaranteed by the premises because the
premises are actually true and the conclusion is actually false. Because validity means that it is
impossible for the premises to be true and the conclusion false, we can be sure that an argument in
which the premises are actually true and the conclusion is actually false must be invalid. Here is a trickier
example of the same principle:
All whales are mammals.
Some mammals live in the water.
Therefore, some whales live in the water.
[Figure: A wet, tree-lined road. Caption: Consider the following argument: "If it is raining, then the
streets are wet. The streets are wet. Therefore, it is raining." Is this a valid argument? Could there be
another reason why the road is wet?]
This one is trickier because both premises are true, and the conclusion is true as well, so many people
may be tempted to call it valid. However, what is important is not whether the premises and conclusion
are actually true but whether the premises guarantee that the conclusion is true. Think about making a
movie: Could you make a movie that made this argument’s premises true and the conclusion false?
Suppose you make a movie that is set in a future in which whales move back onto land. It would be
weird, but not any weirder than other ideas movies have presented. If seals still lived in the water in this
movie, then both premises would be true, but the conclusion would be false, because all the whales
would live on land.
Because we can create a scenario in which the premises are true and the conclusion is false, it follows
that the argument is invalid. So even though the conclusion isn’t actually false, it’s enough that it is
possible for it to be false in some situation that would make the premises true. This mere possibility
means the argument is invalid.
Soundness
Once you understand what valid means in logic, it is very easy to understand the concept of soundness.
A sound argument is just a valid argument in which all the premises are true. In defining validity, we saw
two examples of valid arguments; one of them was sound and the other was not. Since both examples
were valid, the one with true premises was the one that was sound.
We also saw two examples of invalid arguments. Both of those are unsound simply because they are
invalid. Sound arguments have to be valid and have all true premises. Notice that since only arguments
can be valid, only arguments can be sound. In logic, the concept of soundness is not applied to
principles, observations, or anything else. The word sound in logic is only applied to arguments.
Here is an example of a sound argument, similar to one you may recall seeing in Chapter 2:
All men are mortal.
Bill Gates is a man.
Therefore, Bill Gates is mortal.
There is no question about the argument’s validity. Therefore, as long as these premises are true, it
follows that the conclusion must be true as well. Since the premises are, in fact, true, we may infer
that the conclusion is true too.
It is important to note that having a true conclusion is not part of the definition of soundness. If we were
required to know that the conclusion was true before deciding whether the argument is sound, then we
could never use a sound argument to discover the truth of the conclusion; we would already have to
know that the conclusion was true before we could judge it to be sound. The magic of how deductive
reasoning works is that we can judge whether the reasoning is valid independent of whether we know
that the premises or conclusion are actually true. If we also notice that the premises are all true, then
we may infer, by the power of pure reasoning, the truth of the conclusion.
Therefore, knowledge of the truth of the premises and the ability to reason validly enable us to arrive at
some new information: that the conclusion is true as well. This is the main way that logic can add to our
bank of knowledge.
Although soundness is central in considering whether to accept an argument’s conclusion, we will not
spend much time worrying about it in this book. This is because logic really deals with the connections
between sentences rather than the truth of the sentences themselves. If someone presents you with an
argument about biology, a logician can help you see whether the argument is valid—but you will need a
biologist to tell you whether the premises are true. The truth of the premises themselves, therefore, is
not usually a matter of logic. Because the premises can come from any field, there would be no way for
logic alone to determine whether such premises are true or false. The role of logic—specifically,
deductive reasoning—is to determine whether the reasoning used is valid.
Deduction
You have likely heard the term deduction used in other contexts: As Chapter 2 noted, the detective
Sherlock Holmes (and others) uses deduction to refer to any process by which we infer a conclusion
from pieces of evidence. In rhetoric classes and other places, you may hear deduction used to refer to
the process of reasoning from general principles to a specific conclusion. These are all acceptable uses of
the term in their respective contexts, but they do not reflect how the concept is defined in logic.
In logic, deduction is a technical term. Whatever other meanings the word may have in other contexts,
in logic, it has only one meaning: A deductive argument is one that is presented as being valid. In other
words, a deductive argument is one that is trying to be valid. If an argument is presented as though it is
supposed to be valid, then we may infer it is deductive. If an argument is deductive, then the argument
can be evaluated in part on whether it is, in fact, valid. A deductive argument that is not found to be
valid has failed in its purpose of demonstrating its conclusion to be true.
In Chapters 5 and 6, we will look at arguments that are not trying to be valid. Those are inductive
arguments. As noted in Chapter 2, inductive arguments simply attempt to establish their conclusion as
probable—not as absolutely guaranteed. Thus, it is not important to assess whether inductive
arguments are valid, since validity is not the goal. However, if a deductive argument is not valid, then it
has failed in its goal; therefore, for deductive reasoning, validity is a primary concern.
Consider someone arguing as follows:
All donuts have added sugar.
All donuts are bad for you.
Therefore, everything with added sugar is bad for you.
[Figure: Two men engage in a discussion. Caption: Interpreting the intention of the person making an
argument is a key step in determining whether the argument is deductive.]
Even though the argument is invalid—exactly why this is so will be clearer in the next section—it seems
clear that the person thinks it is valid. She is not merely suggesting that maybe things with added sugar
might be bad for you. Rather, she is presenting the reasoning as though the premises guarantee the
truth of the conclusion. Therefore, it appears to be an attempt at deductive reasoning, even though this
one happens to be invalid.
Because whether an argument counts as deductive depends on the author's intention, deciding whether
something is a deductive argument requires a bit of interpretation: we have to figure out what the
person giving the argument is trying to do. As noted briefly in Chapter 2, we ought to seek
to provide the most favorable possible interpretation of the author’s intended reasoning. Once we know
that an argument is deductive, the next question in evaluating it is whether it is valid. If it is deductive
but not valid, we really do not need to consider anything further; the argument fails to demonstrate the
truth of its conclusion in the intended sense.
3.2 Evaluating Deductive Arguments
[Sidebar: In addition to his well-known literary works, Lewis Carroll wrote several mathematical works,
including three books on logic: Symbolic Logic Parts 1 and 2 and The Game of Logic, which was intended
to introduce logic to children.]
If validity is so critical in evaluating deductive arguments, how do we go about determining whether an
argument is valid or invalid? In deductive reasoning, the key is to look at the pattern of an argument,
which is called its logical form. As an example, see if you can tell whether the following argument is
valid:
All quidnuncs are shunpikers.
All shunpikers are flibbertigibbets.
Therefore, all quidnuncs are flibbertigibbets.
You could likely tell that the argument is valid even though you do not know the meanings of the words.
This is an important point. We can often tell whether an argument is valid even if we are not in a
position to know whether any of its propositions are true or false. This is because deductive validity
typically depends on certain patterns of argument. In fact, even nonsense arguments can be valid. Lewis
Carroll (a pen name for C. L. Dodgson) was not only the author of Alice’s Adventures in Wonderland, but
also a clever logician famous for both his use of nonsense words and his tricky logic puzzles.
We will look at some of Carroll’s puzzles in this chapter’s sections on categorical logic, but for now, let us
look at an argument using nonsense words from his poem “Jabberwocky.” See if you can tell whether
the following argument is valid:
All bandersnatches are slithy toves.
All slithy toves are uffish.
Therefore, all bandersnatches are uffish.
If you could tell the argument about quidnuncs was valid, you were probably able to tell that this
argument is valid as well. Both arguments have the same pattern, or logical form.
Representing Logical Form
Logical form is generally represented by using variables or other symbols to highlight the pattern. In this
case the logical form can be represented by substituting capital letters for certain parts of the
propositions. Our argument then has the form:
All S are M.
All M are P.
Therefore, all S are P.
Any argument that follows this pattern, or form, is valid. Try it for yourself. Think of any three plural
nouns; they do not have to be related to each other. For example, you could use submarines, candy
bars, and mountains. When you have thought of three, substitute them for the letters in the pattern
given. You can put them in any order you like, but the same word has to replace the same letter. So you
will put one noun in for S in the first and third lines, one noun for both instances of M, and your last
noun for both cases of P. If we use the suggested nouns, we would get:
All submarines are candy bars.
All candy bars are mountains.
Therefore, all submarines are mountains.
This argument may be close to nonsense, but it is logically valid. It would not be possible to make up a
story in which the premises were true but the conclusion was false. For example, if one wizard turns all
submarines into candy bars, and then a second wizard turns all candy bars into mountains, the story
would not make any sense (nor would it be logical) if, in the end, all submarines were not mountains.
Any story that makes the premises true would have to also make the conclusion true, so that the
argument is valid.
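The substitution exercise above can also be sketched mechanically (a hypothetical illustration, not part of the text): a form is a template with variables, and an instance results from uniformly replacing each variable with the same noun everywhere it occurs.

```python
# The valid pattern from the text, as a template with variables S, M, P.
FORM = ("All {S} are {M}.",
        "All {M} are {P}.",
        "Therefore, all {S} are {P}.")

def instance(S, M, P):
    """Produce an instance of the form by uniform substitution."""
    return [line.format(S=S, M=M, P=P) for line in FORM]

for line in instance("submarines", "candy bars", "mountains"):
    print(line)
# All submarines are candy bars.
# All candy bars are mountains.
# Therefore, all submarines are mountains.
```

The requirement that the same word replaces the same letter is what `format` enforces here: S, M, and P each get exactly one value for the whole argument.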
As mentioned, the form of an argument is what you get when you remove the specific meaning of each
of the nonlogical words in the argument and talk about them in terms of variables. Sometimes,
however, one has to change the wording of a claim to make it fit the required form. For example,
consider the premise “All men like dogs.” In this case the first category would be “men,” but the second
category is not represented by a plural noun but by a predicate phrase, “like dogs.” In such cases we
turn the expression “like dogs” into the noun phrase “people who like dogs.” In that case the form of the
sentence is still “All A are B,” in which B is “people who like dogs.” As another example, the argument:
All whales are mammals.
Some mammals live in the water.
Therefore, at least some whales live in the water.
can be rewritten with plural nouns as:
All whales are mammals.
Some mammals are things that live in the water.
Therefore, at least some whales are things that live in the water.
and has the form:
All A are B.
Some B are C.
Therefore, at least some A are C.
The variables can represent anything (anything that fits grammatically, that is). When we substitute
specific expressions (of the appropriate grammatical category) for each of the variables, we get an
instance of that form. So another instance of this form could be made by replacing A with Apples, B with
Bananas, and C with Cantaloupes. This would give us
All Apples are Bananas.
Some Bananas are Cantaloupes.
Therefore, at least some Apples are Cantaloupes.
It does not matter at this stage whether the sentences are true or false or whether the reasoning is valid
or invalid. All we are concerned with is the form or pattern of the argument.
We will see many different patterns as we study deductive logic. Different kinds of deductive arguments
require different kinds of forms. The form we just used is based on categories; the letters represented
groups of things, like dogs, whales, mammals, submarines, or candy bars. That is why in these cases we
use plural nouns. Other patterns will require substituting entire sentences for letters. We will study
forms of this type in Chapter 4. The patterns you need to know will be introduced as we study each kind
of argument, so keep your eyes open for them.
Using the Counterexample Method
By definition, an argument form is valid if and only if all of its instances are valid. Therefore, if we can
show that a logical form has even one invalid instance, then we may infer that the argument form is
invalid. Such an instance is called a counterexample to the argument form’s validity; thus, the
counterexample method for showing that an argument form is invalid involves creating an argument
with the exact same form but in which the premises are true and the conclusion is false. (We will
examine other methods in this chapter and in later chapters.) In other words, finding a counterexample
demonstrates the invalidity of the argument’s form.
Consider the invalid argument example from the prior section:
All donuts have added sugar.
All donuts are bad for you.
Therefore, everything with added sugar is bad for you.
After its predicate phrases are replaced with noun phrases, this argument has the form:
All A are B.
All A are C.
Therefore, all B are C.
This is the same form as that of the following, clearly invalid argument:
All birds are animals.
All birds have feathers.
Therefore, all animals have feathers.
Because we can see that the premises of this argument are true and the conclusion is false, we know
that the argument is invalid. Since we have identified an invalid instance of the form, we know that the
form is invalid. The invalid instance is a counterexample to the form. Because we have a
counterexample, we have good reason to think that the argument about donuts is not valid.
One of our recent examples has the form:
All A are B.
Some B are C.
Therefore, at least some A are C.
Here is a counterexample that challenges this argument form’s validity:
All dogs are mammals.
Some mammals are cats.
Therefore, at least some dogs are cats.
By substituting dogs for A, mammals for B, and cats for C, we have found an instance of the argument's
form that is clearly invalid because it moves from true premises to a false conclusion. Therefore, the
argument form is invalid.
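The counterexample method itself can be automated for small cases (an illustrative sketch; the numeric "individuals" and the two-element universe are assumptions, not from the text). The search below looks for sets A, B, and C that make both premises of the form true while its conclusion is false:

```python
from itertools import product

def subsets(universe):
    """Every subset of a finite universe, as frozensets."""
    return [frozenset(x for x, b in zip(universe, bits) if b)
            for bits in product([False, True], repeat=len(universe))]

# Look for sets A, B, C over a tiny universe that make both premises true
# and the conclusion false -- a counterexample to the form:
#   All A are B.  Some B are C.  Therefore, some A are C.
found = None
for A, B, C in product(subsets([0, 1]), repeat=3):
    # Skip an empty A so the counterexample stays intuitive.
    if A and A <= B and B & C and not (A & C):
        found = (A, B, C)
        break
print([sorted(s) for s in found])  # [[1], [0, 1], [0]]
```

The result mirrors the dogs/mammals/cats counterexample: A sits inside B, and C overlaps B without touching A.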
Here is another example of an argument:
All monkeys are primates.
No monkeys are reptiles.
Therefore, no primates are reptiles.
The conclusion is true in this example, so many may mistakenly think that the reasoning is valid.
However, to better investigate the validity of the reasoning, it is best to focus on its form. The form of
this argument is:
All A are B.
No A are C.
Therefore, no B are C.
To demonstrate that this form is invalid, it will suffice to demonstrate that there is an argument of this
exact form that has all true premises and a false conclusion. Here is such a counterexample:
All men are human.
No men are women.
Therefore, no humans are women.
Clearly, there is something wrong with this argument. Though this is a different argument, the fact that
it is clearly invalid, even though it has the exact same form as our original argument, means that the
original argument’s form is also invalid.
3.3 Types of Deductive Arguments
Once you learn to look for arguments, you will see them everywhere. Deductive arguments play very
important roles in daily reasoning. This section will discuss some of the most important types of
deductive arguments.
Mathematical Arguments
Arguments about or involving mathematics generally use deductive reasoning. In fact, one way to think
about deductive reasoning is that it is reasoning that tries to establish its conclusion with mathematical
certainty. Let us consider some examples.
A mathematical proof is a valid deductive argument whose premises are accepted mathematical truths.
Because mathematical proofs are deductively valid, mathematicians can establish mathematical truths with
complete certainty (as long as they agree on the premises).
Suppose you are splitting the check for lunch with a friend. In calculating your portion, you reason as
follows:
I had the chicken sandwich plate for $8.49.
I had a root beer for $1.29.
I had nothing else.
$8.49 + $1.29 = $9.78.
Therefore, my portion of the bill, excluding tip and tax, is $9.78.
Notice that if the premises are all true, then the conclusion must be true also. Of course, you might be
mistaken about the prices, or you might have forgotten that you had a piece of pie for dessert. You
might even have made a mistake in how you added up the prices. But these are all premises. So long as
your premises are correct and the argument is valid, then the conclusion is certain to be true.
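The arithmetic premise is exactly the kind of step a machine can check. Here is a minimal sketch (the item names come from the example above; using Python's decimal module is a design choice to avoid binary floating-point rounding when summing prices):

```python
from decimal import Decimal

# Sum the itemized premises exactly; Decimal avoids the rounding surprises
# that binary floats can introduce when adding prices.
items = {
    "chicken sandwich plate": Decimal("8.49"),
    "root beer": Decimal("1.29"),
}
total = sum(items.values())
print(total)  # 9.78
```

Of course, the check only confirms the addition premise; whether the listed prices are right remains a question about the other premises, not about the reasoning.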
But wait, you might say—aren’t we often mistaken about things like this? After all, it is common for
people to make mistakes when figuring out a bill. Your friend might even disagree with one of your
premises: For example, he might think the chicken sandwich plate was really $8.99. How can we say that
the conclusion is established with mathematical certainty if we are willing to admit that we might be
mistaken?
These are excellent questions, but they pertain to our certainty of the truth of the premises. The
important feature of valid arguments is that the reasoning is so strong that the conclusion is just as
certain to be true as the premises. It would be a very strange friend indeed who agreed with all of your
premises and yet insisted that your portion of the bill was something other than $9.78. Still, no matter
how good our reasoning, there is almost always some possibility that we are mistaken about our
premises.
Arguments From Definitions
Another common type of deductive argument is argument from definition. This type of argument
typically has two premises. One premise gives the definition of a word; the second premise says that
something meets the definition. Here is an example:
Bachelor means “unmarried male.”
John is an unmarried male.
Therefore, John is a bachelor.
Notice that as with arguments involving math, we may disagree with the premises, but it is very hard to
agree with the premises and disagree with the conclusion. When the argument is set out in standard
form, it is typically relatively easy to see that the argument is valid.
On the other hand, it can be a little tricky to tell whether the argument is sound. Have we really gotten
the definition right? We have to be very careful, as definitions often sound right even though they are a
little bit off. For example, the stated definition of bachelor is not quite right. At the very least, the
definition should apply only to human males, and probably only adult ones. We do not normally call
children or animals “bachelors.”
When crafting or evaluating a deductive argument via definition, special attention should be paid to the
clarity of the definition.
An interesting feature of definitions is that they can be understood as going both ways. In other words,
if bachelor means “unmarried male,” then we can reason either from the man being an unmarried male
to his being a bachelor, as in the previous example, or from the man being a bachelor to his being an
unmarried male, as in the following example.
Bachelor means “unmarried male.”
John is a bachelor.
Therefore, John is an unmarried male.
Arguments from definition can be very powerful, but they can also be misused. This typically happens
when a word has two meanings or when the definition is not fully accurate. We will learn more about
this when we study fallacies in Chapter 7, but here is an example to consider:
Murder is the taking of an innocent life.
Abortion takes an innocent life.
Therefore, abortion is murder.
This is an argument from definition, and it is valid—the premises guarantee the truth of the conclusion.
However, are the premises true? Both premises could be disputed, but the first premise is probably not
right as a definition. If the word murder really just meant “taking an innocent life,” then it would be
impossible to commit murder by killing someone who was not innocent. Furthermore, there is nothing
in this definition about the victim being a human or the act being intentional. It is very tricky to get
definitions right, and we should be very careful about reaching conclusions based on oversimplified
definitions. We will come back to this example from a different angle in the next section when we study
syllogisms.
Categorical Arguments
Historically, some of the first arguments to receive a detailed treatment were categorical arguments,
having been thoroughly explained by Aristotle himself (Smith, 2014). Categorical arguments are
arguments whose premises and conclusions are statements about categories of things. Let us revisit an
example from earlier in this chapter:
All whales are mammals.
All mammals breathe air.
Therefore, all whales breathe air.
In each of the statements of this argument, the membership of two categories is compared. The
categories here are whales, mammals, and air breathers. As discussed in the previous section on
evaluating deductive arguments, the validity of these arguments depends on the repetition of the
category terms in certain patterns; it has nothing to do with the specific categories being compared. You
can test this by replacing the category terms whales, mammals, and air breathers with any other
category terms you like. Because this argument’s form is valid, any other argument with the same form
will be valid. The branch of deductive reasoning that deals with categorical arguments is known as
categorical logic. We will discuss it in the next two sections.
Propositional Arguments
Propositional arguments are a type of reasoning that relates sentences to each other rather than
relating categories to each other. Consider this example:
Either Jill is in her room, or she’s gone out to eat.
Jill is not in her room.
Therefore, she’s gone out to eat.
Notice that in this example the pattern is made by the sentences “Jill is in her room” and “she’s gone out
to eat.” As with categorical arguments, the validity of propositional arguments can be determined by
examining the form, independent of the specific sentences used. The branch of deductive reasoning that
deals with propositional arguments is known as propositional logic, which we will discuss in Chapter 4.
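The validity check for this propositional form can be sketched as a truth-table search (an illustrative example, not from the text; R stands for "Jill is in her room" and E for "she's gone out to eat"). The form is valid exactly when no row of the truth table makes the premises true and the conclusion false:

```python
from itertools import product

# R: "Jill is in her room"    E: "she's gone out to eat"
# Form:  Either R or E.   Not R.   Therefore, E.
# Enumerate every truth-table row and keep any that makes both premises
# true while the conclusion is false.
counterexample_rows = [
    (r, e)
    for r, e in product([False, True], repeat=2)
    if (r or e) and (not r) and (not e)   # premises true, conclusion false
]
print(counterexample_rows)  # [] -- no such row, so the form is valid
```

Only four rows exist for two sentences, and none of them defeats the argument; this pattern (often called disjunctive syllogism) is valid no matter which sentences replace R and E.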
3.4 Categorical Logic: Introducing Categorical Statements
The field of deductive logic is a rich and productive one; one could spend an entire lifetime studying it.
(See A Closer Look: More Complicated Types of Deductive Reasoning.) Because the focus of this book is
critical thinking and informal logic (rather than formal logic), we will only look closely at categorical and
propositional logic, which focus on the basics of argument. If you enjoy this introductory exposure, you
might consider looking for more books and courses in logic.
A Closer Look: More Complicated Types of Deductive Reasoning
As noted, deductive logic deals with a precise kind of reasoning in which logical validity is based on
logical form. Within logical forms, we can use letters as variables to replace English words. Logicians also
frequently replace other words that occur within arguments—such as all, some, or, and not—to create a
kind of symbolic language. Formal logic represented in this type of symbolic language is called symbolic
logic.
Because of this use of symbols, courses in symbolic logic end up looking like math classes. An
introductory course in symbolic logic will typically begin with propositional logic and then move to
something called predicate logic. Predicate logic combines everything from categorical and propositional
logic but allows much more flexibility in the use of some and all. This flexibility allows it to represent
much more complex and powerful statements.
Predicate logic forms the basis for even more advanced types of logic. Modal logic, for example, can be
used to represent many deductive arguments about possibility and necessity that cannot be symbolized
using predicate logic alone. Predicate logic can even help provide a foundation for mathematics. In
particular, when predicate logic is combined with a mathematical field called set theory, it is possible to
prove the fundamental truths of arithmetic. From there it is possible to demonstrate truths from many
important fields of mathematics, including calculus, without which we could not do physics, engineering,
or many other fascinating and useful fields. Even the computers that now form such an essential part of
our lives are founded, ultimately, on deductive logic.
Categorical arguments have been studied extensively for more than 2,000 years, going back to Aristotle.
Categorical logic is the logic of arguments made up of categorical statements. It is a logic that is
concerned with reasoning about certain relationships between categories of things. To learn more about
how categorical logic works, it will be useful to begin by analyzing the nature of categorical statements,
which make up the premises and conclusions of categorical arguments. A categorical statement talks
about two categories or groups. Just to keep things simple, let us start by talking about dogs, cats, and
animals.
One thing we can say about these groups is that all dogs are animals. Of course, all cats are animals, too.
So we have the following true categorical statements:
All dogs are animals.
All cats are animals.
In categorical statements, the first group name is called the subject term; it is what the sentence is
about. The second group name is called the predicate term. In the categorical sentences just mentioned,
dogs and cats are both in the subject position, and animals is in the predicate position. Group terms can
go in either position, but of course, the sentence might be false. For example, in the sentence “All
animals are dogs” the term dogs is in the predicate position.
You may recall that we can represent the logical form of these types of sentences by replacing the
category terms with single letters. Using this method, we can represent the form of these categorical
statements in the following way:
All D are A.
All C are A.
Another true statement we can make about these groups is “No dogs are cats.” Which term is in subject
position, and which is in predicate position? If you said that dogs is the subject and cats is the predicate,
you’re right! The logical form of “No dogs are cats” can be given as “No D are C.”
We now have two sentences in which the category dogs is the subject: “All dogs are animals” and “No
dogs are cats.” Both of these statements tell us something about every dog. The first, which starts with
all, tells us that each dog is an animal. The second, which begins with no, tells us that each dog is not a
cat. We say that both of these types of sentences are universal because they tell us something about
every member of the subject class.
Not all categorical statements are universal. Here are two statements about dogs that are not universal:
Some dogs are brown.
Some dogs are not tall.
Statements that talk about some of the things in a category are called particular statements. The
distinction between a statement being universal or particular is a distinction of quantity.
Another distinction is that we can say that the things mentioned are in or not in the predicate category.
If we say the things are in that category, our statement is affirmative. If we say the things are not in that
category, our statement is negative. The distinction between a statement being affirmative or negative
is a distinction of quality. For example, when we say “Some dogs are brown,” the thing mentioned
(dogs) is in the predicate category (brown things), making this an affirmative statement. When we say
“Some dogs are not tall,” the thing mentioned (dogs) is not in the predicate category (tall things), and so
this is a negative statement.
Taking both of these distinctions into account, there are four types of categorical statements: universal
affirmative, universal negative, particular affirmative, and particular negative. Table 3.1 shows the form
of each statement along with its quantity and quality.
Table 3.1: Types of categorical statements

Statement          Quantity     Quality
All S is P         Universal    Affirmative
No S is P          Universal    Negative
Some S is P        Particular   Affirmative
Some S is not P    Particular   Negative
To abbreviate these categories of statement even further, logicians over the millennia have used letters
to represent each type of statement. The abbreviations are as follows:
A: Universal affirmative (All S is P)
E: Universal negative (No S is P)
I: Particular affirmative (Some S is P)
O: Particular negative (Some S is not P)
Accordingly, the statements are known as A propositions, E propositions, I propositions, and O
propositions. Remember that the single capital letters in the statements themselves are just
placeholders for category terms; we can fill them in with any category terms we like. Figure 3.1 shows a
traditional way to arrange the four types of statements by quantity and quality.
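The four statement types can be modeled with sets. The following Python sketch is my own illustration (the function and variable names are not from the text): it treats each form as a claim about a subject set S and a predicate set P.

```python
# Illustrative sketch: the four categorical forms as tests on Python sets.

def a_prop(S, P):  # A: universal affirmative, "All S is P"
    return S <= P           # every member of S is also in P

def e_prop(S, P):  # E: universal negative, "No S is P"
    return not (S & P)      # S and P share no members

def i_prop(S, P):  # I: particular affirmative, "Some S is P"
    return bool(S & P)      # at least one member is in both

def o_prop(S, P):  # O: particular negative, "Some S is not P"
    return bool(S - P)      # at least one member of S is outside P

dogs = {"rex", "fido"}
animals = {"rex", "fido", "tom"}
print(a_prop(dogs, animals))  # True: all dogs are animals
print(e_prop(dogs, animals))  # False: some dog is an animal
```

Note how quality shows up as set membership versus exclusion, and quantity as "every member" versus "at least one member."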
Now we need to get just a bit clearer on what the four statements mean. Granted, the meaning of
categorical statements seems clear: To say, for example, that “no dogs are reptiles” simply means that
there are no things that are both dogs and reptiles. However, there are certain cases in which the way
that logicians understand categorical statements may differ somewhat from how they are commonly
understood in everyday language. In particular, there are two specific issues that can cause confusion.
Clarifying Particular Statements
The first issue is with particular statements (I and O propositions). When we use the word some in
everyday life, we typically mean more than one. For example, if someone says that she has some apples,
we generally think that this means that she has more than one. However, in logic, we take the word
some simply to mean at least one. Therefore, when we say that some S is P, we mean only that at least
one S is P. For example, we can say “Some dogs live in the White House” even if only one does.
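The logician's reading of some, "at least one," can be made concrete in a line of Python (the sets here are invented for illustration):

```python
# "Some" in logic means "at least one" -- one witness suffices.
white_house_dogs = {"bo"}       # hypothetical: a single dog lives there
dogs = {"bo", "rex"}
# "Some dogs live in the White House" is true with only one such dog:
some_dogs_in_wh = len(dogs & white_house_dogs) >= 1
print(some_dogs_in_wh)  # True
```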
Clarifying Universal Statements
The second issue involves universal statements (A and E propositions). It is often called the “issue of
existential presupposition”—the issue concerns whether a universal statement implies a particular
statement. For example, does the fact that all dogs are animals imply that some dogs are animals? The
question really becomes an issue only when we talk about things that do not really exist. For example,
consider the claim that all the survivors of the Civil War live in New York. Given that there are no
survivors of the Civil War anymore, is the statement true or not?
The Greek philosopher Aristotle, the inventor of categorical logic, would have said the statement is false.
He thought that "All S is P" could only be true if there was at least one S (Parsons, 2014). Modern
logicians, however, hold that "All S is P" is true even when no S exists. The reasons for the modern
view are somewhat beyond the scope of this text (see A Closer Look: Existential Import for a bit more
of an explanation), but an example will help support the claim that universal statements are true when
no member of the subject class exists.
Suppose we are driving somewhere and stop for snacks. We decide to split a bag of M&M’s. For some
reason, one person in our group really wants the brown M&M’s, so you promise that he can have all of
them. However, when we open the bag, it turns out that there are no brown candies in it. Since this
friend did not get any brown M&M’s, did you break your promise? It seems clear that you did not. He
did get all of the brown M&M’s that were in the bag; there just weren’t any. In order for you to have
broken your promise, there would have to be a brown M&M that you did not let your friend have.
Therefore, it is true that your friend got all the brown M&M’s, even though he did not get any.
This is the way that modern logicians think about universal propositions when there are no members of
the subject class. Any universal statement with an empty subject class is true, regardless of whether the
statement is positive or negative. It is true that all the brown M&M’s were given to your friend and also
true that no brown M&M’s were given to your friend.
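Python's built-in quantifiers behave exactly this way over an empty collection, so the M&M's example can be checked directly (a minimal sketch; the variable names are mine):

```python
# The modern (Boolean) reading: a universal statement with an empty
# subject class is vacuously true. all() over an empty iterable is True.
brown_mms_in_bag = []        # the bag contained no brown M&M's
given_to_friend = set()      # so none were handed over
# "All the brown M&M's were given to your friend" -- vacuously true:
print(all(m in given_to_friend for m in brown_mms_in_bag))      # True
# "No brown M&M's were given to your friend" -- also true:
print(not any(m in given_to_friend for m in brown_mms_in_bag))  # True
```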
A Closer Look: Existential Import
It is important to remember that particular statements in logic (I and O propositions) refer to things that
actually exist. The statement “Some dogs are mammals” is essentially saying, “There is at least one dog
that exists in the universe, and that dog is a mammal.” The way that logicians refer to this attribute of I
and O statements is that they have “existential import.” This means that for them to be true, there must
be something that actually exists that has the property mentioned in the statement.
The 19th-century mathematician George Boole, for whom Boolean logic is named, presented a
problem. Boole agreed with Aristotle that the existential statements I and O had to refer to existing
things to be true. For Aristotle, however, all true A statements also necessarily imply the truth of their
corresponding I statements, and the same goes for E and O statements. Boole challenged this
assertion and suggested that some traditionally valid forms of syllogisms had to be excluded.
Boole pointed out that some true A and E statements refer to things that do not actually exist. Consider
the statement “All vampires are creatures that drink blood.” This is a true statement. That means that
the corresponding I statement, “Some vampires are creatures that drink blood,” would also be true,
according to Aristotle. However, Boole noted that there are no existing things that are vampires. If
vampires do not exist, then the I statement, “Some vampires are creatures that drink blood,” is not true:
The truth of this statement rests on the idea that there is an actually existing thing called a vampire,
which, at this point, there is no evidence of.
Boole reasoned that Aristotle’s ideas did not work in cases where A and E statements refer to
nonexisting classes of objects. For example, the E statement “No vampires are time machines” is a true
statement. However, both classes in this statement refer to things that do not actually exist. Therefore,
the statement “Some vampires are not time machines” is not true, because this statement could only be
true if vampires and time machines actually existed.
Boole reasoned that Aristotle’s claim that true A and E statements led necessarily to true I and O
statements was not universally true. Hence, Boole claimed that there needed to be a revision of the
forms of categorical syllogisms that are considered valid. Because one cannot generally claim that an
existential statement (I or O) is true based on the truth of the corresponding universal (A or E), there
were some valid forms of syllogisms that had to be excluded under the Boolean (modern) perspective.
These syllogisms were precisely those that reasoned from universal premises to a particular conclusion.
Of course, we all recognize that in everyday life we can logically infer that if all dogs are mammals, then
it must be true that some dogs are mammals. That is, we know that there is at least one existing dog
that is a mammal. However, because our logical rules of evaluation need to apply to all instances of
syllogisms, and because there are other instances where universals do not lead of necessity to the truth
of particulars, the rules of evaluation had to be reformed after Boole presented his analysis. It is
important to avoid committing the existential fallacy, or assuming that a class has members and then
drawing an inference about an actually existing member of the class.
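Boole's point can be demonstrated in a few lines (an illustration of mine, not the text's): with an empty subject class, the A statement comes out true while the corresponding I statement comes out false.

```python
# Boole's observation: a vacuously true A statement does not make the
# corresponding I statement true when the subject class is empty.
vampires = set()                        # no vampires exist
blood_drinkers = {"mosquito", "leech"}  # illustrative members

a_true = all(v in blood_drinkers for v in vampires)  # "All vampires drink blood"
i_true = any(v in blood_drinkers for v in vampires)  # "Some vampires drink blood"
print(a_true)  # True  (vacuously)
print(i_true)  # False (no existing vampire to serve as a witness)
```

Inferring `i_true` from `a_true` here would be exactly the existential fallacy the text warns against.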
Accounting for Conversational Implication
These technical issues likely sound odd: We usually assume that some implies that there is more than
one and that all implies that something exists. This is known as conversational implication (as opposed
to logical implication). It is quite common in everyday life to make a conversational implication and take
a statement to suggest that another statement is true as well, even though it does not logically imply
that the other must be true. In logic, we focus on the literal meaning.
One of the common reasons that a statement is taken to conversationally imply another is that we are
generally expected to make the most fully informative statement that we can in response to a question.
For example, if someone asks what time it is and you say, “Sometime after 3,” your statement seems to
imply that you do not know the exact time. If you knew it was 3:15 exactly, then you probably should
have given this more specific information in response to the question.
For example, we all know that all dogs are animals. Suppose, however, someone says, “Some dogs are
animals.” That is an odd thing to say: We generally would not say that some dogs are animals unless we
thought that some of them are not animals. However, that would be making a conversational
implication, and we want to make logical implications. For the purposes of logic, we want to know
whether the statement “some dogs are animals” is true or false. If we say it is false, then we seem to
have stated it is not true that some dogs are animals; this, however, would seem to mean that there are
no dogs that are animals. That cannot be right. Therefore, logicians take the statement “Some dogs are
animals” simply to mean that there is at least one dog that is an animal, which is true. The statement
“Some dogs are not animals” is not part of the meaning of the statement “Some dogs are animals.” In
the language of logic, the statement that some S are not P is not part of the meaning of the statement
that some S are P.
Of course, it would be odd to make the less informative statement that some dogs are animals, since we
know that all dogs are animals. Because we tend to assume someone is making the most informative
statement possible, the statement “Some dogs are animals” may conversationally imply that they are
not all animals, even though that is not part of the literal meaning of the statement.
In short, a particular statement is true when there is at least one thing that makes it true, even if the
universal statement would also be true. In fact, sometimes we emphasize that we are not talking about
the whole category by using the words at least, as in, “At least some planets orbit stars.” Therefore, it
appears to be nothing more than conversational implication, not literal meaning, that leads our
statement “Some dogs are animals” to suggest that some also are not. When looking at categorical
statements, be sure that you are thinking about the actual meaning of the sentence rather than what
might be conversationally implied.
3.5 Categorical Logic: Venn Diagrams as Pictures of Meaning
Given that it is sometimes tricky to parse out the meaning and implications of categorical statements, a logician named John Venn devised a method that uses diagrams to clarify the literal meanings and logical implications of categorical claims. These diagrams are appropriately called Venn diagrams (Stapel, n.d.). Venn diagrams not only give a visual picture of the meanings of categorical statements, they also provide a method by which we can test the validity of many categorical arguments.
Drawing Venn Diagrams
Here is how the diagramming works: Imagine we get a bunch of people together and all go to a big field. We mark out a big circle with rope on the field and ask everyone with brown eyes to stand in the circle. Would you stand inside the circle or outside it? Where would you stand if we made another circle and asked everyone with brown hair to stand inside? If your eyes or hair are sort of brownish, just pick whether you think you should be inside or outside the circles. No standing on the rope allowed! Remember your answers to those two questions.
Picture the brown-eye circle, labeled "E" for "eyes," and note where you would stand, inside or outside the circle.
Now picture the brown-hair circle, labeled "H" for "hair," and again note where you would stand, inside or outside the circle.
Notice that each circle divides the people into two groups: Those inside the circle have the feature we are interested in, and those outside the circle do not.
Where would you stand if we put both circles on the ground at the same time?
As long as you do not have both brown eyes and brown hair, you should be able to figure out where to stand. But where would you stand if you have brown eyes and brown hair? There is not any spot that is in both circles, so you would have to choose. In order to give brown-eyed, brown-haired people a place to stand, we have to overlap the circles.
Now there is a spot where people who have both brown hair and brown eyes can stand: where the two circles overlap. We noted earlier that each circle divides our bunch of people into two groups, those inside and those outside. With two circles, we now have four groups. Figure 3.2 shows what each of those groups are and where people from each group would stand.
Figure 3.2: Sample Venn diagram
With this background, we can now draw a picture for each categorical statement. When we know a region is empty, we will darken it to show there is nobody there. If we know for sure that someone is in a region, we will put an x in it to represent a person standing there. Figure 3.3 shows the pictures for each of the four kinds of statements.
Figure 3.3: Venn diagrams of categorical statements
Each of the four categorical statements can be represented visually with a Venn diagram.
In drawing these pictures, we adopt the convention that the subject term is on the left and the predicate term is on the right. There is nothing special about this way of doing it, but diagrams are easier to understand if we draw them the same way as much as possible. The important thing to remember is that a Venn diagram is just a picture of the meaning of a statement. We will use this fact in our discussion of inferences and arguments.
Drawing Immediate Inferences
As mentioned, Venn diagrams help us determine what inferences are valid. The most basic of such inferences, and a good place to begin, is something called immediate inference. Immediate inferences are arguments from one categorical statement as premise to another as conclusion. In other words, we immediately infer one statement from another. Despite the fact that these inferences have only one premise, many of them are logically valid. This section will use Venn diagrams to help discern which immediate inferences are valid.
The basic method is to draw a diagram of the premises of the argument and determine if the diagram thereby shows the conclusion is true. If it does, then the argument is valid. In other words, if drawing a diagram of just the premises automatically creates a diagram of the conclusion, then the argument is valid. The diagram shows that any way of making the premises true would also make the conclusion true; it is impossible for the conclusion to be false when the premises are true. We will see how to use this method with each of the immediate inferences and later extend the method to more complicated arguments.
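The four regions of the overlapped diagram can also be sketched in code. The following Python snippet is my own illustration (the region names are invented labels): it splits a small universe into the four Venn regions from Figure 3.2.

```python
# Illustrative sketch: the four regions of a two-circle Venn diagram.
def regions(S, P, universe):
    """Split a universe of things into the four Venn regions."""
    return {
        "S_only":  S - P,             # inside S, outside P
        "both":    S & P,             # in the overlap
        "P_only":  P - S,             # inside P, outside S
        "neither": universe - S - P,  # outside both circles
    }

universe = {"rex", "tom", "rock"}
dogs, animals = {"rex"}, {"rex", "tom"}
r = regions(dogs, animals, universe)
print(r["S_only"])  # set(): "All dogs are animals" shades this region empty
print(r["both"])    # {'rex'}
```

A shaded region in a diagram corresponds to an empty set here; an x corresponds to a nonempty one.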
Conversion
Conversion is just a matter of switching the positions of the subject and predicate terms. The resulting statement is called the converse of the original statement. Table 3.2 shows the converse of each type of statement.
Table 3.2: Conversion

Statement          Converse
All S is P.        All P is S.
No S is P.         No P is S.
Some S is P.       Some P is S.
Some S is not P.   Some P is not S.
Forming the converse of a statement is easy; just switch the subject and predicate terms with each other. The question now is whether the immediate inference from a categorical statement to its converse is valid or not. It turns out that the argument from a statement to its converse is valid for some statement types, but not for others. In order to see which, we have to check that the converse is true whenever the original statement is true.
An easy way to do this is to draw a picture of the two statements and compare them. Let us start by looking at the universal negative statement, or E proposition, and its converse. If we form an argument from this statement to its converse, we get the following:
No S is P.
Therefore, no P is S.
Figure 3.4 shows the Venn diagrams for these statements.
As you can see, the same region is shaded in both pictures: the region that is inside both circles. It does not matter which order the circles are in; the picture is the same. This means that the two statements have the same meaning; we call such statements equivalent.
The Venn diagrams for these statements demonstrate that all of the information in the conclusion is present in the premise. We can therefore infer that the inference is valid. A shorter way to say it is that conversion is valid for universal negatives.
We see the same thing when we look at the particular affirmative statement, or I proposition.
In the case of particular affirmatives as well, we can see that all of the information in the conclusion is contained within the premises. Therefore, the immediate inference is valid. In fact, because the diagram for "Some S is P" is the same as the diagram for its converse, "Some P is S" (see Figure 3.5), it follows that these two statements are equivalent as well.
Figure 3.4: Universal negative statement and its converse
In this representation of "No S is P. Therefore, no P is S," the areas shaded are the same, meaning the statements are equivalent.
Figure 3.5: Particular affirmative statement and its converse
As with the E proposition, all of the information contained in the conclusion of the I proposition is also contained within the premises, making the inference valid.
However, there will be a big difference when we draw pictures of the universal affirmative (A proposition), the particular negative (O proposition), and their converses (see Figure 3.6 and Figure 3.7).
In these two cases we get different pictures, so the statements do not mean the same thing. In the original statements, the marked region is inside the S circle but not in the P circle. In the converse statements, the marked region is inside the P circle but not in the S circle. Because there is information in the conclusions of these arguments that is not present in the premises, we may infer that conversion is invalid in these two cases.
Figure 3.6: Universal affirmative statement and its converse
Unlike Figures 3.4 and 3.5, where the diagrams were identical, we get two different diagrams for A propositions. This tells us that there is information contained in the conclusion that was not included in the premises, making the inference invalid.
Figure 3.7: Particular negative statement and its converse
As with A propositions, O propositions present information in the conclusion that was not present in the premises, rendering the inference invalid.
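These conversion results can also be spot-checked computationally. The Python sketch below (my illustration, not the text's method) brute-forces every pair of subsets of a tiny three-element universe; it is a finite sanity check rather than a proof.

```python
# Brute-force check of which conversions hold, over a small universe.
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(u):
    """All subsets of u (the power set)."""
    return [set(c) for c in
            chain.from_iterable(combinations(u, n) for n in range(len(u) + 1))]

props = {
    "A": lambda S, P: S <= P,        # All S is P
    "E": lambda S, P: not (S & P),   # No S is P
    "I": lambda S, P: bool(S & P),   # Some S is P
    "O": lambda S, P: bool(S - P),   # Some S is not P
}

for name, stmt in props.items():
    # Conversion swaps the terms: does stmt(P, S) hold whenever stmt(S, P) does?
    ok = all(stmt(P, S) for S in subsets(U) for P in subsets(U) if stmt(S, P))
    print(name, "conversion valid:", ok)
# A conversion valid: False
# E conversion valid: True
# I conversion valid: True
# O conversion valid: False
```

The output matches the diagrams: conversion preserves truth for E and I, but not for A and O.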
Let us consider another type of immediate inference.
Contraposition
Before we can address contraposition, it is necessary to introduce the idea of a complement class. Remember that for any category, we can divide things into those that are in the category and those that are out of the category. When we imagined rope circles on a field, we asked all the brown-haired people to step inside one of the circles. That gave us two groups: the brown-haired people inside the circle, and the non-brown-haired people outside the circle. These two groups are complements of each other. The complement of a group is everything that is not in the group. When we have a term that gives us a category, we can just add non- before the term to get a term for the complement group. The complement of term S is non-S, the complement of term animal is non-animal, and so on. Let us see what complementing a term does to our Venn diagrams.
Recall the diagram for brown-eyed people. You were inside the circle if you have brown eyes, and outside the circle if you do not. (Remember, we did not let people stand on the rope; you had to be either in or out.)
So now consider the diagram for non-brown-eyed people.
If you were inside the brown-eyed circle, you would be outside the non-brown-eyed circle. Similarly, if you were outside the brown-eyed circle, you would be inside the non-brown-eyed circle. The same would be true for complementing the brown-haired circle. Complementing just switches the inside and outside of the circle.
Do you remember the four regions from Figure 3.2? See if you can find the regions that would have the same people in the complemented picture. Where would someone with blue eyes and brown hair stand in each picture? Where would someone stand if he had red hair and green eyes? How about someone with brown hair and brown eyes?
In Figure 3.8, the regions are colored to indicate which ones would have the same people in them. Use the diagram to help check your answers from the previous paragraph. Notice that the regions in both circles and outside both circles trade places and that the region in the left circle only trades places with the region in the right circle.
Figure 3.8: Complement class
Now that we know what a complement is, we are ready to look at the immediate inference of contraposition. Contraposition combines conversion and complementing; to get the contrapositive of a statement, we first get the converse and then find the complement of both terms.
Let us start by considering the universal affirmative statement, "All S is P." First we form its converse, "All P is S," and then we complement both class terms to get the contrapositive, "All non-P is non-S." That may sound like a mouthful, but you should see that there is a simple, straightforward process for getting the contrapositive of any statement. Table 3.3 shows the process for each of the four types of categorical statements.
Table 3.3: Contraposition

Original           Converse           Contrapositive
All S is P.        All P is S.        All non-P is non-S.
No S is P.         No P is S.         No non-P is non-S.
Some S is P.       Some P is S.       Some non-P is non-S.
Some S is not P.   Some P is not S.   Some non-P is not non-S.
Figure 3.9 shows the diagrams for the four statement types and their contrapositives, colored so that you can see which regions represent the same groups.
Figure 3.9: Contrapositive Venn diagrams
Using the converse and contrapositive diagrams, you can infer the original statement.
As you can see, contraposition preserves meaning in universal affirmative and particular negative statements. So from either of these types of statements, we can immediately infer their contrapositive, and from the contrapositive, we can infer the original statement. In other words, these statements are equivalent; therefore, in those two cases, contraposition is valid.
In the other cases, particular affirmative and universal negative, we can see that there is information in the conclusion that is not present in the diagram of the premise; these immediate inferences are invalid.
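As with conversion, these results can be sanity-checked by brute force. In the sketch below (again my illustration, and a finite check rather than a proof), a term's complement is taken relative to a small universe U, so the contrapositive of a statement about S and P is the same statement about U - P and U - S.

```python
# Brute-force check of contraposition over a small universe.
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(u):
    """All subsets of u (the power set)."""
    return [set(c) for c in
            chain.from_iterable(combinations(u, n) for n in range(len(u) + 1))]

props = {
    "A": lambda S, P: S <= P,        # All S is P
    "E": lambda S, P: not (S & P),   # No S is P
    "I": lambda S, P: bool(S & P),   # Some S is P
    "O": lambda S, P: bool(S - P),   # Some S is not P
}

for name, stmt in props.items():
    # Contraposition: converse plus complementing both terms.
    ok = all(stmt(U - P, U - S) for S in subsets(U) for P in subsets(U) if stmt(S, P))
    print(name, "contraposition valid:", ok)
# A contraposition valid: True
# E contraposition valid: False
# I contraposition valid: False
# O contraposition valid: True
```

The output agrees with Figure 3.9's verdict: contraposition is valid for A and O, invalid for E and I.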
There are more immediate inferences that can be made, but our main focus in this chapter is on arguments with multiple premises, which tend to be more interesting, so we are going to move on to syllogisms.
3.6 Categorical Logic: Categorical Syllogisms
Whereas contraposition and conversion can be seen as arguments with only one premise, a syllogism is a deductive argument with two premises. The categorical syllogism, in which a conclusion is derived from two categorical premises, is perhaps the most famous, and certainly one of the oldest, forms of deductive argument. The categorical syllogism (which we will refer to here as just "syllogism") was presented by Aristotle in his Prior Analytics (350 BCE/1994); it is a very specific kind of deductive argument and was subsequently studied and developed extensively by logicians, mathematicians, and philosophers.
Terms
We will first discuss the syllogism's basic outline, following Aristotle's insistence that syllogisms are arguments that have two premises and a conclusion. Let us look again at our standard example:
All M are P.
All S are M.
Therefore, all S are P.
There are three terms in total here: S, M, and P. The term that occurs in the predicate position in the conclusion (in this case, P) is the major term. The term that occurs in the subject position in the conclusion (in this case, S) is the minor term. The other term, the one that occurs in both premises but not the conclusion, is the middle term (in this case, M).
The premise that includes the major term is called the major premise. In this case it is the first premise. The premise that includes the minor term, the second one here, is called the minor premise. The conclusion will present the relationship between the predicate term of the major premise (P) and the subject term of the minor premise (S) (Smith, 2014).
There are 256 possible different forms of syllogisms, but only a small fraction of those are valid, which can be shown by testing syllogisms through the traditional rules of the syllogism or by using Venn diagrams, both of which we will look at later in this section.
Distribution
As Aristotle understood logical propositions, they referred to classes, or groups: sets of things. So a universal affirmative (type A) proposition that states "All Clydesdales are horses" refers to the group of Clydesdales and says something about the relationship between all of the members of that group and the members of the group "horses." However, nothing at all is said about those horses that might not be Clydesdales, so not all members of the group of horses are referred to. The idea of referring to members of such groups is the basic idea behind distribution: If all of the members of a group are referred to, the term that refers to that group is said to be distributed.
Using our example, then, we can see that the proposition "All Clydesdales are horses" refers to all the members of that group, so the term Clydesdales is said to be distributed. Universal affirmatives like this one distribute the term that is in the first, or subject, position.
However, what if the proposition were a universal negative (type E) proposition, such as "No koala bears are carnivores"? Here all the members of the group "koala bears" (the subject term) are referred to, but all the members of the group "carnivores" (the predicate term) are also referred to. When we say that no koala bears are carnivores, we have said something about all koala bears (that they are not carnivores) and also something about all carnivores (that they are not koala bears). So in this universal negative proposition, both of its terms are distributed.
To sum up distribution for the universal propositions, then: Universal affirmative (A) propositions distribute only the first (subject) term, and universal negative (E) propositions distribute both the first (subject) term and the second (predicate) term.
The distribution pattern follows the same basic idea for particular propositions. A particular affirmative (type I) proposition, such as "Some students are football players," refers only to at least one member of the subject class ("students") and only to at least one member of the predicate class ("football players"). Thus, remembering that some is interpreted as meaning "at least one," the particular affirmative proposition distributes neither term, for this proposition does not refer to all the members of either group.
Finally, a particular negative (type O) proposition, such as "Some Floridians are not surfers," refers only to at least one Floridian, but says that at least one Floridian does not belong to the entire class of surfers or is excluded from the entire class of surfers. In this way, the particular negative proposition distributes only the term that refers to surfers, or the predicate term.
To sum up distribution for the particular propositions, then: Particular affirmative (I) propositions distribute neither the first (subject) nor the second (predicate) term, and particular negative (O) propositions distribute only the second (predicate) term. This is a lot of detail, to be sure, but it is summarized in Table 3.4.
Table 3.4: Distribution

Proposition   Subject           Predicate
A             Distributed       Not distributed
E             Distributed       Distributed
I             Not distributed   Not distributed
O             Not distributed   Distributed
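Table 3.4 amounts to a small lookup table, which can be written out directly (a sketch of mine; the dictionary layout is just one convenient encoding):

```python
# Table 3.4 as a lookup: which positions each proposition type distributes.
DISTRIBUTION = {
    "A": {"subject": True,  "predicate": False},
    "E": {"subject": True,  "predicate": True},
    "I": {"subject": False, "predicate": False},
    "O": {"subject": False, "predicate": True},
}
# "All Clydesdales are horses" (A) distributes its subject term:
print(DISTRIBUTION["A"]["subject"])    # True
# "Some Floridians are not surfers" (O) distributes its predicate term:
print(DISTRIBUTION["O"]["predicate"])  # True
```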
Once you understand how distribution works, the rules for determining the validity of syllogisms are fairly straightforward. You just need to see that in any given syllogism, there are three terms: a subject term, a predicate term, and a middle term. But there are only two positions, or "slots," a term can appear in, and distribution relates to those positions.
Rules for Validity
Once we know how to determine whether a term is distributed, it is relatively easy to learn the rules for determining whether a categorical syllogism is valid. The traditional rules of the syllogism are given in various ways, but here is one standard way:
Rule 1: The middle term must be distributed at least once.
Rule 2: Any term distributed in the conclusion must be distributed in its corresponding premise.
Rule 3: If the syllogism has a negative premise, it must have a negative conclusion, and if the syllogism has a negative conclusion, it must have a negative premise.
Rule 4: The syllogism cannot have two negative premises.
Rule 5: If the syllogism has a particular premise, it must have a particular conclusion, and if the syllogism has a particular conclusion, it must have a particular premise.
A syllogism that satisfies all five of these rules will be valid; a syllogism that does not will be invalid. Perhaps the easiest way of seeing how the rules work is to go through a few examples. We can start with our standard syllogism with all universal affirmatives:
All M are P.
All S are M.
Therefore, all S are P.
The Origins of Logic
The text describes five rules for determining a syllogism's validity, but Aristotle's fundamental
rules were far more basic.
Critical Thinking Questions
1. The law of noncontradiction and the law of the excluded middle establish that a proposition cannot be both true and false and must be either true or false. Can you think of a proposition that violates either of these rules?
2. Aristotle's syllogism form, or the standard argument form, allows us to condense arguments into their fundamental pieces for easier evaluation. Try putting an argument you have heard into the standard form.
Rule 1 is satisfied: The middle term is distributed by the first premise; a universal affirmative (A) proposition distributes the term in the first (subject) position, which here is M. Rule 2 is satisfied because the subject term that is distributed by the conclusion is also distributed by the second premise. In both the conclusion and the second premise, the universal affirmative proposition distributes the term in the first position. Rule 3 is also satisfied because there is not a negative premise without a negative conclusion, or a negative conclusion without a negative premise (all the propositions in this syllogism are affirmative). Rule 4 is passed because both premises are affirmative. Finally, Rule 5 is passed as well because there are no particular premises and the conclusion is universal. Since this syllogism passes all five rules, it is valid.
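Because the five rules are purely mechanical, they lend themselves to code. The Python sketch below is our own illustration (the names and the triple representation are invented, not from the text): each proposition is a (type, subject, predicate) triple, and the function applies the five rules exactly as stated above. The syllogism just analyzed passes all five.

```python
# An illustrative implementation of the five rules (our own sketch).
# Each proposition is a (type, subject_term, predicate_term) triple,
# where type is one of 'A', 'E', 'I', 'O'.

DISTRIBUTES = {"A": ("subject",), "E": ("subject", "predicate"),
               "I": (), "O": ("predicate",)}
NEGATIVE = {"E", "O"}
PARTICULAR = {"I", "O"}

def distributed_terms(prop):
    """The set of terms this proposition distributes."""
    kind, subj, pred = prop
    terms = set()
    if "subject" in DISTRIBUTES[kind]:
        terms.add(subj)
    if "predicate" in DISTRIBUTES[kind]:
        terms.add(pred)
    return terms

def is_valid(major, minor, conclusion):
    """Apply the five rules to a well-formed categorical syllogism."""
    premises = (major, minor)
    _, s, p = conclusion
    # The middle term appears in both premises but not in the conclusion.
    middle = (({major[1], major[2]} & {minor[1], minor[2]}) - {s, p}).pop()
    # Rule 1: the middle term must be distributed at least once.
    if not any(middle in distributed_terms(pr) for pr in premises):
        return False
    # Rule 2: any term distributed in the conclusion must be
    # distributed in the premise containing it.
    for term in distributed_terms(conclusion):
        if not any(term in distributed_terms(pr) for pr in premises):
            return False
    negatives = sum(pr[0] in NEGATIVE for pr in premises)
    # Rule 3: a negative premise requires a negative conclusion, and vice versa.
    if (negatives > 0) != (conclusion[0] in NEGATIVE):
        return False
    # Rule 4: the syllogism cannot have two negative premises.
    if negatives == 2:
        return False
    # Rule 5: a particular premise requires a particular conclusion, and vice versa.
    particulars = sum(pr[0] in PARTICULAR for pr in premises)
    if (particulars > 0) != (conclusion[0] in PARTICULAR):
        return False
    return True

# All M are P; All S are M; therefore, all S are P.
print(is_valid(("A", "M", "P"), ("A", "S", "M"), ("A", "S", "P")))  # True
```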
These get easier with practice, so we can try another example:
Some M are not P.
All M are S.
Therefore, some S are not P.
Rule 1 is passed because the second premise distributes the middle term, M, since it is the subject in the universal affirmative (A) proposition. Rule 2 is passed because the major term, P, that is distributed in the O conclusion is also distributed in the corresponding O premise (the first premise) that includes that term. Rule 3 is passed because there is a negative conclusion to go with the negative premise. Rule 4 is passed because there is only one negative premise. Rule 5 is passed because the particular premise (the first, an O proposition) is matched by a particular conclusion. Since this syllogism passes all five rules, it is valid; there is no way that all of its premises could be true and its conclusion false.
Both of these have been valid; however, out of the 256 possible syllogisms, most are invalid. Let us take a look at one that violates one or more of the rules:
No P are M.
Some S are not M.
Therefore, all S are P.
Rule 1 is passed: The middle term is distributed in the first (major) premise. However, Rule 2 is violated: The subject term is distributed in the conclusion but not in the corresponding second (minor) premise. It is not necessary to check the other rules; once we know that one of the rules is violated, we know that the argument is invalid. (However, for the curious, Rules 3, 4, and 5 are violated as well: the conclusion is affirmative despite the negative premises, both premises are negative, and a particular premise is paired with a universal conclusion.)
Venn Diagram Tests for Validity
Another value of Venn diagrams is that they provide a nice method for evaluating the validity of a syllogism. Because every categorical syllogism has three categorical terms, the diagrams we use must have three circles:
The idea in diagramming a syllogism is that we diagram each premise and then check to see if the conclusion has been automatically diagrammed. In other words, we determine whether the conclusion must be true, according to the diagram of the premises.
It is important to remember that we never draw a diagram of the conclusion. If the argument is valid, diagramming the premises will automatically provide a diagram of the conclusion. If the argument is invalid, diagramming the premises will not provide a diagram of the conclusion.
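The Venn diagram test can also be mimicked by brute force: a three-circle diagram divides the universe into eight regions, and the argument is valid just in case no way of marking regions as empty or occupied makes the premises true and the conclusion false. The Python sketch below is our own illustration of that idea (names are invented), under the modern assumption that universal statements carry no existential import:

```python
from itertools import product

# The universe of a three-circle diagram has eight regions, one per
# combination of membership in S, M, and P.
REGIONS = list(product([False, True], repeat=3))  # (in_S, in_M, in_P)
IDX = {"S": 0, "M": 1, "P": 2}

def holds(prop, occupied):
    """Is the proposition true when exactly these regions are occupied?"""
    kind, x, y = prop  # e.g., ("A", "S", "M") means "All S are M"
    in_x = lambda r: r[IDX[x]]
    in_y = lambda r: r[IDX[y]]
    if kind == "A":   # All x are y
        return all(in_y(r) for r in occupied if in_x(r))
    if kind == "E":   # No x are y
        return not any(in_x(r) and in_y(r) for r in occupied)
    if kind == "I":   # Some x are y
        return any(in_x(r) and in_y(r) for r in occupied)
    if kind == "O":   # Some x are not y
        return any(in_x(r) and not in_y(r) for r in occupied)

def semantically_valid(premises, conclusion):
    """Try every way of marking regions occupied; look for a countermodel."""
    for occupancy in product([False, True], repeat=len(REGIONS)):
        occupied = [r for r, filled in zip(REGIONS, occupancy) if filled]
        if all(holds(p, occupied) for p in premises) and not holds(conclusion, occupied):
            return False  # premises true, conclusion false: invalid
    return True

# All S is M; No M is P; therefore, no S is P.
print(semantically_valid([("A", "S", "M"), ("E", "M", "P")], ("E", "S", "P")))  # True
# All S is M; All P is M; therefore, all S is P.
print(semantically_valid([("A", "S", "M"), ("A", "P", "M")], ("A", "S", "P")))  # False
```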
Diagramming Syllogisms With Universal Statements
Particular statements are slightly more difficult in these diagrams, so we will start by looking at a syllogism with only universal statements. Consider the following syllogism:
All S is M.
No M is P.
Therefore, no S is P.
Remember, we are only going to diagram the two premises; we will not diagram the conclusion. The easiest way to diagram each premise is to temporarily ignore the circle that is not relevant to the premise. Looking just at the S and M circles, we diagram the first premise like this:
Here is what the diagram for the second premise looks like:
Now we can take those two diagrams and superimpose them, so that we have one diagram
of both premises:
Now we can check whether the argument is valid. To do this, we see if the conclusion is true according to our diagram. In this case our conclusion states that no S is P; is this statement true, according to our diagram? Look at just the S and P circles; you can see that the area between the S and P circles (outlined) is fully shaded. So we have a diagram of the conclusion. It does not matter if the S and P circles have some extra shading in them, so long as the diagram has all the shading needed for the truth of the conclusion.
Let us look at an invalid argument next.
All S is M.
All P is M.
Therefore, all S is P.
Again, we diagram each premise and look to see if we have a diagram of the conclusion. Here is what the diagram of the premises looks like:
Now we check to see whether the conclusion must be true, according to the diagram. Our conclusion states that all S is P, meaning that no unshaded part of the S circle can be outside of the P circle. In this case you can see that we do not have a diagram of the conclusion. Since we have an unshaded part of S outside of P (outlined), the argument is invalid.
Let us do one more example with all universals.
All M are P.
No M is S.
Therefore, no S is P.
Here is how to diagram the premises:
Is the conclusion true in this diagram? In order to know that the conclusion is true, we would need to know that there are no S that are P. However, we see in this diagram that there is room for some S to be P. Therefore, these premises do not guarantee the truth of this conclusion, so the argument is invalid.
Diagramming Syllogisms With Particular Statements
Particular statements (I and O) are a bit trickier, but only a bit. The problem is that when you diagram a particular statement, you put an x in a region. If that region is further divided by a third circle, then the single x will end up in one of those subregions even though we do not know which one it should go in. As a result, we have to adopt a convention to indicate that the x may be in either of them. To do this, we will draw an x in each subregion and connect them with a line to show that we mean the individual might be in either subregion. To see how this works, let us consider the following syllogism.
Some S is not M.
All P are M.
Therefore, some S is not P.
We start by diagramming the first premise:
Then we add the diagram for the second premise:
Notice that in diagramming the second premise, we shaded over one of the linked x’s. This leaves us with just one x. When we look at just the S and P circles, we can see that the remaining x is inside the S circle but outside the P circle.
To see if the argument is valid, we have to determine whether the conclusion must be true according to this diagram. The truth of our conclusion depends on there being at least one S that is not P. Here we have just such an entity: The remaining x is in the S circle but not in the P circle, so the conclusion must be true. This shows that the conclusion validly follows from the premises.
Here is an example of an invalid syllogism.
Some S is M.
Some M is P.
Therefore, some S is P.
Here is the diagram with both premises represented:
Now it seems we have x’s all over the place. Remember, our job now is just to see if the conclusion is already diagrammed when we diagram the premises. The diagram of the conclusion would have to have an x in the region where the S and P circles overlap. We can see that there are two in that region, each linked to an x outside the region. The fact that they are linked to other x’s means that neither x has to be in the middle region; they might both be at the other end of the link. We can show this by carefully erasing one of each pair of linked x’s. In fact, we will erase one x from each linked pair, trying to do so in a way that makes the conclusion false. First we erase the right-hand x from the pair in the S circle. Here is what the diagram looks like now:
Now we erase the left-hand x from the remaining pair. Here is the final diagram:
Notice that there are no x’s remaining in the overlapped region of S and P. This modification of the diagram still makes both premises true, but it also makes the conclusion false. Because this combination is possible, the argument must be invalid.
Here is a more common example of an invalid categorical syllogism:
All S are M.
Some M are P.
Therefore, some S are P.
This argument form looks valid, but it is not. One way to see that is to notice that Rule 1 is violated: The middle term is not distributed in either premise. That is why this argument form represents an example of the common deductive error in reasoning known as the “undistributed middle.”
A perhaps more intuitive way to see why it is invalid is to look at its Venn diagram. Here is
how we diagram the premises:
The two x’s represent the fact that our particular premise states that some M are P and does not state whether or not they are in the S circle, so we represent both possibilities here. Now we simply need to check if the conclusion is necessarily true.
We can see that it is not, because although one x is in the right place, it is linked with another x in the wrong place. In other words, we do not know whether the x in “some M are P” is inside or outside the S boundary. Our conclusion requires that the x be inside the S boundary, but we do not know for certain that it is. Therefore, the argument is invalid. We could, for example, erase the linked x that is inside the S circle, and we would have a diagram that makes both premises true and the conclusion false.
Because this diagram shows that it is possible to make the premises true and the conclusion false, it follows that the argument is invalid.
A final way to understand why this form is invalid is to use the counterexample method and consider that it has the same form as the following argument:
All dogs are mammals.
Some mammals are cats.
Therefore, some dogs are cats.
This argument has the same form and has all true premises and a false conclusion. This counterexample verifies that our Venn diagram test got the right answer. If applied correctly, the Venn diagram test works every time. With this example, all three methods agree that our argument is invalid.
3.7 Categorical Logic: Types of Categorical Arguments
Many examples of deductively valid arguments that we have considered can seem quite
simple, even if the theory and rules behind them can be a bit daunting. You might even
wonder how important it is to study deduction if even silly arguments about the moon
being tasty are considered valid. Remember that this is just a brief introduction to
deductive logic. Deductive arguments can get quite complex and difficult, even though they
are built from smaller pieces such as those we have covered in this chapter. In the same
way, a brick is a very simple thing, interesting in its form, but not much use all by itself. Yet
someone who knows how to work with bricks can make a very complex and sturdy
building from them.
Thus, it will be valuable to consider some of the more complex types of categorical
arguments, sorites and enthymemes. Both of these types of arguments are often
encountered in everyday life.
Sorites
A sorites is a specific kind of argument that strings together several subarguments. The
word sorites comes from the Greek word meaning a “pile” or a “heap”; thus, a sorites-style
argument is a collection of arguments piled together. More specifically, a sorites is any
categorical argument with more than two premises; the argument can then be turned into a
string of categorical syllogisms. Here is one example, taken from Lewis Carroll’s book
Symbolic Logic (1897/2009):
The only animals in this house are cats;
Every animal is suitable for a pet, that loves to gaze at the moon;
When I detest an animal, I avoid it;
No animals are carnivorous, unless they prowl at night;
No cat fails to kill mice;
No animals ever take to me, except what are in this house;
Kangaroos are not suitable for pets;
None but carnivora kill mice;
I detest animals that do not take to me;
Animals, that prowl at night, always love to gaze at the moon.
Therefore, I always avoid kangaroos. (p. 124)
Figuring out the logic in such complex sorites can be challenging and fun. However, it is
easy to get lost in sorites arguments. It can be difficult to keep all the premises straight and
to make sure the appropriate relationships are established between each premise in such a
way that, ultimately, the conclusion follows.
Carroll’s sorites sounds ridiculous, but as discussed earlier in the chapter, many of us
develop complex arguments in daily life that use the conclusion of an earlier argument as
the premise of the next argument. Here is an example of a relatively short one:
All of my friends are going to the party.
No one who goes to the party is boring.
People that are not boring interest me.
Therefore, all of my friends interest me.
Here is another example that we might reason through when thinking about biology:
All lizards are reptiles.
No reptiles are mammals.
Only mammals nurse their young.
Therefore, no lizards nurse their young.
There are many examples like these. It is possible to break them into smaller syllogistic
subarguments as follows:
All lizards are reptiles.
No reptiles are mammals.
Therefore, no lizards are mammals.
No lizards are mammals.
Only mammals nurse their young.
Therefore, no lizards nurse their young.
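The validity of the lizard sorites can also be checked by brute force. Since every statement in it is universal, it is enough to ask whether any single kind of individual (any combination of the four properties) could satisfy all three premises while violating the conclusion. The Python sketch below is our own illustration, not from the text:

```python
from itertools import product

def countermodel_exists():
    """Search for a kind of individual that satisfies every premise
    while breaking the conclusion 'No lizards nurse their young.'"""
    for lizard, reptile, mammal, nurses in product([False, True], repeat=4):
        satisfies_premises = (
            (not lizard or reptile)        # All lizards are reptiles.
            and not (reptile and mammal)   # No reptiles are mammals.
            and (not nurses or mammal)     # Only mammals nurse their young.
        )
        breaks_conclusion = lizard and nurses
        if satisfies_premises and breaks_conclusion:
            return True
    return False

print("valid" if not countermodel_exists() else "invalid")  # valid
```

No countermodel exists: any nursing lizard would have to be a reptile (first premise) and a mammal (third premise) at once, which the second premise forbids.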
Breaking arguments into components like this can help improve the clarity of the overall
reasoning. If a sorites gets too long, we tend to lose track of what is going on. This is part of
what can make some arguments hard to understand. When constructing your own
arguments, therefore, you should beware of bunching premises together unnecessarily. Try
to break a long argument into a series of smaller arguments instead, including
subarguments, to improve clarity.
Enthymemes
While sorites are sets of arguments strung together into one larger argument, a related
argument form is known as an enthymeme, a syllogistic argument that omits either a
premise or a conclusion. There are also many nonsyllogistic arguments that leave out premises or conclusions; these are sometimes called enthymemes as well, but here we will only consider enthymemes based on syllogisms.
A good question is why the arguments are missing premises. One reason that people may
leave a premise out is that it is considered to be too obvious to mention. Here is an
example:
All dolphins are mammals.
Therefore, all dolphins are animals.
Here the suppressed premise is “All mammals are animals.” Such a statement probably
does not need to be stated because it is common knowledge, and the reader knows how to
fill it in to get to the conclusion. Technically speaking, we are said to “suppress” the premise
that does not need to be stated.
Sometimes people even leave out conclusions if they think that the inference involved is so
clear that no one needs the conclusion stated explicitly. Arguments with unstated
conclusions are considered enthymematic as well. Let us suppose a baseball fan complains,
“You have to be rich to get tickets to game 7, and none of my friends is rich.” What is the
implied conclusion? Here is the argument in standard form:
Everyone who can get tickets to game 7 is rich.
None of my friends is rich.
Therefore, ???
In this case we may validly infer that none of the fan’s friends can get tickets to game 7.
To be sure, you cannot always assume your audience has the required background
knowledge, and you must attempt to evaluate whether a premise or conclusion does need
to be stated explicitly. Thus, if you are talking about math to professional physicists, you do
not need to spell out precisely what the hypotenuse of a right triangle is. However, if you are
talking to third graders, that is certainly not a safe assumption. Determining the
background knowledge of those with whom one is talking—and arguing—is more of an art
than a science.
Validity in Complex Arguments
Recall that a valid argument is one whose premises guarantee the truth of the conclusion.
Sorites are illustrations of how we can “stack” smaller valid arguments together to make
larger valid arguments. Doing so can be as complicated as building a cathedral from bricks,
but so long as each piece is valid, the structure as a whole will be valid.
How do we begin to examine a complex argument’s validity? Let us start by looking at
another example of sorites from Lewis Carroll’s book Symbolic Logic (1897/2009):
Babies are illogical.
Nobody is despised who can manage a crocodile.
Illogical persons are despised.
Therefore, no babies can manage a crocodile. (p. 112)
Is this argument valid? We can see that it is by breaking it into a pair of syllogisms. Start by
considering the first and third premises. We will rewrite them slightly to show the All that
Carroll has assumed. With those two premises, we can build the following valid syllogism:
All babies are illogical.
All illogical persons are despised.
Therefore, all babies are despised.
Using the tools from this chapter (the rules, Venn diagrams, or just by thinking it through
carefully), we can check that the syllogism is valid. Now we can use the conclusion of our
syllogism along with the remaining premise and conclusion from the original argument to
construct another syllogism.
All babies are despised.
No despised persons can manage a crocodile.
Therefore, no babies can manage a crocodile.
Again, we can check that this syllogism is valid using the tools from this chapter. Since both
of these arguments are valid, the string that combines them is valid as well. Therefore, the
original argument (the one with three premises) is valid.
This process is somewhat like how we might approach adding a very long list of numbers. If
you need to add a list of 100 numbers (suppose you are checking a grocery bill), you can do
it by adding them together in groups of 10, and then adding the subtotals together. As long
as you have done the addition correctly at each stage, your final answer will be the correct
total. This is one reason validity is important. It allows us to have confidence in complex
arguments by examining the smaller arguments from which they are, or can be, built. If one
of the smaller arguments was not valid, then we could not have complete confidence in the
larger argument.
But what about soundness? What use is the argument about babies managing crocodiles
when we know that babies are not generally despised? Again, let us make a comparison to
adding up your grocery bill. Arithmetic can tell you if your bill is added correctly, but it
cannot tell you if the prices are correct or if the groceries are really worth the advertised
price. Similarly, logic can tell you whether a conclusion validly follows from a set of
premises, but it cannot generally tell you whether the premises are true, false, or even
interesting. By themselves, random deductive arguments are as useful as sums of random
numbers. They may be good practice for learning a skill, but they do not tell us much about
the world unless we can somehow verify that their premises are, in fact, true. To learn
about the world, we need to apply our reasoning skills to accurate facts (usually outside of
arithmetic and logic) known to be true about the world.
This is why logicians are not as concerned with soundness as they are with validity, and
why a mathematician is only concerned with whether you added correctly, and not with
whether the prices were correctly recorded. Logic and mathematics give us skills to apply
valid reasoning to the information around us. It is up to us, and to other fields, to make sure
the information that we use in the premises is correct.
4.1 Basic Concepts in Propositional Logic
Propositional logic aims to make the concept of validity formal and precise. Remember
from Chapter 3 that an argument is valid when the truth of its premises guarantees the
truth of its conclusion. Propositional logic demonstrates exactly why certain types of
premises guarantee the truth of certain types of conclusions. It does this by breaking down
the forms of complex claims into their simple component parts. For example, consider the
following argument:
Either the maid or the butler did it.
The maid did not do it.
Therefore, the butler did it.
This argument is valid, but not because of anything about the maid or butler. It is valid
because of the way that the sentences combine words like or and not to make a logically
valid form. Formal logic is not concerned about the content of arguments but with their
form. Recall from Chapter 3, Section 3.2, that an argument’s form is the way it combines its
component parts to make an overall pattern of reasoning. In this argument, the component
parts are the small sentences “the butler did it” and “the maid did it.” If we give those parts
the names P and Q, then our argument has the form:
P or Q.
Not P.
Therefore, Q.
Note that the expression “not P” means “P is not true.” In this case, since P is “the butler did
it,” it follows that “not P” means “the butler did not do it.” An inspection of this form should
reveal it is logically valid reasoning.
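This form (traditionally called disjunctive syllogism) can also be verified mechanically by checking every combination of truth values for P and Q. The short Python sketch below is our own illustration; the helper name is invented:

```python
from itertools import product

def form_is_valid(premises, conclusion):
    """A two-variable form is valid if no row of the truth table makes
    every premise true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(pr(p, q) for pr in premises))

# P or Q; not P; therefore, Q.
print(form_is_valid([lambda p, q: p or q,   # P or Q.
                     lambda p, q: not p],   # Not P.
                    lambda p, q: q))        # Therefore, Q.
```

Since the only row where both premises come out true is the one where P is false and Q is true, and the conclusion holds there, the form is valid.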
As the name suggests, propositional logic deals with arguments made up of propositions,
just as categorical logic deals with arguments made up of categories (see Chapter 3). In
philosophy, a proposition is the meaning of a claim about the world; it is what that claim
asserts. We will refer to the subject of this chapter as “propositional logic” because that is
the most common terminology in the field. However, it is sometimes called “sentence logic.”
The principles are the same no matter which terminology we use, and in the rest of the
chapter we will frequently talk about P and Q as representing sentences (or “statements”)
as well.
The Value of Formal Logic
This process of making our reasoning more precise by focusing on an argument’s form has
proved to be enormously useful. In fact, formal logic provides the theoretical
underpinnings for computers. Computers operate on what are called “logic circuits,” and
computer programs are based on propositional logic. Computers are able to understand
our commands and always do exactly what they are programmed to do because they use
formal logic. In A Closer Look: Alan Turing and How Formal Logic Won the War, you will
see how the practical applications of logic changed the course of history.
Another value of formal logic is that it adds efficiency, precision, and clarity to our
language. Being able to examine the structure of people’s statements allows us to clarify the
meanings of complex sentences. In doing so, it creates an exact, structured way to assess
reasoning and to discern between formally valid and invalid arguments.
A Closer Look: Alan Turing and How Formal Logic Won the War
The Enigma cipher machine.
Science and Society/SuperStock
An Enigma cipher machine, which was widely used by the Nazi Party to encipher and
decipher secret military messages during World War II.
The idea of a computing machine was conceived over the last few centuries by great
thinkers such as Gottfried Leibniz, Blaise Pascal, and Charles Babbage. However, it was not
until the first half of the 20th century that philosophers, logicians, mathematicians, and
engineers were actually able to create “thinking machines” or “electronic brains” (Davis,
2000).
One pioneer of the computer age was British mathematician, philosopher, and logician Alan
Turing. He came up with the concept of a Turing machine, an electronic device that takes
input in the form of zeroes and ones, manipulates it according to an algorithm, and creates
a new output (BBC News, 1999).
Computers themselves were invented by creating electric circuits that do basic logical
operations that you will learn about in this chapter. These electric circuits are called “logic
gates” (see Figure 4.2 later in the chapter). By turning logic into circuits, basic “thinking”
could be done with a series of fast electrical impulses.
Using logical brilliance, Turing was able to design early computers for use during World
War II. The British used these early computers to crack the Nazis’ very complex Enigma
code. The ability to know the German plans in advance gave the Allies a huge advantage.
Prime Minister Winston Churchill even said to King George VI, “It was thanks to Ultra [one
of the computers used] that we won the war” (as cited in Shaer, 2012).
Statement Forms
A cartoon showing that if one apple plus one apple equals two apples, then 1 plus 1 equals
1 1.
Bill Long/Cartoonstock
Formal logic uses symbols and statement forms to clarify an argument’s reasoning.
As we have discussed, propositional logic clarifies formal reasoning by breaking down the
forms of complex claims into the simple parts of which they are composed. It does this by
using symbols to represent the smaller parts of complex sentences and showing how the
larger sentence results from combining those parts in a certain way. By doing so, formal
logic clarifies the argument’s form, or the pattern of reasoning it uses.
Consider what this looks like in mathematics. If you have taken a course in algebra, you will
remember statements such as the following:
x+y=y+x
This statement is true no matter what we put for x and for y. That is why we call x and y
variables; they do not represent just one number but all numbers. No matter what specific
numbers we put in, we will still get a true statement, like the following:
5+3=3+5
7+2=2+7
1,235 + 943 = 943 + 1,235
By replacing the variables in the general equation with these specific values, we get
instances (as discussed in Chapter 3) of that general truth. In other words, 5 + 3 = 3 + 5 is
an instance of the general statement x + y = y + x. One does not even need to use a
calculator to know that the last statement of the three is true, for its truth is not based on
the specific numbers used but on the general form of the equation. Formal logic works in
the exact same way.
Take the statement “If you have a dog, then you have a dog or you have a cat.” This
statement is true, but its truth does not depend on anything about dogs or cats; its truth is
based on its logical form—the way the sentence is structured. Here are two other
statements with the same logical form: “If you are a miner, then you are a miner or you are
a trapper” and “If you are a man, then you are a man or a woman.” These statements are all
true not because of their content, but because of their shared logical form.
To help us see exactly what this form is, propositional logic uses variables to represent the
different sentences within this form. Just as algebra uses letters like x and y to represent
numbers, logicians use letters like P and Q to represent sentences. These letters are
therefore called sentence variables.
The chief difference between propositional and categorical logic is that, in categorical logic
(Chapter 3), variables (like M and S) are used to represent categories of things (like dogs
and mammals), whereas variables in propositional logic (like P and Q) represent whole
sentences (or propositions).
In our current example, propositional logic enables us to take the statement “If you have a
dog, then you have a dog or you have a cat” and replace the simple sentences “You have a
dog” and “You have a cat,” with the variables P and Q, respectively (see Figure 4.1). The
result, “If P, then P or Q,” is known as the general statement form. Our specific sentence, “If
you have a dog, then you have a dog or you have a cat,” is an instance of this general form.
Our other example statements—”If you are a miner, then you are a miner or you are a
trapper” and “If you are a man, then you are a man or a woman”—are other instances of
that same statement form, “If P, then P or Q.” We will talk about more specific forms in the
next section.
Figure 4.1: Finding the form
In this instance of the statement form, you can see that P and Q relate to the propositions “you have a dog” and “you have a cat,” respectively.
An illustration showing a sentence and how that sentence translates into the statement form. The sentence is “If you have a dog, then you have a dog or you have a cat.” The form is “If P, then P or Q,” where P means “you have a dog” and Q means “you have a cat.”
At first glance, propositional logic can seem intimidating because it can be very
mathematical in appearance, and some students have negative associations with math. We
encourage you to take each section one step at a time and see the symbols as tools you can
use to your advantage. Many students actually find that logic helps them because it
presents symbols in a friendlier manner than in math, which can then help them warm up
to the use of symbols in general.
4.2 Logical Operators
In the prior section, we learned about what constitutes a statement form in propositional
logic: a complex sentence structure with propositional variables like P and Q. In addition to
the variables, however, there are other words that we used in representing forms, words
like and and or. These terms, which connect the variables together, are called logical
operators, also known as connectives or logical terms.
Logicians like to express formal precision by replacing English words with symbols that
represent them. Therefore, in a statement form, logical operators are represented by
symbols. The resulting symbolic statement forms are precise, brief, and clear. Expressing
sentences in terms of such forms allows logic students more easily to determine the
validity of arguments that include them. This section will analyze some of the most
common symbols used for logical operators.
Conjunction
Those of you who have heard the Schoolhouse Rock! song “Conjunction Junction” (what’s
your function?)—or recall past English grammar lessons—will recognize that a conjunction
is a word used to connect, or conjoin, sentences or concepts. By that definition, it refers to
words like and, but, and or. Logic, however, uses the word conjunction to refer only to and
sentences. Accordingly, a conjunction is a compound statement in which the smaller
component statements are joined by and.
For example, the conjunction of “roses are red” and “violets are blue” is the sentence “roses
are red and violets are blue.” In logic, the symbol for and is an ampersand (&). Thus, the
general form of a conjunction is P & Q. To get a specific instance of a conjunction, all you
have to do is replace the P and the Q with any specific sentences. Here are some examples:
P                 Q                          P & Q
Joe is nice.      Joe is tall.               Joe is nice, and Joe is tall.
Mike is sad.      Mike is lonely.            Mike is sad, and Mike is lonely.
Winston is gone.  Winston is not forgotten.  Winston is gone and not forgotten.
Notice that the last sentence in the example does not repeat “Winston is” before
“forgotten.” That is because people tend to abbreviate things. Thus, if we say “Jim and Mike
are on the team,” this is actually an abbreviation for “Jim is on the team, and Mike is on the
team.”
The use of the word and has an effect on the truth of the sentence. If we say that P & Q is
true, it means that both P and Q are true. For example, suppose we say, “Joe is nice and Joe
is tall.” This means that he is both nice and tall. If he is not tall, then the statement is false. If
he is not nice, then the statement is false as well. He has to be both for the conjunction to be
true. The truth of a complex statement thus depends on the truth of its parts. Whether a
proposition is true or false is known as its truth value: The truth value of a true sentence is
simply the word true, while the truth value of a false sentence is the word false.
To examine how the truth of a statement’s parts affects the truth of the whole statement,
we can use a truth table. In a truth table, each variable (in this case, P and Q) has its own
column, in which all possible truth values for those variables are listed. On the right side of
the truth table is a column for the complex sentence(s) (in this case the conjunction P & Q)
whose truth we want to test. This last column shows the truth value of the statement in
question based on the assigned truth values listed for the variables on the left. In other
words, each row of the truth table shows that if the letters (like P and Q) on the left have
these assigned truth values, then the complex statements on the right will have these
resulting truth values (in the complex column).
Here is the truth table for conjunction:
P | Q | P & Q
T (Joe is nice.) | T (Joe is tall.) | T (Joe is nice, and Joe is tall.)
T (Joe is nice.) | F (Joe is not tall.) | F (It is not true that Joe is nice and tall.)
F (Joe is not nice.) | T (Joe is tall.) | F (It is not true that Joe is nice and tall.)
F (Joe is not nice.) | F (Joe is not tall.) | F (It is not true that Joe is nice and tall.)
What the first row means is that if the statements P and Q are both true, then the
conjunction P & Q is true as well. The second row means that if P is true and Q is false, then
P & Q is false (because P & Q means that both statements are true). The third row means
that if P is false and Q is true, then P & Q is false. The final row means that if both
statements are false, then P & Q is false as well.
A shorter method for representing this truth table, in which T stands for “true” and F
stands for “false,” is as follows:
P | Q | P & Q
T | T | T
T | F | F
F | T | F
F | F | F
The P and Q columns represent all of the possible truth combinations, and the P & Q
column represents the resulting truth value of the conjunction. Again, within each row, on
the left we simply assume a set of truth values (for example, in the second row we assume
that P is true and Q is false), then we determine what the truth value of P & Q should be to
the right. Therefore, each row is like a formal “if–then” statement: If P is true and Q is false,
then P & Q will be false.
Truth tables highlight why propositional logic is also called truth-functional logic. It is
truth-functional because, as truth tables demonstrate, the truth of the complex statement
(on the right) is a function of the truth values of its component statements (on the left).
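This truth-functional behavior can be checked mechanically. As a minimal sketch in Python (the function name is ours, not from the text), conjunction corresponds directly to the built-in `and`:

```python
# Conjunction (P & Q) is true only when both conjuncts are true,
# which matches Python's built-in `and` on truth values.
def conjunction(p, q):
    return p and q

# Print all four rows of the truth table for P & Q.
for p in (True, False):
    for q in (True, False):
        print(p, q, conjunction(p, q))
```

The loop enumerates exactly the four rows of the truth table, with the resulting truth value of the conjunction in the last position.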
Everyday Logic: The Meaning of But
Like the word and, the word but is also a conjunction. If we say, “Mike is rich, but he’s
mean,” this seems to mean three things: (1) Mike is rich, (2) Mike is mean, and (3) these
things are in contrast with each other. This third part, however, cannot be measured with
simple truth values. Therefore, in terms of logic, we simply ignore such conversational
elements (like point 3) and focus only on the truth conditions of the sentence. Therefore,
strange as it may seem, in propositional logic the word but is taken to be a synonym for
and.
Disjunction
Disjunction is just like conjunction except that it involves statements connected with an or
(see Figure 4.2 for a helpful visualization of the difference). Thus, a statement like “You can
either walk or ride the bus” is the disjunction of the statements “You can walk” and “you
can ride the bus.” In other words, a disjunction is an or statement: P or Q. In logic the
symbol for or is ∨. An or statement, therefore, has the form P ∨ Q.
Here are some examples:
P | Q | P ∨ Q
Mike is tall. | Doug is rich. | Mike is tall, or Doug is rich.
You can complain. | You can change things. | You can complain, or you can change things.
The maid did it. | The butler did it. | Either the maid or the butler did it.
Notice that, as in the conjunction example, the last example abbreviates one of the clauses
(in this case the first clause, “the maid did it”). It is common in natural (nonformal)
languages to abbreviate sentences in such ways; the compound sentence actually has two
complete component sentences, even if they are not stated completely. The nonabbreviated
version would be “Either the maid did it, or the butler did it.”
The truth table for disjunction is as follows:
P | Q | P ∨ Q
T | T | T
T | F | T
F | T | T
F | F | F
Note that or statements are true whenever at least one of the component sentences (the
“disjuncts”) is true. The only time an or statement is false is when P and Q are both false.
Figure 4.2: Simple logic circuits
These diagrams of simple logic circuits (recall the reference to these circuits in A Closer
Look: Alan Turing and How Formal Logic Won the War) help us visualize how the rules for
conjunctions (AND gate) and disjunctions (OR gate) work. With the AND gate, there is only
one path that will turn on the light, but with the OR gate, there are two paths to
illumination.
Two diagrams of basic electrical circuits. The first is a basic AND gate, meaning that if both
P and Q are true, the gates will close, completing the circuit, and the light will turn on. Both
the P and Q gates must be closed for the light to turn on. In the basic OR gate, either P or Q
must be true for the circuit to be complete and the light to turn on. In the OR diagram, there
are two paths to the light, the P path and the Q path; in the AND diagram, there is only one
path that passes through both P and Q.
Everyday Logic: Inclusive Versus Exclusive Or
The top line of the truth table for disjunctions may seem strange to some. Some think that
the word or is intended to allow only one of the two sentences to be true. They therefore
argue for an interpretation of disjunction called exclusive or. An exclusive or is just like the
or in the truth table, except that it makes the top row (the one in which P and Q are both
true) false.
One example given to justify this view is that of a waiter asking, “Do you want soup or
salad?” If you want both, the answer should not be “yes.” Some therefore suggest that the
English or should be understood in the exclusive sense.
However, this example can be misleading. The waiter is not asking “Is the statement ‘do
you want soup or salad’ true?” The waiter is asking you to choose between the two options.
When we ask for the truth value of a sentence of the form P or Q, on the other hand, we are
asking whether the sentence is true. Consider it this way: If you wanted both soup and
salad, the answer to the waiter’s question would not be “no,” but it would be if you were
using an exclusive or.
When we see the connective or used in English, it is generally being used in the inclusive
sense (so called because it includes cases in which both disjuncts are true). Suppose that
your tax form states, “If you made more than $20,000, or you are self-employed, then fill
out form 201-Z.” Suppose that you made more than $20,000, and you are self-employed—
would you fill out that form? You should, because the standard or that we use in English
and in logic is the inclusive version. Therefore, in logic we understand the word or in its
inclusive sense, as seen in the truth table.
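The difference between the two senses of or can be made concrete in code. In this Python sketch (the function names are ours), inclusive or is the built-in `or`, while exclusive or requires that the two disjuncts differ:

```python
def inclusive_or(p, q):
    # True when at least one disjunct is true (the logician's ∨).
    return p or q

def exclusive_or(p, q):
    # True when exactly one disjunct is true.
    return p != q

# The two senses differ only on the row where both disjuncts are true.
print(inclusive_or(True, True))   # True
print(exclusive_or(True, True))   # False
```

On the other three rows of the truth table, the two functions agree; only the top row separates them, just as described above.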
Negation
The simplest logical symbol we use on sentences simply negates a claim. Negation is the act
of asserting that a claim is false. For every statement P, the negation of P states that P is
false. It is symbolized ~P and pronounced “not P.” Here are some examples:
P | ~P
Snow is white. | Snow is not white.
I am happy. | I am not happy.
Either John or Mike got the job. | Neither John nor Mike got the job.
Since ~P states that P is not true, its truth value is the opposite of P’s truth value. In other
words, if P is true, then ~P is false; if P is false then ~P is true. Here, then, is the truth table:
P | ~P
T | F
F | T
Everyday Logic: The Word Not
Sometimes just putting the word not in front of the verb does not quite capture the
meaning of negation. Take the statement “Jack and Jill went up the hill.” We could change it
to “Jack and Jill did not go up the hill.” This, however, seems to mean that neither Jack nor
Jill went up the hill, but the meaning of negation only requires that at least one did not go
up the hill. The simplest way to correctly express the negation would be to write “It is not
true that Jack and Jill went up the hill” or “It is not the case that Jack and Jill went up the
hill.”
Similar problems affect the negation of claims such as “John likes you.” If John does not
know you, then this statement is not true. However, if we put the word not in front of the
verb, we get “John does not like you.” This seems to imply that John dislikes you, which is
not what the negation means (especially if he does not know you). Therefore, logicians will
instead write something like, “It is not the case that John likes you.”
Conditional
A conditional is an “if–then” statement. An example is “If it is raining, then the street is
wet.” The general form is “If P, then Q,” where P and Q represent any two claims. Within a
conditional, P—the part that comes between if and then—is called the antecedent; Q—the
part after then—is called the consequent. A conditional statement is symbolized P → Q and
pronounced “if P, then Q.”
Here are some examples:
P | Q | P → Q
You are rich. | You can buy a boat. | If you are rich, then you can buy a boat.
You are not satisfied. | You can return the product. | If you are not satisfied, then you can return the product.
You need bread or milk. | You should go to the market. | If you need bread or milk, then you should go to the market.
Everyday Logic: Other Instances of Conditionals
People use conditionals frequently in real life. Think of all the times someone has said, “Get
some rest if you are tired” or “You don’t have to do something if you don’t want to.”
Sometimes conditionals are expressed in other ways. For example, sometimes people leave
out the then. They say things like, “If you are hungry, you should eat.” In many of these
cases, we have to be clever in determining what P and Q are.
Sometimes people even put the consequent first: for example, “You should eat if you are
hungry.” This statement means the same thing as “If you are hungry, then you should eat”;
it is just ordered differently. In both cases the antecedent is what comes after the if in the
English sentence (and prior to the → in the logical form). Thus, “If P then Q” is translated “P
→ Q,” and “P if Q” is translated “Q → P.”
Formulating the truth table for conditional statements is somewhat tricky. What does it
take for a conditional statement to be true? This is actually a controversial issue within
philosophy. It is actually easier to think of it as: What does it mean for a conditional
statement to be false?
Suppose Mike promises, “If you give me $5, then I will wash your car.” What would it take
for this statement to be false? Under what conditions, for example, could you accuse Mike
of breaking his promise?
It seems that the only way for Mike to break his promise is if you give him the $5, but he
does not wash the car. If you give him the money and he washes the car, then he kept his
word. If you did not give him the money, then his word was simply not tested (with no
payment on your part, he is under no obligation). If you do not pay him, he may choose to
wash the car anyway (as a gift), or he may not; neither would make him a liar. His promise
is only broken in the case in which you give him the money but he does not wash it.
Therefore, in general, we call conditional statements false only in the case in which the
antecedent is true and the consequent is false (in this case, if you give him the money, but
he still does not wash the car). This results in the following truth table:
P | Q | P → Q
T | T | T
T | F | F
F | T | T
F | F | T
Some people question the bottom two lines. Some feel that the truth value of those rows
should depend on whether he would have washed the car if you had paid him. However,
this sophisticated hypothetical is beyond the power of truth-functional logic. The truth
table is as close as we can get to the meaning of "if . . . then . . ."; in other words, it is the
best we can do with the tool at hand.
Finally, some feel that the third row should be false. That, however, would mean that if
Mike chose to wash the car of a person who had no money to give him, he would thereby
have broken his promise. That does not appear to be a broken promise, however, but only
an act of generosity on his part. It therefore does not appear that his initial statement "If
you give me $5, then I will wash your car" commits him to washing the car only if you give
him $5. That commitment is instead a variation on the conditional theme known as "only if."
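The conditional's truth table can also be captured by a formula: P → Q is false only when P is true and Q is false, which is the same as saying "not P, or Q." A minimal Python sketch (the function name is ours):

```python
def conditional(p, q):
    # P → Q is false only when the antecedent is true and the
    # consequent is false; equivalently, (not P) or Q.
    return (not p) or q

# Print all four rows of the truth table for P → Q.
for p in (True, False):
    for q in (True, False):
        print(p, q, conditional(p, q))
```

Note that the two rows with a false antecedent both come out true, matching the bottom two lines of the table above.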
Only If
So what does it mean to say “P only if Q”? Let us take a look at another example: “You can
get into Harvard only if you have a high GPA.” This means that a high GPA is a requirement
for getting in. Note, however, that that is not the same as saying, “You can get into Harvard
if you have a high GPA,” for there might be other requirements as well, like having high test
scores, good letters of recommendation, and a good essay.
Thus, the statement “You can get into Harvard only if you have a high GPA” means:
You can get into Harvard → You have a high GPA
However, this does not mean the same thing as “You have a high GPA → You can get into
Harvard.”
In general, “P only if Q” is translated P → Q. Notice that this is the same as the translation of
“If P, then Q.” However, it is not the same as “P if Q,” which is translated Q → P. Here is a
summary of the rules for these translations:
P only if Q is translated: P → Q
P if Q is translated: Q → P
Thus, “P if Q” and “P only if Q” are the converse of each other. Recall the discussion of
conversion in Chapter 3; the converse is what you get when you switch the order of the
elements within a conditional or categorical statement.
To say that P → Q is true is to assert that the truth of Q is necessary for the truth of P. In
other words, Q must be true for P to be true. To say that P → Q is true is also to say that the
truth of P is sufficient for the truth of Q. In other words, knowing that P is true is enough
information to conclude that Q is also true.
In our earlier example, we saw that having a high GPA is necessary but not sufficient for
getting into Harvard, because one must also have high test scores and good letters of
recommendation. Further discussion of the concepts of necessary and sufficient conditions
will occur in Chapter 5.
In some cases P is both a necessary and a sufficient condition for Q. This is called a
biconditional.
Biconditional
A biconditional asserts an “if and only if” statement. It states that if P is true, then Q is true,
and if Q is true, then P is true. For example, if I say, “I will go to the party if you will,” this
means that if you go, then I will too (P → Q), but it does not rule out the possibility that I
will go without you. To rule out that possibility, I could state “I will go to the party only if
you will” (Q → P). If we want to assert both conditionals, I could say, “I will go to the party if
and only if you will.” This is a biconditional.
The statement “P if and only if Q” literally means “P if Q and P only if Q.” Using the
translation methods for if and only if, this is translated “(Q → P) & (P → Q).” Because the
biconditional makes the arrow between P and Q go both ways, it is symbolized: P ↔ Q.
Here are some examples:
P | Q | P ↔ Q
You can go to the party. | You are invited. | You can go to the party if and only if you are invited.
You will get an A. | You get above a 92%. | You will get an A if and only if you get above a 92%.
You should propose. | You are ready to marry her. | You should propose if and only if you are ready to marry her.
There are other phrases that people sometimes use instead of “if and only if.” Some people
say “just in case” or something else like it. Mathematicians and philosophers even use the
abbreviation iff to stand for “if and only if.” Sometimes people even simply say “if” when
they really mean “if and only if.” One must be clever to understand what people really mean
when they speak in sloppy, everyday language. When it comes to precision, logic is perfect;
English is fuzzy!
Here is how we do the truth table: For the biconditional P ↔ Q to be true, it must be the
case that if P is true then Q is true and vice versa. Therefore, one cannot be true when the
other one is false. In other words, they must both have the same truth value. That means
the truth table looks as follows:

P | Q | P ↔ Q
T | T | T
T | F | F
F | T | F
F | F | T

The biconditional is true in exactly those cases in which P and Q have the same truth value.
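In code, the biconditional is simply equality of truth values; a brief Python sketch (the function name is ours):

```python
def biconditional(p, q):
    # P ↔ Q is true exactly when P and Q have the same truth value.
    return p == q

# Print all four rows of the truth table for P ↔ Q.
for p in (True, False):
    for q in (True, False):
        print(p, q, biconditional(p, q))
```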
4.3 Symbolizing Complex Statements
We have learned the basic logical operators and their corresponding symbols and truth tables. However, these basic symbols also allow us to analyze much more complicated statements. Within the statement form P → Q, what if either P or Q itself is a complex statement? For example:

P | Q | P → Q
You are hungry or thirsty. | We should go to the diner. | If you are hungry or thirsty, then we should go to the diner.

In this example, the antecedent, P, states, "You are hungry or thirsty," which can be symbolized H ∨ T, using the letter H for "You are hungry" and T for "You are thirsty." If we use the letter D for "We should go to the diner," then the whole statement can be symbolized (H ∨ T) → D.

Notice the use of parentheses. Parentheses help specify the order of operations, just like in arithmetic. For example, how would you evaluate the quantity 3 + (2 × 5)? You would execute the mathematical operation within the parentheses first. In this case you would first multiply 2 and 5 and then add 3, getting 13. You would not add the 3 and the 2 first and then multiply by 5 to get 25. This is because you know to evaluate what is within the parentheses first.

It is the exact same way with logic. In the statement (H ∨ T) → D, because of the parentheses, we know that this statement is a conditional (not a disjunction). It is of the form P → Q, where P is replaced by H ∨ T and Q is replaced by D.
Here is another example:

N & S | G | (N & S) → G
He is nice and smart. | You should get to know him. | If he is nice and smart, then you should get to know him.

This example shows a complex way to make a sentence out of three component sentences. N is "He is nice," S is "he is smart," and G is "you should get to know him." Here is another:

R | S & C | R → (S & C)
You want to be rich. | You should study hard and go to college. | If you want to be rich, then you should study hard and go to college.

If R is "You want to be rich," S is "You should study hard," and C is "You should go to college," then the whole statement in this final example, symbolized R → (S & C), means "If you want to be rich, then you should study hard and go to college."

Complex statements can be created in this manner for every form. Take the statement (~A & B) ∨ (C → ~D). This statement has the general form of a disjunction. It has the form P ∨ Q, where P is replaced with ~A & B, and Q is replaced with C → ~D.
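The role of parentheses can be made concrete in code, where grouping works the same way. This minimal Python sketch (the function names and the particular truth values are our own choices for illustration) evaluates (~A & B) ∨ (C → ~D) for one assignment:

```python
def implies(p, q):
    # P → Q is false only when P is true and Q is false.
    return (not p) or q

def evaluate(a, b, c, d):
    # (~A & B) ∨ (C → ~D): the parentheses tell us to evaluate each
    # disjunct first, then combine the results with `or`.
    return ((not a) and b) or implies(c, not d)

print(evaluate(True, True, True, True))  # prints False
```

With A, C, and D all true, the left disjunct ~A & B is false and the right disjunct C → ~D is false, so the whole disjunction is false.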
Everyday Logic: Complex Statements in Ordinary Language
It is not always easy to determine how to translate complex, ordinary language statements into logic; one sometimes has to pick up on clues within the statement.
For instance, notice in general that neither P nor Q is translated ~(P ∨ Q). This is because P ∨ Q means that either one is true, so ~(P ∨ Q) means that neither one is true. It happens to be equivalent to saying ~P & ~Q (we will talk about logical equivalence later in this chapter).
Here are some more complex examples:

Statement | Translation
If you don’t eat spinach, then you will neither be big nor strong. | ~S → ~(B ∨ G)
Either he is strong and brave, or he is both reckless and foolish. | (S & B) ∨ (R & F)
Come late and wear wrinkled clothes only if you don’t want the job. | (L & W) → ~J
He is strong and brave, and if he doesn’t like you, he will let you know. | (S & B) & (~L → K)
Truth Tables With Complex Statements
We have managed to symbolize complex statements by seeing how they are systematically constructed out of their parts. Here we use the same principle to create truth tables that allow us to find the truth values of complex statements based on the truth values of their parts. It will be helpful to start with a summary of the truth values of sentences constructed with the basic truth-functional operators:

P | Q | ~P | P & Q | P ∨ Q | P → Q
T | T | F | T | T | T
T | F | F | F | T | F
F | T | T | F | T | T
F | F | T | F | F | T

The truth values of more complex statements can be discovered by applying these basic formulas one at a time. Take a complex statement like (A ∨ B) → (A & B). Do not be intimidated by its seemingly complex form; simply take it one operator at a time. First, notice the main form of the statement: It is a conditional (we know this because the other operators are within parentheses). It therefore has the form P → Q, where P is "A ∨ B" and Q is "A & B."
The antecedent of the conditional is A ∨ B; the consequent is A & B. The way to find the truth values of such statements is to start inside the parentheses and find those truth values first, and then work our way out to the main operator, in this case →.
Here is the truth table for these components:

A | B | A ∨ B
T | T | T
T | F | T
F | T | T
F | F | F

Now we take the truth tables for these components to create the truth table for the overall conditional:

A | B | A ∨ B | A & B | (A ∨ B) → (A & B)
T | T | T | T | T
T | F | T | F | F
F | T | T | F | F
F | F | F | F | T

In this way the truth values of very complex statements can be determined from the values of their parts. We may refer to these columns (in this case A ∨ B and A & B) as helper columns, because they are there just to assist us in determining the truth values for the more complex statement of which they are a part.
Here is another one: (A & ~B) → ~(A ∨ B). This one is also a conditional, where the antecedent is A & ~B and the consequent is ~(A ∨ B). We do these components first because they are inside parentheses. However, to find the truth table for A & ~B, we will have to fill out the truth table for ~B first (as a helper column).

A | B | ~B | A & ~B
T | T | F | F
T | F | T | T
F | T | F | F
F | F | T | F

We found ~B by simply negating B. We then found A & ~B by applying the truth table for conjunctions to the column for A and the column for ~B.
Now we can fill out the truth table for A ∨ B and then use that to find the values of ~(A ∨ B):

A | B | A ∨ B | ~(A ∨ B)
T | T | T | F
T | F | T | F
F | T | T | F
F | F | F | T
Finally, we can now put A & ~B and ~(A ∨ B) together with the conditional to get our truth table:

A | B | A & ~B | ~(A ∨ B) | (A & ~B) → ~(A ∨ B)
T | T | F | F | T
T | F | T | F | F
F | T | F | F | T
F | F | F | T | T

Although complicated, it is not hard when one realizes that one has to apply only a series of simple steps in order to get the end result.
Here is another one: (A → ~B) ∨ ~(A & B). First we will do the truth table for the left part of the disjunction (called the left disjunct), A → ~B:

A | B | ~B | A → ~B
T | T | F | F
T | F | T | T
F | T | F | T
F | F | T | T

Of course, the last column is based on combining the first column, A, with the third column, ~B, using the conditional. Now we can work on the right disjunct, ~(A & B):

A | B | A & B | ~(A & B)
T | T | T | F
T | F | F | T
F | T | F | T
F | F | F | T
The final truth table, then, is:

A | B | A → ~B | ~(A & B) | (A → ~B) ∨ ~(A & B)
T | T | F | F | F
T | F | T | T | T
F | T | T | T | T
F | F | T | T | T
You may have noticed that three formulas in the truth table have the exact same values on every row. That means that the formulas are logically equivalent. In propositional logic, two formulas are logically equivalent if they have the same truth values on every row of the truth table. Logically equivalent formulas are therefore true in the exact same circumstances. Logicians consider this important because two formulas that are logically equivalent, in the logical sense, mean the same thing, even though they may look quite different. The conditions for their truth and falsity are identical.
The fact that the truth value of a complex statement follows from the truth values of its component parts is why these operators are called truth-functional. The operators &, ∨, ~, →, and ↔ are truth functions, meaning that the truth of the whole sentence is a function of the truth of the parts.
Because the validity of argument forms within propositional logic is based on the behavior of the truth-functional operators, another name for propositional logic is truth-functional logic.
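Logical equivalence can be tested exhaustively by comparing truth values on every row. This Python sketch (the function names are ours) checks the equivalence among the three formulas from the final truth table above:

```python
from itertools import product

def implies(p, q):
    # P → Q, false only when P is true and Q is false.
    return (not p) or q

# The three formulas that share a column of values above.
f1 = lambda a, b: implies(a, not b)      # A → ~B
f2 = lambda a, b: not (a and b)          # ~(A & B)
f3 = lambda a, b: f1(a, b) or f2(a, b)   # (A → ~B) ∨ ~(A & B)

def equivalent(f, g):
    # Two formulas are logically equivalent if they agree on every row.
    return all(f(a, b) == g(a, b)
               for a, b in product((True, False), repeat=2))

print(equivalent(f1, f2), equivalent(f1, f3))  # prints True True
```

Because the check ranges over every possible row, agreement here means the formulas are true in exactly the same circumstances.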
Truth Tables With Three Letters
In each of the prior complex statement examples, there were only two letters (variables like P and Q or constants like A and B) in the top left of the truth table. Each truth table had only four rows because there are only four possible combinations of truth values for two variables (both are true, only the first is true, only the second is true, and both are false).
It is also possible to do a truth table for sentences that contain three or more variables (or constants). Recall one of the earlier examples: "Come late and wear wrinkled clothes only if you don’t want the job," which we represented as (L & W) → ~J. Now that there are three letters, how many possible combinations of truth values are there for these letters?
The answer is that a truth table with three variables (or constants) will have eight lines. The general rule is that whenever you add another letter to a truth table, you double the number of possible combinations of truth values. For each earlier combination, there are now two: one in which the new letter is true and one in which it is false. Therefore, to make a truth table with three letters, imagine the truth table for two letters and imagine each row splitting in two. The resulting truth table rows would look like this:

P | Q | R
T | T | T
T | T | F
T | F | T
T | F | F
F | T | T
F | T | F
F | F | T
F | F | F

The goal is to have a row for every possible truth value combination. Generally, to fill in the rows of any truth table, start with the last letter and simply alternate T, F, T, F, and so on, as in the R column. Then move one letter to the left and do twice as many Ts followed by twice as many Fs (two of each): T, T, F, F, and so on, as in the Q column. Then move another letter to the left and do twice as many of each again (four each), in this case T, T, T, T, F, F, F, F, as in the P column. If there are more letters, then we would repeat the process, adding twice as many Ts for each added letter to the left.
With three letters, there are eight rows; with four letters, there are sixteen rows, and so on. This chapter does not address statements with more than three letters, so another way to ensure you have enough rows is to memorize this pattern.
The column with the forms is filled out the same way as when there were two letters. The fact that they now have three letters makes little difference, because we work on only one operator, and therefore at most two columns of letters, at a time. Let us start with the example of P → (Q & R). We begin by solving inside the parentheses by determining the truth values for Q & R, then we create the conditional between P and that result. The table looks like this:
P | Q | R | Q & R | P → (Q & R)
T | T | T | T | T
T | T | F | F | F
T | F | T | F | F
T | F | F | F | F
F | T | T | T | T
F | T | F | F | T
F | F | T | F | T
F | F | F | F | T

The rules for determining the truth values of Q & R and then of P → (Q & R) are exactly the same as the rules for & and → that we used in the two-letter truth tables earlier; now we just use them for more rows. It is a formal process that generates truth values by the same strict algorithms as in the two-letter tables.
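The row-generation pattern described above, where each added letter doubles the number of rows, is exactly what exhaustive enumeration produces. A short Python sketch (the function name is ours) using the standard library:

```python
from itertools import product

def truth_rows(n):
    # Generate every combination of truth values for n letters,
    # in the same Ts-before-Fs order described in the text.
    return list(product((True, False), repeat=n))

for row in truth_rows(3):
    print(row)

print(len(truth_rows(3)), len(truth_rows(4)))  # prints 8 16
```

The first row is all true and the last is all false, matching the layout of the eight-row table above.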
4.4 Using Truth Tables to Test for Validity
Truth tables serve many valuable purposes. One is to help us better understand how the logical operators work. Another is to help us understand how truth is determined within formally structured sentences. One of the most valuable things truth tables offer is the ability to test argument forms for validity. As mentioned at the beginning of this chapter, one of the main purposes of formal logic is to make the concept of validity precise. Truth tables help us do just that.
As mentioned in previous chapters, an argument is valid if and only if the truth of its premises guarantees the truth of its conclusion. This is equivalent to saying that there is no way that the premises can be true and the conclusion false.
Truth tables enable us to determine precisely if there is any way for all of the premises to be true and the conclusion false (and therefore whether the argument is valid): We simply create a truth table for the premises and conclusion and see if there is any row on which all of the premises are true and the conclusion is false. If there is, then the argument is invalid, because that row shows that it is possible for the premises to be true and the conclusion false. If there is no such line, then the argument is valid: Since the rows of a truth table cover all possibilities, if there is no row on which all of the premises are true and the conclusion is false, then it is impossible, so the argument is valid.
Let us start with a simple example (note that the ∴ symbol means "therefore"):

P ∨ Q
~Q
∴ P

This argument form is valid; if there are only two options, P and Q, and one of them is false, then it follows that the other one must be true. However, how can we formally demonstrate its validity? One way is to create a truth table to find out if there is any possible way to make all of the premises true and the conclusion false.
Here is how to set up the truth table, with a column for each premise (P1 and P2) and the conclusion (C):

P | Q | P1: P ∨ Q | P2: ~Q | C: P
T | T | | |
T | F | | |
F | T | | |
F | F | | |

We then fill in the columns with the correct truth values:

P | Q | P1: P ∨ Q | P2: ~Q | C: P
T | T | T | F | T
T | F | T | T | T
F | T | T | F | F
F | F | F | T | F

We then check if there are any rows in which all of the premises are true and the conclusion is false. A brief scan shows that there are no such lines. The first two rows have true conclusions, and the remaining two rows each have at least one false premise. Since the rows of a truth table represent all possible combinations of truth values, this truth table therefore demonstrates that there is no possible way to make all of the premises true and the conclusion false. It follows, therefore, that the argument is logically valid.
To summarize, the steps for using the truth table method to determine an argument’s validity are as follows:
1. Set up the truth table by creating rows for each possible combination of truth values for the basic letters and a column for each premise and the conclusion.
2. Fill out the truth table by filling out the truth values in each column according to the rules for the relevant operator (~, &, ∨, →, ↔).
3. Use the table to evaluate the argument’s validity. If there is even one row on which all of the premises are true and the conclusion is false, then the argument is invalid; if there is no such row, then the argument is valid.
This truth table method works for all arguments in propositional logic: Any valid propositional logic argument will have a truth table that shows it is valid, and every invalid propositional logic argument will have a truth table that shows it is invalid. Therefore, this is a perfect test for validity: It works every time (as long as we use it accurately).
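The three-step truth table test can be automated. This Python sketch (the function and variable names are ours) checks the argument P ∨ Q, ~Q ∴ P by searching every row for true premises with a false conclusion:

```python
from itertools import product

def valid(premises, conclusion, n):
    # An argument is valid iff no assignment of truth values makes
    # every premise true and the conclusion false.
    for row in product((True, False), repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# P ∨ Q, ~Q, therefore P.
premise1 = lambda p, q: p or q
premise2 = lambda p, q: not q
conclusion = lambda p, q: p

print(valid([premise1, premise2], conclusion, 2))  # prints True
```

Dropping the second premise makes the argument invalid (the row P false, Q true is then a counterexample), which the same function detects.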
Examples With Arguments With Two Letters
Let us do another example with only two letters. This argument will be slightly more compl
ex but will still involve only two letters, A and B.
Example 1
T
A→B
~(A & B)
∴ ~(B ∨ A)
To test this symbolized argument for validity, we first set up the truth table by creating row
s with all of the possible truth values for the basic letters on the left and then create a colu
mn for each premise (P1 and P2) and conclusion (C), as follows:
A
B
T
T
T
F
F
T
P1
P2
A→B
~(A & B)
F
F
Second, we fill out the truth table using the rules created by the basic truth tables for each o
perator. Remember to use helper columns where necessary as steps toward filling in the co
lumns of complex formulas. Here is the truth table with only the helper columns filled in:
P1
A→B
P2
A
B
A&B
T
T
T
T
T
F
F
T
F
T
F
T
F
F
F
Here is the truth table with the rest of the columns filled in:
F
P1
~(A & B)
B∨
P2
A
B
A→B
A&B
~(A & B)
T
T
T
T
F
T
T
F
F
F
T
T
F
T
T
F
T
T
F
F
T
F
T
Finally, to evaluate the argument’s validity, all we have to do is check to see if there are any
lines in which all of the premises are true and the conclusion is false. Again, if there is such
a line, since we know it is possible for all of the premises to be true and the conclusion false
, the argument is invalid. If there is no such line, then the argument is valid.
It does not matter what other rows may exist in the table. There may be rows in which all of the premises are true and the conclusion is also true; there also may be rows with one or more false premises. Neither of those types of rows determines the argument’s validity; our only concern is whether there is any possible row on which all of the premises are true and the conclusion false. Is there such a line in our truth table? (Remember: Ignore the helper columns and just focus on the premises and conclusion.)
The answer is yes: all of the premises are true and the conclusion is false in the third row. This row supplies a proof that this argument’s form is invalid. Here is the line:

A  B | P1: A → B | P2: ~(A & B) | ∴ ~(B ∨ A)
F  T |     T     |      T       |     F
Again, it does not matter what is on the other rows. As long as there is (at least) one row in which all of the premises are true and the conclusion false, the argument is invalid.
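The same search for a counterexample row can be sketched in Python (again, purely an illustration of the three-step method, not something the text itself uses):

```python
# Enumerate the four rows for A -> B, ~(A & B), therefore ~(B v A),
# collecting any row where both premises are true and the conclusion false.
from itertools import product

counterexamples = []
for A, B in product([True, False], repeat=2):
    p1 = (not A) or B   # A -> B
    p2 = not (A and B)  # ~(A & B)
    c = not (B or A)    # ~(B v A)
    if p1 and p2 and not c:
        counterexamples.append((A, B))

print(counterexamples)  # [(False, True)] -- the third row, so the form is invalid
```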
Example 2
A → (B & ~A)
A ∨ ~B
∴ ~(A ∨ B)
First we set up the truth table:
A  B | ~A | B & ~A | ~B | A ∨ B | P1: A → (B & ~A) | P2: A ∨ ~B | ∴ ~(A ∨ B)
T  T |    |        |    |       |                  |            |
T  F |    |        |    |       |                  |            |
F  T |    |        |    |       |                  |            |
F  F |    |        |    |       |                  |            |
Next we fill in the values, filling in the helper columns first:
A  B | ~A | B & ~A | ~B | A ∨ B | P1: A → (B & ~A) | P2: A ∨ ~B | ∴ ~(A ∨ B)
T  T | F  |   F    | F  |   T   |                  |            |
T  F | F  |   F    | T  |   T   |                  |            |
F  T | T  |   T    | F  |   T   |                  |            |
F  F | T  |   F    | T  |   F   |                  |            |
Now that the helper columns are done, we can fill in the rest of the table’s values:
A  B | ~A | B & ~A | ~B | A ∨ B | P1: A → (B & ~A) | P2: A ∨ ~B | ∴ ~(A ∨ B)
T  T | F  |   F    | F  |   T   |        F         |     T      |     F
T  F | F  |   F    | T  |   T   |        F         |     T      |     F
F  T | T  |   T    | F  |   T   |        T         |     F      |     F
F  F | T  |   F    | T  |   F   |        T         |     T      |     T
Finally, we evaluate the table for validity. Here we see that there are no lines in which all of
the premises are true and the conclusion is false. Therefore, there is no possible way to ma
ke all of the premises true and the conclusion false, so the argument is valid.
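This verdict can be double-checked mechanically. In this Python sketch (an illustrative aside, not part of the text’s own presentation), we list every row on which both premises are true and the conclusion is false; an empty list means the form is valid:

```python
# A -> (B & ~A), A v ~B, therefore ~(A v B): search for counterexample rows.
from itertools import product

rows_with_true_premises_and_false_conclusion = [
    (A, B)
    for A, B in product([True, False], repeat=2)
    if ((not A) or (B and not A))  # P1: A -> (B & ~A)
    and (A or not B)               # P2: A v ~B
    and not (not (A or B))         # conclusion ~(A v B) comes out false
]
print(rows_with_true_premises_and_false_conclusion)  # [] -- no counterexample: valid
```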
The earlier examples each had two premises. The following example has three premises. Th
e steps of the truth table test are identical.
Example 3
~(M ∨ B)
M → ~B
B ∨ ~M
∴ ~M & B
First we set up the truth table. This table already has the helper columns filled in.
M  B | M ∨ B | ~B | ~M | P1: ~(M ∨ B) | P2: M → ~B | P3: B ∨ ~M | ∴ ~M & B
T  T |   T   | F  | F  |              |            |            |
T  F |   T   | T  | F  |              |            |            |
F  T |   T   | F  | T  |              |            |            |
F  F |   F   | T  | T  |              |            |            |
Now we fill in the rest of the columns, using the helper columns to determine the truth valu
es of our premises and conclusion on each row:
M  B | M ∨ B | ~B | ~M | P1: ~(M ∨ B) | P2: M → ~B | P3: B ∨ ~M | ∴ ~M & B
T  T |   T   | F  | F  |      F       |     F      |     T      |    F
T  F |   T   | T  | F  |      F       |     T      |     F      |    F
F  T |   T   | F  | T  |      F       |     T      |     T      |    T
F  F |   F   | T  | T  |      T       |     T      |     T      |    F
Now we look for a line in which all of the premises are true and the conclusion false. The final row is just such a line. This demonstrates conclusively that the argument is invalid.
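As before, the same result can be confirmed with a short Python sketch (offered here only as an illustration of the method):

```python
# ~(M v B), M -> ~B, B v ~M, therefore ~M & B: search for counterexample rows.
from itertools import product

counterexample_rows = [
    (M, B)
    for M, B in product([True, False], repeat=2)
    if not (M or B)           # P1: ~(M v B)
    and ((not M) or (not B))  # P2: M -> ~B
    and (B or not M)          # P3: B v ~M
    and not ((not M) and B)   # conclusion ~M & B comes out false
]
print(counterexample_rows)  # [(False, False)] -- the final row, so invalid
```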
Examples With Arguments With Three Letters
The last example had three premises, but only two letters. These next examples will have th
ree letters. As explained earlier in the chapter, the presence of the extra letter doubles the n
umber of rows in the truth table.
Example 1
A → (B ∨ C)
~(C & B)
∴ ~(A & B)
First we set up the truth table. Note, as mentioned earlier, now there are eight possible com
binations on the left.
A  B  C | B ∨ C | C & B | A & B | P1: A → (B ∨ C) | P2: ~(C & B) | ∴ ~(A & B)
T  T  T |       |       |       |                 |              |
T  T  F |       |       |       |                 |              |
T  F  T |       |       |       |                 |              |
T  F  F |       |       |       |                 |              |
F  T  T |       |       |       |                 |              |
F  T  F |       |       |       |                 |              |
F  F  T |       |       |       |                 |              |
F  F  F |       |       |       |                 |              |
Then we fill the table out. Here it is with just the helper columns:
A  B  C | B ∨ C | C & B | A & B | P1: A → (B ∨ C) | P2: ~(C & B) | ∴ ~(A & B)
T  T  T |   T   |   T   |   T   |                 |              |
T  T  F |   T   |   F   |   T   |                 |              |
T  F  T |   T   |   F   |   F   |                 |              |
T  F  F |   F   |   F   |   F   |                 |              |
F  T  T |   T   |   T   |   F   |                 |              |
F  T  F |   T   |   F   |   F   |                 |              |
F  F  T |   T   |   F   |   F   |                 |              |
F  F  F |   F   |   F   |   F   |                 |              |
Here is the full truth table:
A  B  C | B ∨ C | C & B | A & B | P1: A → (B ∨ C) | P2: ~(C & B) | ∴ ~(A & B)
T  T  T |   T   |   T   |   T   |        T        |      F       |     F
T  T  F |   T   |   F   |   T   |        T        |      T       |     F
T  F  T |   T   |   F   |   F   |        T        |      T       |     T
T  F  F |   F   |   F   |   F   |        F        |      T       |     T
F  T  T |   T   |   T   |   F   |        T        |      F       |     T
F  T  F |   T   |   F   |   F   |        T        |      T       |     T
F  F  T |   T   |   F   |   F   |        T        |      T       |     T
F  F  F |   F   |   F   |   F   |        T        |      T       |     T
Finally, we evaluate; that is, we look for a line in which all of the premises are true and the c
onclusion false. This is the case with the second line. Once you find such a line, you do not n
eed to look any further. The existence of even one line in which all of the premises are true
and the conclusion is false is enough to declare the argument invalid.
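With three letters the hand-built table has eight rows, but the mechanical check (again a Python sketch offered purely for illustration) is no harder:

```python
# A -> (B v C), ~(C & B), therefore ~(A & B): search all eight rows.
from itertools import product

counterexample_rows = [
    (A, B, C)
    for A, B, C in product([True, False], repeat=3)
    if ((not A) or (B or C))  # P1: A -> (B v C)
    and not (C and B)         # P2: ~(C & B)
    and not (not (A and B))   # conclusion ~(A & B) comes out false
]
print(counterexample_rows)  # [(True, True, False)] -- the second row, so invalid
```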
Let us do another one with three letters:
Example 2
A → ~B
B∨C
∴A→C
We begin by setting up the table:
A  B  C | ~B | P1: A → ~B | P2: B ∨ C | ∴ A → C
T  T  T |    |            |           |
T  T  F |    |            |           |
T  F  T |    |            |           |
T  F  F |    |            |           |
F  T  T |    |            |           |
F  T  F |    |            |           |
F  F  T |    |            |           |
F  F  F |    |            |           |
Now we can fill in the rows, beginning with the helper columns:
A  B  C | ~B | P1: A → ~B | P2: B ∨ C | ∴ A → C
T  T  T | F  |     F      |     T     |    T
T  T  F | F  |     F      |     T     |    F
T  F  T | T  |     T      |     T     |    T
T  F  F | T  |     T      |     F     |    F
F  T  T | F  |     T      |     T     |    T
F  T  F | F  |     T      |     T     |    T
F  F  T | T  |     T      |     T     |    T
F  F  F | T  |     T      |     F     |    T
Here, when we look for a line in which all of the premises are true and the conclusion false,
we do not find one. There is no such line; therefore the argument is valid.
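The validity verdict can be double-checked in code as well. This Python sketch (an illustration only) asserts that no row of the eight makes both premises true and the conclusion false:

```python
# A -> ~B, B v C, therefore A -> C: valid iff no row is a counterexample.
from itertools import product

valid = all(
    # on every row: not (all premises true and conclusion false)
    not (((not A) or (not B))     # P1: A -> ~B
         and (B or C)             # P2: B v C
         and not ((not A) or C))  # conclusion A -> C comes out false
    for A, B, C in product([True, False], repeat=3)
)
print(valid)  # True
```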
4.5 Some Famous Propositional Argument Forms
Using the truth table test for validity, we have seen that we can determine the validity or in
validity of all propositional argument forms. However, there are some basic argument form
s that are so common that it is worthwhile simply to memorize them and whether or not th
ey are valid. We will begin with five very famous valid argument forms and then cover two
of the most famous invalid argument forms.
Common Valid Forms
It is helpful to know some of the most commonly used valid argument forms. Those present
ed in this section are used so regularly that, once you learn them, you may notice people usi
ng them all the time. They are also used in what are known as deductive proofs (see A Close
r Look: Deductive Proofs).
A Closer Look: Deductive Proofs
Mark Wragg/iStock/Thinkstock
Rather than base decisions on chance, people use the information around them to m
ake deductive and inductive inferences with varying degrees of strength and validity.
Logicians use proofs to show the validity of inferences.
A big part of formal logic is constructing proofs. Proofs in logic are a lot like proofs in mathe
matics. We start with certain premises and then use certain rules—
called rules of inference—in a step-by-step way to arrive at the conclusion. By using only valid rules of inference and applying the
m carefully, we make certain that every step of the proof is valid. Therefore, if there is a logi
cal proof of the conclusion from the premises, then we can be certain that the argument itse
lf is valid.
The rules of inference used in deductive proofs are actually just simple valid argument for
ms. In fact, the valid argument forms covered here—
including modus ponens, hypothetical syllogisms, and disjunctive syllogisms—
are examples of argument forms that are used as inference rules in logical proofs. Using the
se and other formal rules, it is possible to give a logical proof for every valid argument in pr
opositional logic (Kennedy, 2012).
Logicians, mathematicians, philosophers, and computer scientists use logical proofs to sho
w that the validity of certain inferences is absolutely certain and founded on the most basic
principles. Many of the inferences we make in daily life are of limited certainty; however, th
e validity of inferences that have been logically proved is considered to be the most certain
and uncontroversial of all knowledge because it is derivable from pure logic.
Covering how to do deductive proofs is beyond the scope of this book, but readers are invit
ed to peruse a book or take a course on formal logic to learn more about how deductive pro
ofs work.
Modus Ponens
Perhaps the most famous propositional argument form of all is known as modus ponens—
Latin for “the way of putting.” (You may recognize this form from the earlier section on the
truth table method.) Modus ponens has the following form:
P→Q
P
∴Q
You can see that the argument is valid just from the meaning of the conditional. The first pr
emise states, “If P is true, then Q is true.” It would logically follow that if P is true, as the sec
ond premise states, then Q must be true. Here are some examples:
If you want to get an A, you have to study.
You want to get an A.
Therefore, you have to study.
If it is raining, then the street is wet.
It is raining.
Therefore, the street is wet.
If it is wrong, then you shouldn’t do it.
It is wrong.
Therefore, you shouldn’t do it.
A truth table will verify its validity.
P  Q | P1: P → Q | P2: P | ∴ Q
T  T |     T     |   T   |  T
T  F |     F     |   T   |  F
F  T |     T     |   F   |  T
F  F |     T     |   F   |  F
There is no line in which all of the premises are true and the conclusion false, verifying the
validity of this important logical form.
Modus Tollens
A closely related form has a closely related name. Modus tollens—
Latin for “the way of taking”—has the following form:
P→Q
~Q
∴ ~P
A truth table can be used to verify the validity of this form as well. However, we can also se
e its validity by simply thinking it through. Suppose it is true that “If P, then Q.” Then, if P w
ere true, it would follow that Q would be true as well. But, according to the second premise,
Q is not true. It follows, therefore, that P must not be true; otherwise, Q would have been tr
ue. Here are some examples of arguments that fit this logical form:
Ruth Black/iStock/Thinkstock
Evaluate this argument form for validity: If the cake is made with sugar, then the cak
e is sweet. The cake is not sweet. Therefore, the cake is not made with sugar.
In order to get an A, I must study.
I will not study.
Therefore, I will not get an A.
If it rained, then the street would be wet.
The street is not wet.
Therefore, it must not have rained.
If the ball hit the window, then I would hear glass shattering.
I did not hear glass shattering.
Therefore, the ball must not have hit the window.
For practice, construct a truth table to demonstrate the validity of this form.
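The practice exercise can also be checked mechanically. This Python sketch (an illustration, not a substitute for building the table by hand) searches for a counterexample row to modus tollens:

```python
# Modus tollens: P -> Q, ~Q, therefore ~P.
from itertools import product

counterexample_rows = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if ((not P) or Q)  # P1: P -> Q
    and (not Q)        # P2: ~Q
    and not (not P)    # conclusion ~P comes out false, i.e., P is true
]
print(counterexample_rows)  # [] -- no counterexample, so modus tollens is valid
```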
Disjunctive Syllogism
A disjunctive syllogism is a valid argument form in which one premise states that you hav
e two options, and another premise allows you to rule one of them out. From such premises
, it follows that the other option must be true. Here are two versions of it formally (both are
valid):
P∨Q
P∨Q
~P
~Q
∴Q
∴P
In other words, if you have “P or Q” and not Q, then you may infer P. Here is another exampl
e: “Either the butler or the maid did it. It could not have been the butler. Therefore, it must
have been the maid.” This argument form is quite handy in real life. It is frequently useful to
consider alternatives and to rule one out so that the options are narrowed down to one.
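Both versions of the disjunctive syllogism can be verified at once with a small Python sketch (purely illustrative):

```python
# Check both disjunctive syllogism forms over all rows of P and Q.
from itertools import product

def valid(premises, conclusion):
    """True if no (P, Q) row makes all premises true and the conclusion false."""
    return all(
        not (all(p(P, Q) for p in premises) and not conclusion(P, Q))
        for P, Q in product([True, False], repeat=2)
    )

# P v Q, ~P, therefore Q -- and the mirror version P v Q, ~Q, therefore P.
print(valid([lambda P, Q: P or Q, lambda P, Q: not P], lambda P, Q: Q))  # True
print(valid([lambda P, Q: P or Q, lambda P, Q: not Q], lambda P, Q: P))  # True
```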
Hypothetical Syllogism
One of the goals of a logically valid argument is for the premises to link together so that the
conclusion follows smoothly, with each premise providing a link in the chain. Hypothetical
syllogism provides a nice demonstration of just such premise linking. Hypothetical syllogi
sm takes the following form:
P→Q
Q→R
∴P→R
For example, “If you lose your job, then you will have no income. If you have no income, the
n you will starve. Therefore, if you lose your job, then you will starve!”
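Because hypothetical syllogism involves three letters, its truth table has eight rows; a Python sketch (illustrative only) checks them all:

```python
# Hypothetical syllogism: P -> Q, Q -> R, therefore P -> R.
from itertools import product

valid = all(
    not (((not P) or Q)           # P1: P -> Q
         and ((not Q) or R)       # P2: Q -> R
         and not ((not P) or R))  # conclusion P -> R comes out false
    for P, Q, R in product([True, False], repeat=3)
)
print(valid)  # True
```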
Double Negation
Negating a sentence (putting a ~ in front of it) makes it say the opposite of what it originally
said. However, if we negate it again, we end up with a sentence that means the same thing
as our original sentence; this is called double negation.
Imagine that our friend Johnny was in a race, and you ask me, “Did he win?” and I respond,
“He did not fail to win.” Did he win? It would appear so. Though some languages allow doub
le negations to count as negative statements, in logic a double negation is logically equivale
nt to the original statement. Both of these forms, therefore, are valid:
P
~~P
∴ ~~P
∴P
A truth table will verify that each of these forms is valid; both P and ~~P have the same tru
th values on every row of the truth table.
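The equivalence is small enough to check exhaustively in one line of Python (again, an illustrative aside):

```python
# Double negation: P and ~~P agree on both possible truth values.
for P in (True, False):
    assert P == (not (not P))
print("P and ~~P are logically equivalent")
```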
Common Invalid Forms
Both modus ponens and modus tollens are logically valid forms, but not all famous logical for
ms are valid. The last two forms we will discuss—
denying the antecedent and affirming the consequent—
are famous invalid forms that are the evil twins of the previous two.
Denying the Antecedent
Take a look at the following argument:
If you give lots of money to charity, then you are nice.
You do not give lots of money to charity.
Therefore, you must not be nice.
This might initially seem like a valid argument. However, it is actually invalid in its form. To
see that this argument is logically invalid, take a look at the following argument with the sa
me form:
If my cat is a dog, then it is a mammal.
My cat is not a dog.
Therefore, my cat is not a mammal.
This second example is clearly invalid since the premises are true and the conclusion is fals
e. Therefore, there must be something wrong with the form. Here is the form of the argume
nt:
P→Q
~P
∴ ~Q
Because this argument form’s second premise rejects the antecedent, P, of the conditional i
n the first premise, this argument form is referred to as denying the antecedent. We can c
onclusively demonstrate that the form is invalid using the truth table method.
Here is the truth table:
P  Q | P1: P → Q | P2: ~P | ∴ ~Q
T  T |     T     |   F    |  F
T  F |     F     |   F    |  T
F  T |     T     |   T    |  F
F  F |     T     |   T    |  T
We see on the third line that it is possible to make both premises true and the conclusion false, so this argument form is definitely invalid. Despite its invalidity, we see this form all the time in real life. Here are some examples:
If you are religious, then you believe in living morally.
Jim is not religious, so he must not believe in living morally.
Plenty of people who are not religious still believe in living morally. Here is another one:
If you are training to be an athlete, then you should stay in shape.
You are not training to be an athlete.
Thus, you should not stay in shape.
There are plenty of other good reasons to stay in shape.
If you are Republican, then you support small government.
Jack is not Republican, so he must not support small government.
Libertarians, for example, are not Republicans, yet they support small government. These e
xamples abound; we can generate them on any topic.
Because this argument form is so common and yet so clearly invalid, denying the anteceden
t is a famous fallacy of formal logic.
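The counterexample row can also be found mechanically, as in this Python sketch (offered only to illustrate the truth table test):

```python
# Denying the antecedent: P -> Q, ~P, therefore ~Q.
from itertools import product

counterexample_rows = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if ((not P) or Q)  # P1: P -> Q
    and (not P)        # P2: ~P
    and not (not Q)    # conclusion ~Q comes out false, i.e., Q is true
]
print(counterexample_rows)  # [(False, True)] -- the third row, so the form is invalid
```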
Affirming the Consequent
Another famous formal logical fallacy also begins with a conditional. However, the other tw
o lines are slightly different. Here is the form:
P→Q
Q
∴P
Because the second premise states the consequent of the conditional, this form is called affi
rming the consequent. Here is an example:
If you get mono, you will be very tired.
You are very tired.
Therefore, you have mono.
The invalidity of this argument can be seen in the following argument of the same form:
If my cat is a dog, then it is a mammal.
My cat is a mammal.
Therefore, my cat is a dog.
Clearly, this argument is invalid because it has true premises and a false conclusion. Theref
ore, this must be an invalid form. A truth table will further demonstrate this fact:
P  Q | P1: P → Q | P2: Q | ∴ P
T  T |     T     |   T   |  T
T  F |     F     |   F   |  T
F  T |     T     |   T   |  F
F  F |     T     |   F   |  F
The third row again demonstrates the possibility of true premises and a false conclusion, so
the argument form is invalid. Here are some examples of how this argument form shows u
p in real life:
In order to get an A, I have to study.
I am going to study.
Therefore, I will get an A.
There might be other requirements to get an A, like showing up for the test.
If it rained, then the street would be wet.
The street is wet.
Therefore, it must have rained.
Sprinklers may have done the job instead.
If he committed the murder, then he would have had to have motive and opportunity.
He had motive and opportunity.
Therefore, he committed the murder.
This argument gives some evidence for the conclusion, but it does not give proof. It is possi
ble that someone else also had motive and opportunity.
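As with denying the antecedent, a short Python sketch (illustrative only) exposes the counterexample row for affirming the consequent:

```python
# Affirming the consequent: P -> Q, Q, therefore P.
from itertools import product

counterexample_rows = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if ((not P) or Q)  # P1: P -> Q
    and Q              # P2: Q
    and not P          # conclusion P comes out false
]
print(counterexample_rows)  # [(False, True)] -- so the form is invalid
```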
The reader may have noticed that in some instances of affirming the consequent, the premi
ses do give us some reason to accept the conclusion. This is because of the similarity of this
form to the inductive form known as inference to the best explanation, which is covered in
more detail in Chapter 6. In such inferences we create an “if–
then” statement that expresses something that would be the case if a certain assumption w
ere true. These things then act as symptoms of the truth of the assumption. When those sy
mptoms are observed, we have some evidence that the assumption is true. Here are some e
xamples:
If you have measles, then you would present the following symptoms. . . .
You have all of those symptoms.
Therefore, it looks like you have measles.
If he is a faithful Catholic, then he would go to Mass.
I saw him at Mass last Sunday.
Therefore, he is probably a faithful Catholic.
All of these seem to supply decent evidence for the conclusion; however, the argument for
m is not logically valid. It is logically possible that another medical condition could have the
same symptoms or that a person could go to Mass out of curiosity. To determine the (induc
tive) inferential strength of an argument of that form, we need to think about how likely Q i
s under different assumptions.
A Closer Look: Translating Categorical Logic
The chapter about categorical logic seems to cover a completely different type of reasoning
than this chapter on propositional logic. However, logical advancements made just over a c
entury ago by a man named Gottlob Frege showed that the two types of logic can be combin
ed in what has come to be known as quantificational logic (also known as predicate logic) (
Frege, 1879).
In addition to truth-functional logic, quantificational logic allows us to talk about quantities by including logical
terms for all and some. The addition of these terms dramatically increases the power of our
logical language and allows us to represent all of categorical logic and much more. Here is a
brief overview of how the basic sentences of categorical logic can be represented within qu
antificational logic.
The statement “All dogs are mammals” can be understood to mean “If you are a dog, then y
ou are a mammal.” The word you in this sentence applies to any individual. In other words,
the sentence states, “For all individuals, if that individual is a dog, then it is a mammal.” In g
eneral, statements of the form “All S is M” can be represented as “For all things, if that thing
is S, then it is M.”
The statement “Some dogs are brown” means that there exist dogs that are brown. In other
words, there exist things that are both dogs and brown. Therefore, statements of the form “
Some S is M” can be represented as “There exists a thing that is both S and M” (propositions
of the form “Some S are not M” can be represented by simply adding a negation in front of t
he M).
Statements like “No dogs are reptiles” can be understood to mean that all dogs are not repti
les. In general, statements of the form “No S are M” can be represented as “For all things, if t
hat thing is an S, then it is not M.”
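Over a finite domain, these three translations correspond directly to exhaustive checks. The following Python sketch is a loose illustration with an invented domain (the individuals and properties here are hypothetical, made up for this example, and a finite domain is of course a simplification of full quantificational logic):

```python
# "For all x" becomes all(...); "there exists x" becomes any(...),
# checked over a small, invented domain of individuals.
domain = [
    {"name": "Rex",  "dog": True,  "mammal": True,  "brown": True,  "reptile": False},
    {"name": "Fido", "dog": True,  "mammal": True,  "brown": False, "reptile": False},
    {"name": "Iggy", "dog": False, "mammal": False, "brown": False, "reptile": True},
]

# "All dogs are mammals": for all x, if Dog(x) then Mammal(x).
all_dogs_mammals = all((not x["dog"]) or x["mammal"] for x in domain)
# "Some dogs are brown": there exists an x with Dog(x) and Brown(x).
some_dogs_brown = any(x["dog"] and x["brown"] for x in domain)
# "No dogs are reptiles": for all x, if Dog(x) then not Reptile(x).
no_dogs_reptiles = all((not x["dog"]) or (not x["reptile"]) for x in domain)

print(all_dogs_mammals, some_dogs_brown, no_dogs_reptiles)  # True True True
```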
Quantificational logic allows us to additionally represent the meanings of statements that g
o well beyond the AEIO propositions of categorical logic. For example, complex statements l
ike “All dogs that are not brown are taller than some cats” can also be represented with the
power of quantificational logic though they are well beyond the capacity of categorical logic
. The additional power of quantificational logic enables us to represent the meaning of vast
stretches of the English language as well as statements used in formal disciplines like math
ematics. More instruction in this interesting area can be found in a course on formal logic.