DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE, SPRING 2006

AI ESSAY CONTEST THIRD PLACE
CAN MACHINES THINK?
MORGAN COHEN ‘08
Great minds have struggled for centuries
to explain the origin and nature of knowledge.
The rise of computers has compounded this
great philosophical dilemma as we now
must struggle to reconcile the advancements
of technology with the hitherto accepted
assumptions about the relationship of man to
the universe. Traditionally, we have considered
our ability to think as the defining difference
between mankind and all other beings. Indeed,
thought has long been cited as the hallmark
quality of man. In deciding whether or not a
computer can think, we are reevaluating one of
the most distinctive features of our humanity.
Before we can determine whether or not
computers are capable of thought, we must
first dispel the equivocation and ambiguity that
obscures the meaning of the word thought itself,
specifically the notion that consciousness and
thought are mutually inclusive. Once the word
thought has been thoroughly disambiguated,
we will examine the test devised by Alan
Turing used to test for thought in computers.
Finally, we will look at the structure of the main
objection to the Turing Test and the effect it has
on the question at hand.
Any attempt to answer the question
“Can machines think?” must begin with the
definition of thought itself. It is a common
enough assumption that only conscious beings
can think, and it would therefore seem that
computers, composed of plastic, metal, and
wires, could never achieve consciousness, and
thus, never achieve thought. This assumption
is patently false. It is widely accepted amongst
psychologists, Freudian and non-Freudian
alike, that we are not consciously aware of
all, or even most of our thought processes.
This notion of subconscious mental activity is
reinforced by the concept of speech. We almost
always find ourselves saying what we want to
say without consciously choosing the words.
When engaging in any form of conversation, we
are not conscious of the thought processes that
transform the sounds we hear into the meaning
attached to the words that have been spoken.
Thought and consciousness are not mutually
inclusive; realizing this allows us to consider
the question of artificial intelligence within a
much more precise framework. Those who
hold that the terms “thinking” and “consciously
thinking” can be used interchangeably are
merely conflating thinking with being aware
that one is thinking. Many fundamental human
mental activities, from the understanding of
speech to the perception of the external world,
can be performed non-consciously. The real
question is whether an inanimate object (such
as a computer) could be said to perform these
cognitive activities; and since we can perform
these activities non-consciously, the question
can be discussed without considering whether or
not the object could be conscious. The answer
to this new question is a resounding yes, but
the problem of how to test and recognize this
new disambiguated definition of thought still
remains.
The new problem can be resolved by the
laboratory experiment devised by Alan Turing.
The experiment, which can be used to settle the
question of whether a given computer is capable
of thought, is known simply as the Turing Test.
The Test, also known as The Imitation Game,
supposes that we have a person, a computer,
and an interrogator. The interrogator is in a
room separated from the other person and the
computer. The objective of the test is for the
interrogator to determine through conversation
which of the other two is the person and which is
the computer. The interrogator converses with
the computer and the other human by means
of a keyboard and screen and is allowed to ask
questions as penetrating and wide-ranging as
he or she likes. The object of the computer is
to try to cause the interrogator to mistakenly
conclude that the computer is the other person,
while the object of the other person is to try to
help the interrogator expose the computer. The
experiment is then repeated a number of times
with a wide range of people in the two human
positions, and if the number of successful
identifications is not significantly higher than
the “guessing frequency” (which in the case of the
Turing Test is fifty percent), we can conclude
that the computer in question can think.
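The pass criterion described above can be sketched in code. This is a minimal illustration, not anything from Turing's own paper: the function name, the trial counts, and the use of an exact binomial test at a conventional 0.05 threshold are all assumptions of mine.

```python
import math

def passes_turing_test(correct_identifications, trials, alpha=0.05):
    """Decide whether interrogators beat the 50% guessing frequency.

    One-sided exact binomial test against chance (p = 0.5): if the
    observed identification rate is NOT significantly above 50%, the
    essay's criterion says the computer passes.
    """
    # P(X >= correct_identifications) under Binomial(trials, 0.5)
    p_value = sum(
        math.comb(trials, k)
        for k in range(correct_identifications, trials + 1)
    ) / 2 ** trials
    significantly_above_chance = p_value < alpha
    return not significantly_above_chance

# Interrogators guessed right 11 times out of 20: indistinguishable from chance.
print(passes_turing_test(11, 20))   # True: no better than guessing
print(passes_turing_test(19, 20))   # False: interrogators reliably spot the machine
```

An exact test is used rather than a normal approximation because the handful of repetitions the essay envisions is too small for the approximation to be reliable.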
The Turing Test is predicated upon the
underlying assumption that if a computer can
convince a computer expert that it has mental
states, then it really has those mental states. If,
for example, a machine could “converse” with
a native English speaker in such a way as to
convince the speaker that it understood English
then it would literally understand English (1).
If trees could converse with us as fluently
as they do in some fairy tales, wouldn’t we
unhesitatingly say that trees can think? The
question and answer format presented by the
Turing Test allows for almost any human mental
process – from mathematics, to poetry, to chess
- to be tested and searched for in a computer.
The notion that language is the hallmark quality
of thought has a long philosophical history. The
French philosopher René Descartes ironically
believed that conversation was the surest way
of differentiating a genuine thinking being from
a machine. He wrote that it is “not conceivable
that such a machine should produce different
arrangements of words so as to give an
appropriately meaningful answer to whatever
is said in its presence, as the dullest of men
can do.” This statement seems anachronistic
in light of the technological achievements of
the computer age, where computers now can
respond not only textually, but also audibly to
a wide range of questions. If Descartes’ notion
of thought – the “appropriately meaningful
answer to whatever is said” – is held to be
true, then the Turing Test is an accurate way to
evaluate whether or not a computer is capable
of thought.
So far, we have reshaped and
disambiguated our notion of thought, and
concluded that the Turing Test is an appropriate
way to inductively suggest whether computers
can think. However, the Turing Test, and the
definition of thought that engenders it, are
not beyond reproach. We must now consider
the main objections to these theories, the
most prominent of which is the simulation
objection. The objection centers on the belief
that simulation does not constitute reality.
Artificially created coal is not really coal just as
simulated diamonds are not really diamonds. A
simulated X is X-like, but it is not the original.
Should a computer pass the Turing Test, it would
only demonstrate its ability to simulate thought,
which, proponents of the simulation objection
argue, is very different from actually possessing the
capability of thought. The difference lies in the
fact that computers, by definition, consist of sets
of purely formalized operations on formally
specified symbols. As far as the computer is
concerned, these symbols don’t symbolize
anything or represent anything. Not only does
the computer attach no meaning, interpretation,
or content to these formal symbols, but any
attempt to endow a computer with the power
to interpret its symbols would only result in
more uninterpreted symbols. If, for example,
I were to type “4 x 4 =” into a calculator, the
calculator would display “16,” but it has no idea
that “4” means “4” or that “16” means “16” or
that anything means anything. The computer
only manipulates the formal symbols that it
has been programmed with but attaches no
meaning to them.
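The calculator example can be made concrete in a few lines. The lookup table and names below are illustrative assumptions of mine, not part of the essay's argument; the point is that the program relates strings to strings without any arithmetic interpretation.

```python
# A toy "calculator" that, like the formal-symbol manipulator described
# above, maps input strings to output strings by rote lookup.
RULES = {
    "4 x 4 =": "16",
    "2 + 2 =": "4",
}

def manipulate(symbols: str) -> str:
    # The program never parses "4" as a number or "x" as multiplication;
    # it only matches one uninterpreted string against another.
    return RULES.get(symbols, "?")

print(manipulate("4 x 4 ="))  # "16" - produced with no notion of arithmetic
print(manipulate("3 x 3 ="))  # "?"  - no rule, and no understanding to fall back on
```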
This simple observation is intended to
refute the possibility that computers are capable
of thought, but it is riddled with ambiguity and
equivocation. Primarily, it is false to assert
that a simulated X is never X. Consider voice
simulation. If a computer can produce a voice
identical to a human voice, would this not be
considered a voice? Granted, it is an artificial
voice as opposed to a human voice, but it is
still a voice nonetheless. Similarly, organic
compounds produced in a laboratory were not
created naturally, but are they not still organic
compounds? A comparison of these examples
with the case of simulated death, for instance,
makes clear the equivocation that surrounds
the word “simulation” (2). Indeed, there are
two very different grounds for classifying
something as a simulation. The first type of
simulation, which we shall call simulation A,
occurs if a simulation lacks the essential feature
of whatever is being simulated. Therefore, a
simulated death is a simulation A because the
person involved in the simulation is still living.
The second form of simulation, or simulation
B, is identical in every way to whatever is
being simulated except for the fact that it has
been produced by non-standard means. Coal
produced artificially in a laboratory may
still be called simulated coal even though it
is identical in appearance and composition
to natural coal. The same is true for the
aforementioned artificial voice, which contains
all the essential characteristics of human voice
but is artificially produced. The main flaw
of the simulation objection is now apparent.
The objection assumes, without cause, that a
computer simulation of thought must always
be a mere simulation and never the real thing
(always simulation A and never simulation B),
no matter how accurate the simulation
has become. However, the main question we
specifically ask concerning the possibility of
“thinking machines” is whether a computer
simulation of thinking could be a simulation B.
Therefore, by assuming that the purpose of the
Turing Test is to examine whether a computer
can produce a simulation A of thought, the
simulation objection simply pre-judges the
central issue and is thus irrelevant.
Our concept of thinking beings was shaped
by an environment in which only naturally
occurring, living organisms were capable of
thought. Now, in the face of computers that are
indistinguishable from humans in their ability
to solve problems, understand concepts, make
decisions, and so on, we must choose whether or
not their nature warrants the application of the
term “thinking thing.” Much as a judge consults
the intention behind the laws when he issues his
verdict, we must look to the purpose for which
we use the concept of thinking and decide if
it is best served by counting a computer as a
thinking thing. Thus, the concept of thought is
crucial in making a distinction between objects
that think and those that do not. Intrinsic to the
notion of thought are highly adaptable mental
processes that allow thinking things to reason,
develop analogies, deliberate, and revise
beliefs in light of experience. All we must
consider now is whether the mental processes
of the properly programmed computer more or
less match ours in adaptability. Currently, such
massively adaptive programs do not exist in the
programming lexicon. If we assume that AI
researchers will one day succeed in achieving
this auspicious feat, it would become extremely
difficult to say that computers are not capable
of thought. These highly adaptive computers
would display the same purposefulness and
ingenuity that we do, and more importantly,
their behavior would be the result of their
programs engaging in what we refer to (when
biologically rooted of course) as reasoning,
analyzing, making informed guesses, and
forming plans. As long as we are willing to
describe these acts in the same manner that
we describe them when referring to our own
adaptable mental processes, we can and must
conclude that it will be possible for computers
to think.
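One of the adaptive processes named above, revising beliefs in light of experience, admits a minimal sketch. The Bayesian formalism and every number below are my own illustrative assumptions, not something the essay proposes:

```python
def revise_belief(prior, likelihood_if_true, likelihood_if_false):
    """One step of belief revision in light of experience (Bayes' rule)."""
    # Total probability of seeing the evidence under either hypothesis.
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Start undecided (0.5); each observation is four times likelier
# if the hypothesis is true than if it is false.
belief = 0.5
for _ in range(3):
    belief = revise_belief(belief, 0.8, 0.2)
print(round(belief, 3))  # belief strengthens with each consistent observation
```

Whether such an update rule, scaled up, counts as "revising beliefs" in the full human sense is exactly the question the essay leaves to our choice of concept.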
References
1. Robert J. Fogelin and Walter Sinnott-Armstrong, Understanding Arguments, p. 591.
2. Brian Copeland, Artificial Intelligence: A Philosophical Introduction, p. 47.
Congratulations to our contest winners
and thanks to all who participated.
For more about artificial intelligence, be
sure to check out the AI@50 conference
from July 13th-15th.