The Chinese Room Argument
Presented to: Brent Kyle
For Philosophy of Technology
Tyson Belliveau, 3239097
10/29/2010
This paper will analyze the argument put forth by Searle in the Chinese Room thought experiment and
expose its fallacies. It will make use of works by both Margaret Boden and Larry Hauser to
demonstrate the invalidity of the argument, and to show how AI is both possible and eventual.
The Chinese Room Argument
Known widely as the most influential argument against Artificial Intelligence, Searle’s Chinese
Room Argument (hereafter, CRA) claims to disprove what Searle calls “Strong AI”. The argument is
formulated as such:
“Suppose that [Searle] is locked in a room and given a batch of Chinese writing. Suppose furthermore that
[Searle] know[s] no Chinese, either written or spoken, and that [he is] not even confident he could identify
Chinese writing as Chinese writing distinct from, say, Japanese writing […]. Suppose after the first batch,
[he] is given a second batch of Chinese script combined with a set of rules for correlating the second batch
with the first batch. The rules are in English, and [he] understands these rules […]. Suppose [he] is given a
third batch together with some instructions, again in English, which allow [him] to correlate elements of the
third batch to elements of the first two batches, and these rules instruct [him] how to give back certain
Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given in the third batch.”
(Searle, 1980)
What Searle aims to show us here is that, in essence, he is trapped in a room with no knowledge of what
is being fed to him, or what he is putting out. He knows no Chinese (as he said in the thought
experiment), and the only understanding taking place is of the English instructions. It is certain that,
given time, he will eventually be able to relate certain symbols coming in with certain symbols that will
go out; this is purely habit. Searle wants to argue that despite developing the habit of associating certain
strings of symbols with others faster than he initially could, he still does not know the meaning of what
he is outputting. In the paper, Searle claims just that: he will eventually be able to put out responses
that are indistinguishable from those of a native Chinese speaker. The same would be true of English
questions being input; he would, in time, be able to output answers indistinguishable from those of an
English-speaking person. The gist of what Searle wants to point out is the absence of understanding of
input/output. Like a computer, he says, “I perform computational operations on formally specified
elements.”1 In the scenario described by Searle, the first batch is called the script, the second the story,
the third the questions, and the rules given in English are the program. This is, according to Searle, how a
computer functions: the computer is given a script, fed a story to go with it, and then given questions to
which it provides responses in accordance with the rules.
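To make the setup concrete, the kind of rule-following Searle describes can be pictured as a simple lookup program. The sketch below is purely illustrative: the rule table, the Chinese strings and the function name are invented here and appear nowhere in Searle’s paper; it only shows what answering “by shapes alone” might look like.

    # A toy "Chinese Room" responder: answers are produced by matching the shapes of
    # incoming symbols against a rule table, with no interpretation of their meaning.
    # The rules and strings below are invented placeholders for illustration only.

    RULES = {
        "你好吗": "我很好",          # "if these squiggles arrive, hand back those squoggles"
        "你会说中文吗": "会",
    }

    def chinese_room(question: str) -> str:
        """Return a response by purely formal pattern-matching on the symbols."""
        for pattern, response in RULES.items():
            if pattern in question:      # syntactic match; meaning plays no role
                return response
        return "不知道"                  # default symbols returned when nothing matches

    print(chinese_room("你好吗?"))       # prints "我很好" without any understanding

The point of the sketch is simply that the output depends only on the shapes of the input symbols; whether anyone (or anything) running the rules attaches meaning to them makes no difference to what comes out.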
In the argument, it is clear that there is no understanding2 of what is going in and coming out by
the computer (or, in this case, Searle). Proponents of artificial intelligence want to claim that the
computer does, in fact, understand the stories, and that the program both facilitates this and explains human
understanding. The example provides a clear counterexample to this: Searle, despite having the same
“programming” as a computer, does not understand Chinese. The proponents will tend to argue that
when we understand a story in English, it is a matter of formal symbol-manipulation. Searle debunks this
with the CRA, as he demonstrates that symbol-manipulation is insufficient for understanding; it can be,
at most, only part of the story. Searle holds that “whatever formal principles you put into the
computer, they will not be sufficient for understanding, since a human will be able to follow the formal
principles without understanding anything.”3 One reply raised is that even though the individual running
the program does not understand, the system as a whole can. Searle refutes this by stating that were an
individual to internalize everything related to this system, understanding would still not follow. The
individual parts of the system (referred to as “subsystems”) still do their part, still act in such a way as to
manipulate symbols and output them at the end, yet understanding still does not occur. The point is not
that the computer lacks some piece of information pertaining to the interpretation of the manipulated
symbols; rather, there is no interpretation of the symbols at all.
1 (Searle, Minds, Brains, and Programs, 1980)
2 Searle points out that “understanding implies both the possession of mental states and the truth of these states”
(Searle, 1980); however, only the possession of the mental states is necessary to Searle’s argument.
3 Ibid.
Searle asserts that symbol-manipulation is insufficient for understanding input/output data. He
also asserts that for AI to develop understanding, it would have to replicate a “mind”. The argument is
thus formulated as such:
(A1) Programs are formal (syntactic)
(A2) Minds have mental contents
(A3) Syntax is neither constitutive of nor sufficient for semantics
(C1) Programs are neither constitutive of nor sufficient for minds
(A4) Brains cause minds
(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to
those of brains
(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate
the specific causal powers of brains, and it could not do that just by running a formal program
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a
computer program4
Since programs are neither constitutive of, nor sufficient for, mind, something else must take place
in order to have AI. This “something else” is, according to Searle, the brain. His solution to the mind-body
question is a unity: the mind is created by the brain. Therefore, in order to have understanding
(and, ultimately, to have mind), the brain must be simulated adequately. This requires a very specific
understanding of how mind and brain exist; they are one entity. This, he claims, is another refutation of
“strong AI”, since it hinges on the idea that the brain is not necessary to developing AI. “The mind is to
the brain as the program is to the hardware”5; they are treated as independent aspects of the computer/robot. No
one would argue that a computer program simulating a fire would lead us to believe that the house is
actually burning, or that a flood simulation will actually drown someone.
4 (Hauser, 1997)
5 (Searle, 1980)
Searle does not reject the possibility that a computer could come to develop understanding, since our
minds are computers (in effect) themselves. What he does not see is how this could be the case with
computers as we understand them; we know computers to be machines which operate on programs
which operate along “formally defined principles.”6 He sees understanding in the human as independent
of the fact his brain is an instantiation of a program, but dependent on his biological make-up. This
biological make-up, combined with adequate conditions and opportunity, grants the ability to produce
actions, understanding, learning, etc. This is what is referred to as “causal power.”7 With this in mind,
Searle postulates that a computer could come to develop consciousness and understanding if and only if
we could replicate the nervous system we possess, since the duplication of the cause would inevitably
mean the duplication of the effects. This is a key refutation of AI, since the proponents of AI opt to argue
that a thorough understanding of brain function is not needed in order to develop AI.
Searle’s last poke at the proponents of AI concerns the definition of terms and how those who
believe in AI see things. Firstly, Searle points to the previously mentioned dualism: AI believers argue
that the mind is independent of the brain and that an understanding of the brain is unnecessary to
developing an artificial mind. He indicates that this is “Cartesian in the sense that it insists that what is
specifically mental about the mind has no intrinsic connection with the actual properties of the brain.”8
This is the technological equivalent of claiming that the mind is about the software, not the hardware. In
this light, however, it is plain to see that a “mind” cannot simply consist of a program on a disc sitting on
the desk. There must be some connection to a material entity (in this case, a computer) in order to run
the program. Searle argues, lastly, that while any machine can instantiate and run a program, doing so would
not be sufficient to produce mental states, which are fundamental to the development of
understanding, action, learning and comprehension.
6 (Searle, 1980)
7 Ibid.
8 Ibid.
Finding an Exit
Margaret Boden, in her work “Escaping from the Chinese Room”, points out that Searle, in his
work, adopts the strictly formalist view: “that programs are merely the manipulation of symbols and the
application of formal rules.”9 This indicates that he is not refuting the belief that a computer does, can
or ever will think; he is pointing out that meaning and understanding cannot derive merely from these
instantiations. The CRA depicts what Boden calls a “question-answering text-analysis program,”10
while in reality Searle is not actually providing “answers”, since he does not understand Chinese and
therefore cannot interpret the input/output data. The argument, Searle points out, is that if a program were
sufficient for understanding symbols (not simply manipulating them), then he would have learned
Chinese.
Searle explains that the mind is developed by the brain, and that the biological make-up of the
brain is central to the mind’s existence. He claims that metal and silicon are not sufficient to support or
develop mind. Boden points to a fallacy in the argument; Searle compares the production of
intentionality to that of photosynthesis. She indicates that in the case of photosynthesis, we clearly
know how the process takes place, why chlorophyll can support it and what it produces;
the same cannot be said of the brain. The very concept of intentionality is, to Boden, still unsettled. There is
not even an indication that intentionality can be definitively identified. What is certain about
intentionality is that it “tends to direct the mind to the world and to possible worlds.”11 She indicates
that there is no theory directed at intentionality which is solid enough to be unproblematic.12
9 (Boden, 1988)
10 Ibid.
11 Ibid.
12 Ibid.
Another fallacy in the argument hinges on conclusion C2, drawn from A4 in the formulation of
the argument noted above. The “causal powers (at least) equivalent to those of brains” in C2 can be
understood as equal in all respects. This being the case, Hauser claims that “A4 to C2 seems guilty of
denying the antecedent in its seeming assumption that if A’s cause B’s, then only A’s can cause B’s.”13
This is the logical equivalent of arguing the following: human brains cause mental states; no computer
has a human brain; therefore, no computers can cause mental states.14 On the flip side, we can see the
equivalent causal powers as being equal in terms of the ability to cause a specified effect. This
formulation gives the “equivalent causal powers” the sense of being equal with respect to causing
minds; it thus begs the question to assume that computers do not have this. There needs to
be some form of clarification of C2 in order for the argument to have validity. Either we can read it
under the first formulation, which invalidates the claim, or under the second formulation, which is too weak to
hold water. Hauser proposes reading “equivalent” in the causally relevant sense.15
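Put schematically, the inference from A4 to C2 that Hauser charges with denying the antecedent has a classically invalid form. The rendering below is my own sketch of that structure (the predicate letters and the constant are not Hauser’s notation):

    % Denying the antecedent: read B(x) as "x is a (human) brain",
    % M(x) as "x has the causal power to produce minds", and c as a computer.
    \[
      \frac{\forall x\,\bigl(B(x) \rightarrow M(x)\bigr) \qquad \neg B(c)}
           {\neg M(c)}
      \qquad \text{(invalid)}
    \]

From “brains cause minds” and “a computer is not a brain”, nothing follows about whether the computer can cause minds; this is exactly the pattern of the brains/mental-states example given above.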
To look at causal relevance in this way, one must note how Searle views this.
“[It] is a bit like saying that if my petrol engine drives my car at seventy-five miles an hour, then any diesel
engine that was capable of doing that would have to have a power output at least equivalent to that of my
petrol engine”16
This analogy is drawn forth to illustrate Searle’s view that there is “mental force” much in the same
sense that there is “physical force.” Adding to this, Searle points out that there is a hierarchy in mental
forces, much as Aristotle saw the virtues.17 Searle takes the failure of an organism to express the
lower mental capabilities as an indication that the higher capabilities are impossible. The lower capabilities
are those of perception and intentionality, and possibly desire and belief. The higher capabilities are
those of deduction, reasoning and mathematical skill. Hauser points to a contradiction in this view.
Namely this: The ability of a calculator to calculate doesn’t presuppose lower capacities like sense
perception, yet on Searle’s view the calculator’s lack of lower powers would lead us to conclude that the
calculator doesn’t actually calculate, even though it plainly does.18
13 (Hauser, 1997, p. 204)
14 Ibid.
15 Ibid.; argument from relevance
16 (Searle, Minds, Brains and Science, 1984)
17 (Searle, Intentionality: an Essay in the Philosophy of Mind, 1983)
Searle is given another fatal blow when it is pointed out that his biological requirement is also
false. Searle claims that the horsepower of his engine is dependent on the firing of pistons within the
engine. This is falsified by the existence of electric engines, which derive their power from elsewhere.
The same can be said for a computer, if we grant Searle the premise that the brain is necessary for the
mind: the fact that a human mind is dependent on a brain does not imply that a computer/artificial mind
would necessitate a brain; the mind could be developed elsewhere by something different.
Lastly, Hauser gives us an argument from the point of view of functionalism:
(A1) Thinking is a species of computation
(A2) Universal Turing Machines can compute any computable function
(A3) Computers are Universal Turing Machines
(C) Computers can think19
This argument works both in favor of and against Searle. Firstly, it grants, in Searle’s favor, that
computers do not presently think. The argument works against Searle by proving that computers can
potentially think.
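Premises A2 and A3 rest on the notion of universality: a general-purpose computer can simulate any Turing machine given its transition table. The following sketch only illustrates that computational premise, not the claim that such simulation amounts to thinking; the simulator interface and the example machine (which appends a ‘1’ to a unary numeral) are invented here for illustration.

    # Minimal Turing machine simulator: one generic program runs any machine
    # described by a transition table, illustrating the universality premise (A2/A3).
    # The interface and the example machine below are invented for illustration.

    def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
        """Run a one-tape Turing machine.
        transitions: (state, symbol) -> (new_state, symbol_to_write, head_move in {-1, 0, +1})."""
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Example machine: move right over the '1's, append one more '1', then halt.
    INCREMENT = {
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("halt", "1", 0),
    }

    print(run_tm(INCREMENT, "111"))   # prints "1111"

Whether running such simulations could ever constitute thinking is, of course, precisely what premise A1 asserts and what Searle denies; the sketch bears only on the computational premises.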
The CRA set out to disprove both the existence of AI and the possibility
of AI. On both counts it fails, though one more so than the other. In terms of actually existing AI, Searle is
right, since there is currently no pure form of Artificial Intelligence;
it is on the possibility of AI that he fails. His inability to steer away from a purely biological requirement is
highly detrimental; he holds that acceptance of AI requires acceptance of a dualism, yet his method of
rejecting the possibility (by affirming that the brain is a necessary condition) is invalid. One cannot
simply infer that what is necessary for the human mind is necessary for an artificial mind. The circular
logic used when Searle claims that there are higher and lower order mental states hinders the
argument; the calculator can perform a higher-order mental capability while lacking lower order
capabilities (and, really, any other capability). The CRA fails to disprove the possibility of
AI; it simply reaffirms the fact that there is no AI presently. Searle falsely assumes that a silicon mind
could never possibly attain the same consciousness as a biological mind.
18 (Hauser, 1997, pp. 205-206)
19 Ibid., p. 211
Bibliography
Boden, M. A. (1990). Escaping from the Chinese Room. In M. A. Boden (Ed.), The Philosophy of Artificial
Intelligence (pp. 89-104). New York: Oxford University Press.
Hauser, L. (1997). Searle's Chinese Box: Debunking the Chinese Room Argument. Minds and Machines,
199-226.
Searle, J. R. (1980). Minds, Brains, and Programs. The Behavioral and Brain Sciences, 417-424.
Searle, J. R. (1983). Intentionality: an Essay in the Philosophy of Mind. New York: Cambridge University
Press.
Searle, J. R. (1984). Minds, Brains and Science. Cambridge: Harvard University Press.