Lund University
Philosophy Department
FTEA 12:5 Consciousness
Exam paper Spring Term 2012
Teacher: Jan Hartman
An Intentional Look at the
中文室
Chinese Room
by
Tania Norell
Lund 29 May 2012
Introduction
-The Mind and Matter Problem
The Philosophy of Mind, which pertains to the study of mental properties and physical
properties in relation to the world, has been around since at least the 4th century BC, with
philosophers like Socrates and Plato. The Mind and Matter Problem, also known as the Body
and Soul Problem, deals with the issues of identifying and understanding the apparent
difference between the mental mind or soul and the physical matter or body. Throughout
history many resolutions to this problem have been presented. For example, Ontological
Dualism claims that the mind and body are two distinct substances. Property Dualism claims
that the mind is independent of the body, because it has its own properties, but it is not a
distinct substance. Idealism claims that the physical can be reduced to the mental, and
Materialist Monism claims that the mental can be reduced to the physical, whereas Neutral
Monism claims that the mental and the physical are two attributes of one "unknown"
substance. As we can see, the Mind and Matter Problem can be approached in a variety of ways.
-Purpose and Demarcation
The purpose of this paper is to present and analyze two articles on the thought experiment
called the Chinese Room, which pertains to the Mind and Matter Problem within Monism. A
common view among contemporary philosophers falls within the realm of Physicalistic
Monism, which includes Identity Theory, Behaviorism, Functionalism and Eliminative
Materialism. In this paper I will not go into the different aspects of these variations of
Monism. My intention is to focus on one of the monist theories, Biological Naturalism,
which entails the view that mental properties and physical properties are not properties of
two separate substances but rather two different properties of the biological brain.
-Material and Method
The materials I will use are two articles. First, John R. Searle, a philosophy professor at the
University of California, Berkeley, whose article "Minds, Brains, and Programs" presents the
Chinese Room. Secondly, Dale Jacquette, a philosophy professor at the University of Bern,
whose article "Fear and Loathing (and other intentional states) in Searle's Chinese Room"
offers criticism of Searle's article. I have chosen to present each article followed by an
analysis of each, separately. I will then conclude with an intentional look at the issues
concerning the Chinese Room.
Material Presentation
-Searle
In the article "Minds, Brains, and Programs" in Behavioral and Brain Sciences from 1980,
John Searle presents his thought experiment called the Chinese Room. In it he offers two
propositions that form the foundation for justifying three conclusions:
1) Intentionality in human beings is a product of causal features of the brain.
2) Instantiating a computer program is never by itself a sufficient condition of intentionality.
3) The explanation of how the brain produces intentionality cannot be that it does it by
instantiating a computer program.
4) Any mechanism capable of producing intentionality must have causal powers equal to those of
the brain.
5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by
designing programs but would have to duplicate the causal powers of the human brain.
(Searle 1980, pp. 417-457).
Searle asks the question "Could a machine think?" and his answer is yes, a machine can think,
if it is a machine that has the causal powers of a biological brain. He makes this point
because he is reacting to the Artificial Intelligence (AI) researchers, who also claim that
machines can think, but who, according to Searle, do not consider the criterion of
intentionality before referring to the machine as mind. Searle seems to want to highlight that
AI is actually not dealing with mind but rather with the matter of the function of programs,
which according to Searle is a physical thinking, an operational function of simulation, and
cannot be considered a mental thinking containing understanding.
The point Searle puts forward with the Chinese Room example is that, yes, a computer
machine can think, behave, function, operate or compute, whatever term one chooses to use,
just as a human brain machine can. This is done by using information for identifying basic
symbols and following instructions for constructing complex symbols correctly, but this does
not automatically imply that the machine understands the meaning of the process. In the
Chinese Room experiment he places himself in a room with a slot for input and a slot for
output. The input he receives is a set of Chinese symbols. In the room he has all the
information needed for Chinese symbol identification and all the instructions for how to
assemble Chinese symbol combinations correctly. He uses this program to compute an output
which is accurate in relation to the input. He points out that the people outside the room will
think the person in the room understands Chinese, but in fact he does not understand any
Chinese at all. What he does understand, on the other hand, is the language that the information
and instructions are provided in, which is English. By this example he hopes to show that, yes,
he is capable of processing an English program, but that does not translate into an understanding
of the Chinese input or output. The computer's thinking is then not equivalent to a mind that has
intentionality, i.e. it does not have the causal power to understand anything beyond the
program, but it does have the capability to function and behave as if it does. Since the
computer's thinking is only a simulation of the thinking process it cannot be claimed to have a
mind that understands. Searle clearly states that "the computer understanding is not just
partial or incomplete; it is zero" (Searle 1980, pp. 417-457).
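Searle's setup can be made concrete with a small illustration. The following sketch is a toy
invented for this paper, not code from Searle's article; the rule table and phrases are my own.
It implements the room as a pure lookup: a contextually appropriate Chinese reply is produced
for a Chinese input by matching shapes alone, and nothing in the program represents what any
symbol means.

    # A toy "Chinese Room": replies are produced by matching input symbol
    # strings against a rule book, never by interpreting their meaning.
    # The rules and phrases are invented purely for this illustration.
    RULES = {
        "你好吗": "我很好",          # "How are you?" -> "I am fine"
        "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiaoming"
    }

    def chinese_room(input_symbols: str) -> str:
        # The match below is purely formal (syntax): strings are compared
        # character for character, as uninterpreted shapes. Nothing in the
        # program encodes what any symbol refers to (semantics).
        return RULES.get(input_symbols, "请再说一遍")  # fallback: "please say it again"

    print(chinese_room("你好吗"))  # prints 我很好, with zero understanding

Every Chinese string here could be swapped for an arbitrary token without changing the
program's logic, which is precisely Searle's claim: the room has syntax but no semantics, and
such understanding as there is resides in whoever wrote the rule table.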
In his article Searle also replies to six criticisms by referring to his two propositions, which
ultimately bring him to the same three conclusions each time. No new ideas are presented, but
the question "Could a machine think?", to which he answered yes at the beginning of the
article, is reflected upon with the questions: "But could something think, understand, and so
on solely in virtue of being a computer with the right sort of program? Could instantiating a
program, the right program of course, by itself be a sufficient condition of understanding?"
(Searle 1980, pp. 417-457). The question is then: could a machine understand? And Searle's
answer to that question is no. He then concludes the article by explaining his answer, again
using his two propositions, which ultimately highlights the gap between syntax and semantics.
Searle explains that:
Because the formal symbol manipulations themselves don't have any intentionality; they are quite
meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything.
In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers
appear to have is solely in the minds of those who program them and those who use them, those
who send in the input and those who interpret the output (Searle 1980, pp. 417-457).
Analysis
-Searle
Searle's Chinese Room is offered in reaction to AI, and when it comes to AI Searle makes a
distinction between weak AI and strong AI. He agrees with weak AI, which implies that a
computer machine's simulation of human cognitive capacities is a useful tool for the purpose
of understanding the processes of a mind's capability for rational thinking. He does not agree
with strong AI, which claims that this tool is equivalent to a mind because it is able to
understand. Searle claims to exemplify through his thought experiment that it is incorrect to
think that an accurately programmed machine is sufficient for providing intentionality, which
is the ability to have cognitive states that exceed the programming, i.e. intentionality goes
beyond mere information; it demands understanding.
Searle mentions that the definition of the term understand can be argued about, if you so wish,
but the brute fact is that a machine without the brain's intentionality understands nothing. The
reason we use the word understand in relation to mechanical machines is that they are an
extension of our brains' intentional power, i.e. we understand the input and output, but the
machine does not. The machine is only capable of computing the program, and if one wants to
claim that the machine understands the program, this would be a faulty use of the word
understand. With the Chinese Room argument Searle hopes to show that what the program is
to the computer is not equivalent to what the mind is to the brain. His reason is that even if his
brain memorized the English program concerning the Chinese symbols he would still not
understand Chinese. Searle is quite clear on what he has to say in regard to strong AI:
If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that
are genuinely mental from those that are not. It must be able to distinguish the principles on which
the mind works from those on which nonmental systems work; otherwise it will offer us no
explanations of what is specifically mental about the mental (Searle 1980, pp. 417-457).
Searle clearly delineates what this specifically mental trait is. He claims that the reason strong
AI researchers think that computers have mind is that cognitive science delineates the mental
as information processing, and a computer can be argued to do exactly that: process
information. Searle points out that this correlation is incorrect, since a computer manipulates
specific program information; it does not reflect on information about the world. The
difference is that the computer has the capability of syntax processing, which is an operational
function that provides results, whereas the mind has the ability of semantic processing, which
is a reflective function that results in understanding. Searle suggests that if strong AI did not
correlate operational behavior with intentionality then there would not be a problem, because
they would realize their mistake. He goes on to accuse strong AI of being dualistic, since to
him the claim that mechanical machines can have mind exemplifies a separation of mind from
its dependence on the brain. Searle views himself as a monist: yes, a machine can be claimed
to be capable of thinking, but it is a mind only if it is a machine with the biological causal
power to provide intentionality in relation to the world.
Material Presentation
-Jacquette
In the article "Fear and Loathing (and other intentional states) in Searle's Chinese Room" in
Philosophical Psychology from 1990, Dale Jacquette argues that Searle's Chinese Room
experiment poses no threat to Artificial Intelligence.
The fact that the Chinese Room by hypothesis satisfactorily passes the Turing Test of machine
intelligence, but the homuncular agent by stipulation does not understand Chinese, is taken by
Searle as demonstrating that no pure syntax processor is capable of achieving true intelligence or
understanding, or of producing genuine intrinsic intentionality necessary for psychological states
(Jacquette 1990, pp. 287-305).
Jacquette points out that Searle's conclusions, which claim to refute AI, also refute other
psychological theories such as functionalism, computationalism and cognitivism. He then
offers three criticisms of Searle's confident refutative conclusions.
1) The Chinese Room is irrelevant in the refutation of the Turing Test of machine intelligence and
any of the functionalist-computationalist-cognitivist family of philosophical-psychological theories.
2) The concept of the right causal powers required to sustain the product of genuine intrinsic
intentionality is unintelligible except as the right microlevel input-output functionalities, a model
supposedly invalidated as inadequate by the Chinese Room counterexample.
3) The causal-biological naturalization of intentionality in terms of phenomena caused by and
realized in neural microstructures which Searle attempts to advance is either reducible to
input-output functionalities, again supposedly invalidated by the Chinese Room, or else fails to
naturalize the concept by lack of analogy with distinctively nonintentional nomically irreducible
phenomena caused by and realized in a nonneural material microstructure (Jacquette 1990,
pp. 287-305).
The problem that Jacquette highlights is that there is a conflict between Searle's naturalization
of intentionality as a causal-biological phenomenon and the thesis that it is
causally-physically-mechanically irreducible. Searle's defense against all criticism is that the
Chinese Room example clearly shows that computers are syntactically capable but not
semantically able, and that this is the inherent difference between programs and minds.
Jacquette clarifies that this is not the issue; the criticism pertains rather to the sufficiency of
the Chinese Room as a valid enough justification for these conclusions:
Searle's argument states:
Programs are syntactical.
Minds have semantics.
Syntax by itself is neither sufficient for nor constitutive of semantics. Therefore,
Programs by themselves are not minds (Jacquette 1990, pp. 287-305).
Jacquette's refined argument states:
Programs are syntactical and at most only derivatively semantical.
Minds are intrinsically semantical.
Syntax and derivative semantics by themselves are neither sufficient for nor constitutive of
intrinsic semantics. Therefore,
Programs by themselves are not minds (Jacquette 1990, pp. 287-305).
Jacquette argues that pure syntax is an oxymoron and that therefore a program cannot be purely
syntactical. Semantics is not something a mind has; rather, minds are intrinsically semantical.
The issue that Jacquette conveys through his criticism is that:
The problem is that when the modified Chinese Room gives up the non-Chinese-speaking
symbol-swapping homuncular prisoner as a single locus of program execution and control, and is
redesigned as a micro-functionally isomorphic simulation of natural intelligence, it is no longer
evident that the system lacks intrinsic intentionality (Jacquette 1990, pp. 287-305).
Analysis
-Jacquette
Jacquette's criticism in his article "Fear and Loathing" picks up where Searle leaves off in his
article "Minds, Brains, and Programs". That is, he highlights Searle's perceived gap between
syntax and semantics. What Jacquette seems to want to bring to the table is the possibility that
there might be semantics involved with the syntax, depending on whether it is framed on a
macro- or micro-level. The only fact Searle has to show that the Chinese Room "computer"
does not understand Chinese is the fact that he, as the only figuring agent, does not understand
Chinese. He does, on the other hand, understand English, and that is why he is capable of
following the program: it is in English. But one could ask whether the firing neurons in his
brain "machine" understand English, or whether they are doing the understanding
syntactically, so to speak. How then does one distinguish between syntax and semantics?
Depending on the frame, that of a macro-level homuncular intelligent agent or that of
micro-level isomorphic intelligence, the distinction between syntax and semantics is not as
clear-cut as Searle would like to have it. "The problem of macro- versus micro-level program
design suggests that Searle has no prior justification for the claim that such a program could
not as a matter of fact duplicate the causal powers of the brain minimally sufficient to produce
intentionality" (Jacquette 1990, pp. 287-305).
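Jacquette's macro- versus micro-level point can be illustrated in the same toy terms as the
earlier lookup sketch. The following code is again an invented illustration, not anything from
either article: the exact same input-output behavior as a single rule-book lookup is re-realized
as a web of micro-level nodes, each of which knows only a single-character transition. At this
level there is no longer one homuncular agent who follows the instructions without
understanding, which is why, on Jacquette's view, the intuition that the system obviously
lacks intrinsic intentionality loses its grip.

    # The same behavior as the earlier rule-book lookup, re-realized as a
    # web of micro-level nodes (a trie). Each node performs only a local,
    # meaning-free, single-character state transition; no single component
    # plays the rule-following homunculus.
    class Node:
        def __init__(self):
            self.next = {}      # one-character transitions to other nodes
            self.output = None  # reply emitted if an input ends here

    def build(rules):
        root = Node()
        for phrase, reply in rules.items():
            node = root
            for ch in phrase:   # spread each rule across many tiny nodes
                node = node.next.setdefault(ch, Node())
            node.output = reply
        return root

    def run(root, symbols):
        node = root
        for ch in symbols:
            node = node.next.get(ch)
            if node is None:
                return "请再说一遍"  # fallback: "please say it again"
        return node.output or "请再说一遍"

    room = build({"你好吗": "我很好", "你叫什么名字": "我叫小明"})
    print(run(room, "你好吗"))  # same reply as before, but no single agent

Seen from outside, the behavior of the two designs is identical; only the frame has changed,
which is exactly Jacquette's point about macro- versus micro-level program design.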
According to Jacquette, the only way Searle can hold on to the syntax-semantics gap is by
keeping the Chinese Room example in a macro-level homunculus frame. So Jacquette does not
disagree with Searle on the macro-level, where the conclusion is that programs do not have
minds. Rather, he highlights that the reason programs do not have minds is that, according to
Searle's propositions, they are only derivatively semantic and therefore not sufficient for
intrinsic semantics, which would be necessary for programs to count as having minds, because
then they would be mind.
Searle does not appreciate this widening of the frame: in response, Searle maintains that
Jacquette has "some very deep misunderstandings" (Jacquette 1990, pp. 287-305), not only of
his arguments but of the nature of intentionality. The main issue between Jacquette and Searle
is then the ontology of intentionality, because even if Jacquette, to a degree, agrees with what
Searle's Chinese Room shows regarding the distinction between syntax and semantics, he
does not agree that it shows that intentionality is an exclusively biological causal power
dependent on the brain. Jacquette writes that "even if only biological systems were to exhibit
intrinsic intentionality, a claim which cannot be adduced as obvious without begging the
question against mechanist philosophy of the mind, that still would not qualify intrinsic
intentionality as biological" (Jacquette 1990, pp. 287-305). Searle's approach is thus that it is
obvious, and Jacquette's approach is that that just won't do. The fact that intentionality is
exhibited by biological organisms does not automatically translate into the fact that
intentionality's essence is biological.
Searle's response to this is to accuse Jacquette of being a dualist, since he interprets
Jacquette's argument as implying that if intentionality is not necessarily dependent on the
biological, then Jacquette holds the view that intentionality is abstract and separate from
biological phenomena. Jacquette concludes by simply not taking Searle's fear and loathing on
board, and remarks that Searle neither contributes new ideas nor restates previous positions in
a way that reinforces the Chinese Room adequately enough to meet the criticism.
Conclusion
-Summary
In this paper I have shown that Searle's Chinese Room is a thought experiment directed
mainly against the Artificial Intelligence claim that a machine that is capable of thinking is
also able to understand and is therefore equivalent to mind. Searle unequivocally argues that
his Chinese Room sufficiently justifies his two propositions, which claim that programs are
syntactical and minds have semantics, and that therefore his conclusion that machines with
programs do not have mind is obvious. He also concludes that only biological brains have the
causal power necessary for intentionality and thus have mind.
Jacquette criticizes Searle's argument for not sufficiently justifying a universally applicable
syntax-semantics gap: just because biological brains are capable of semantics does not
automatically translate into the fact that the essence of intentionality is necessarily biological.
Mind, in this case, has been portrayed as equivalent to intentionality, and matter as equivalent
to a computer. According to Searle the brain is a machine that has mind, and according to AI a
computer does not necessarily have mind, but it is mind. The problem Jacquette wants to
discuss is then what mind is, but Searle has already made up his mind, so to speak, with the
conclusion that the mind is the causal power of a biological brain.
AI implies that if a machine is programmed sufficiently then it will compute intelligently and
understand what it is computing; the Chinese Room exemplifies that programming is
insufficient for understanding because programs are syntactical. Only minds have the ability
to understand, and therefore programs are not minds. Jacquette agrees that programs do not
have minds but suggests that programs could be claimed to be mind. But Searle stands by the
claim that a machine computing a program is not thinking, if the definition of thinking entails
understanding what is thought. It then seems as if the mind and matter issue here has more to
do with consciousness than intelligence. Instead of intelligence being a mental property it
becomes a physical property, so to speak. The Mind and Matter Problem thus pertains to the
definition of mind.
-Reflections
The question of what is necessary in order to be able to understand is what divides all the
different resolutions offered to the Mind and Matter Problem, and we have, as far as we know,
been asking this question since the 4th century BC. I agree with Searle's conclusions when
framed as they are in the Chinese Room, and I agree with Jacquette's point that whatever is
seemingly necessary for intentionality might not necessarily be dependent on a biological
brain. Is causal power something we have, or something we are, or something we do?
Paradoxically, Searle considers himself a monist and accuses Jacquette of dualism, when one
could wonder whether Searle's proposition that the mind has semantics isn't actually more of
a dualistic view than Jacquette's proposition that minds are semantical. What is the causal
power that Searle claims the brain has, if an exact mechanical copy of its biological firing
neurons does not qualify as mind? The question of what mind is, in contrast to what has mind,
is therefore the major question still left up in the air.
References

Jacquette, Dale (1990) "Fear and Loathing (and other intentional states) in Searle's Chinese
Room", Philosophical Psychology, Vol. 3, Issue 2/3 (Jun/Sep 1990), pp. 287-305.

Searle, John R. (1980) "Minds, Brains, and Programs", Behavioral and Brain Sciences 3 (3),
pp. 417-457. Cambridge University Press.