Instrumentalism, Semi-Realism and Causality
Comparing Dennett's view of the function of folk psychology
with the Theory-Theory
Kevin B. Reuter, 28/07/06
Daniel Dennett has developed a theory of intentional states which avoids the threats
of eliminativism. According to him, the main purpose of intentional states is to predict the
behaviour of complex systems. Mental states have an objective reality, to be seen in
patterns of behaviour, but do not have any real causal power themselves. In this essay I
want to briefly outline Dennett's account, show how it developed from being purely
instrumentalistic to semi-realistic, and compare it with the theory-theory, the
standard theory in the field. I will argue that by denying that intentional states are
causally efficacious, his theory faces too many difficulties to be considered a valuable alternative.
Introducing the Players
Beliefs, desires and other intentional states play an important part in our daily life, as we
ascribe them to others and ourselves. The theory-theory states that these intentional
states are properly characterized by the framework of folk psychology (FP), and moreover
that FP is ontologically committed to real states. As an example of a typical rule of FP:
imagine Stuart desires to read this essay to its end, and a phone call from Sharon asking
him to the cinema would prevent him from doing so; then he will very likely desire that
Sharon does not call.
Paul Churchland has argued in a now renowned article that FP is indeed an empirical
theory, but a bad one, and that it should and will therefore be eliminated in the near future.1
Ramsey et al. have tried to show that connectionist networks pose a further threat to the
validity of the theory-theory, as connectionist networks face difficulties in explaining
propositional modularity.2 Stephen Stich has summarized these points by arguing that in
the case of a successful reduction, a realistic account (be it a rules-and-representations or a
connectionist theory) will win the battle; but it is more likely that a reduction is not
possible, which will eventually lead to an acceptance of the eliminativist stand. We have
now arrived at the point where Dennett's theory steps in.
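Why connectionist networks threaten propositional modularity can be made vivid with a toy sketch of my own (a Hopfield-style Hebbian memory, not Ramsey et al.'s actual model): when two "propositions" are stored as distributed patterns in one weight matrix, every weight carries information about both, so no functionally discrete component of the network corresponds to a single belief.

```python
# Toy illustration (my construction, not Ramsey et al.'s actual model):
# two "propositions" stored as bipolar patterns in a single Hebbian
# (Hopfield-style) weight matrix.
p1 = [1, -1, 1, -1, 1, -1]   # stands in for e.g. "the hotel is closed"
p2 = [1, 1, -1, -1, 1, 1]    # stands in for e.g. "the rook is in danger"

n = len(p1)
# Hebbian superposition: every off-diagonal weight mixes BOTH patterns.
W = [[(p1[i] * p1[j] + p2[i] * p2[j]) if i != j else 0
      for j in range(n)] for i in range(n)]

def recall(pattern):
    # One synchronous update: each unit takes the sign of its weighted input.
    return [1 if sum(W[i][j] * pattern[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

print(recall(p1) == p1)  # True: the network "remembers" proposition 1
print(recall(p2) == p2)  # True: and proposition 2, from the very same weights
# No subset of weights encodes p1 alone: storage is fully distributed, so
# there is no functionally discrete state corresponding to one proposition.
```

Both patterns are stable states of the same undifferentiated weight matrix; this is the sense in which connectionist storage resists the "functionally discrete" condition of propositional modularity.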
Dennett's theory and his move from instrumentalism to semi-realism
In a series of articles Dennett has developed an alternative account to the theory-theory,
one that is instrumentalistic rather than empirical, so that FP cannot be falsified by
neuroscience. Dennett uses the concept of stances and distinguishes three different kinds
from which to explain the behaviour of a system - the physical, the functional and the
intentional stance. If we explain the behaviour of a system from the intentional stance, we
treat the system as a rational agent, which acts as it ought to act according to the
information it is given - a clearly normative approach. Intentional states are useful
concepts in predicting behaviour, but are not causally efficacious themselves.
How attractive is Dennett's theory? He surely has a point in stressing the great
explanatory power of attributing intentional states to humans, some animals and other
complex systems. Furthermore, by construing his theory instrumentally, it becomes
invulnerable to Churchland's arguments, as they only attack a properly empirical theory.
But is this all there is to intentional states?

1 His main arguments are:
A. FP has made no significant progress within the last 2500 years.
B. FP has contributed hardly anything to the explanation of phenomena like memory, mental illness, etc.
C. There is very little hope that the concepts of FP can be embedded within the theories of other sciences.
2 Propositional modularity means that mental states are functionally discrete, causally active and semantically
interpretable.
In earlier papers Dennett terms a system a "true believer" if it is practical to use the
intentional stance towards it - for example, because predicting its behaviour from the
physical stance would be computationally too costly, while the intentional stance is far
more efficient. But the use of "practical" renders Dennett's theory relativistic: it is
possible that we become better at using lower levels of prediction (e.g. a certain level of
the functional stance), or that, with the help of "Super Martians", we might be able to
predict other humans' behaviour even from the physical stance. Therefore Dennett has
modified his view from being purely instrumentalistic to what can be called semi-realistic.
In 'True Believers' Dennett states:
It is important to recognize the objective reality of the intentional patterns discernible in the
activities of intelligent creatures.
Dennett's theory thus takes a descriptive element on board. In 'Real Patterns' he compares
the metaphysical status of intentional states to that of the parallelogram of forces. They are
not real in the sense of theoretically posited entities (illata), but of calculation-bound
entities (abstracta).
In what follows I will use two arguments against Dennett's semi-realistic theory. The
first was proposed by Stephen Stich as a response to Dennett's "The Intentional
Stance"; the second is an argument about the likelihood of a correspondence between at
least some intentional and functional/physical states. A top-down and a bottom-up
example will be used.
The commonsense causal efficacy of intentional states
Stich objects to Dennett's comparison of intentional states and scientific constructs like
the parallelogram of forces.
I take commonsense discourse about beliefs and desires at face value. [...They] are
conceived by folk psychology to have both causes and effects. [... But] only real entities
(illata) can have causes and effects.
There is indeed a difference between scientific tools like the parallelogram of forces and
intentional states. The parallelogram serves as a tool for predicting the behaviour of
bodies but has no causal power itself; intentional states, by contrast, are attributed to
others to predict their behaviour precisely because they are causally active.
Lynne Rudder Baker has pointed out that in 'Elbow Room' Dennett assumes beliefs to
have some causal efficacy. But as soon as Dennett concedes that intentional states are
causally active, they become ontologically real. How could Dennett possibly meet this
obvious difficulty? Baker offers the following solution:
"This difficulty would be removed if Dennett were also an instrumentalist about causation [...
But] to be an instrumentalist about causation would leave one very little about which to be a
realist."
Of course Dennett does not have to accept this consequence. He could claim that beliefs
indeed have no causal power, and that ascribing causal power to intentional states is
simply part of the predictive strategy by which we explain behaviour, without those states
being genuinely causally efficacious. To me this seems very unconvincing, and I want to
extend an argument used by Churchland to show how implausible it is. Churchland made
an interesting comparison between the status of intentional states and the theoretical
entities of alchemy.
"It is an objective fact that much of the behaviour of metals and ores is predictable in terms
of the alchemical essences [...]. And yet there are no alchemical essences."
With this move an alchemist could salvage his false theory by insisting that its entities are
abstracta rather than illata. But this surely would be an "outrage against reason and
truth". In my opinion, the reason why the predictive strategy fails in the cases of alchemy
and of intentional states is that we treat the entities not as abstracta but as illata with
causal power. As soon as we propose theoretical entities to be causally active, they enter
the realm of illata and become subject to possible falsification. On the one hand there are
centres of gravity, statistical averages, etc., which are used as predictive tools but do not
have causal power themselves. On the other hand there are entities like alchemical
essences and intentional states, which are predictive tools too, but which also have an
empirical status because they are taken to be causally active.
Two possible correspondence examples3
The Causal Power of Active Knowing
If you look up the word "belief" in the Oxford Dictionary you will read, among other more
religion-related definitions: a firmly held opinion. The word "to know", in comparison,
means "to be sure of something". "Belief" is ambiguous, as it can mean the mental state
itself, but also that which is believed (what it refers to). There is no corresponding count
noun in English for an individual state of knowledge, so I adopt the word 'knowing' for my
purposes. We of course also use the word belief for attitudes in which we want to express
a kind of uncertainty about our knowing, but in many cases there is no difference at all
between a knowing and a belief. This can easily be seen from the Inspector Clouseau
example in Ramsey et al.: if the inspector believes the hotel is closed during the winter,
then we could equally say that he knows that the hotel is closed. As things stand, I take
knowings to be a subset of beliefs, namely those beliefs held with near absolute certainty.
Moreover, beliefs and knowings have the same metaphysical status: they both fit the
conditions of propositional modularity (see above).
I claim that it is much harder to accept Dennett's instrumental semi-realism if we adopt
the intentional stance for knowings instead of beliefs. Knowings seem to be active
memory states, and there is very little doubt that knowledge is physically represented
and in fact has causal power.
Let me exemplify this:
I am sitting in my room and the window is open. There is a cold breeze coming in. I
know/believe that my window is open, so I go to the window and close it.
Dennett would argue that my knowing of the window being open is not causally active:
an observer would ascribe the knowing to me only in order to explain my behaviour.
Moreover, from a subjective standpoint I would need to concede that I do not close the
window because I know the window is open, but for some other reason. Beliefs and
desires are inherently more difficult to analyze, not to mention fears and other
emotionally laden states. Such emotionally laden states might be eliminated in the future,
but the sheer causal power of 'knowings' is a crucial argument against Dennett's theory.
Do Chess Computers have Intentional States?
Dennett likes one sort of intentional system in particular: chess computers. He states
that we often treat them as intentional systems - e.g. the computer believes its rook is in
serious danger and makes a retreating move. In Dennett's terms:

Deep Blue, like many other computers equipped with AI programs, is what I call an
intentional system: its behavior is predictable and explainable by attributing to it beliefs and
desires -- "cognitive states" and "motivational states" -- and the rationality required to figure
out what it ought to do in the light of those beliefs and desires.

3 The following arguments can also be used, though to a lesser extent, against eliminativism itself. This should be
obvious, as both semi-realism and eliminativism rely on intentional states being non-reducible.
At the same time, Dennett is convinced that there are no real internal workings which
correspond to these cognitive states. But are there really no corresponding physical
states? Deep Blue is not only calculating moves to a certain depth, but also evaluating
the possible moves (though the evaluation is, of course, itself calculated). And a rook
being in danger of capture is definitely part of that evaluation process. There is a
functionally discrete state in Deep Blue which can be interpreted as "move the rook back,
as my evaluation matrix assigns many minus points (the rook is in danger)". And this
state is causally very active.
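This point can be made concrete with a deliberately simplified sketch (illustrative only - Deep Blue's actual evaluation function was far more complex and partly implemented in hardware): a discrete "rook attacked" condition enters the evaluation as a penalty term and thereby directly determines which move is chosen.

```python
# Illustrative sketch only -- not Deep Blue's actual evaluation function.
# The point: "my rook is attacked" is a functionally discrete term in the
# evaluation, and it is causally active -- it changes which move is chosen.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(material_balance, rook_attacked):
    """Score a position from the machine's point of view (higher is better)."""
    score = material_balance
    if rook_attacked:                  # the discrete "rook in danger" state
        score -= PIECE_VALUE["R"]      # expect to lose the rook if it stays put
    return score

# Two candidate continuations from the same position:
score_stay = evaluate(material_balance=0, rook_attacked=True)      # -5
score_retreat = evaluate(material_balance=0, rook_attacked=False)  # 0

best_move = "retreat" if score_retreat > score_stay else "stay"
print(best_move)  # retreat
```

Flipping the `rook_attacked` flag flips the decision; that is exactly the sense in which such an internal state is causally active rather than a mere predictive abstraction.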
Dennett would respond by saying that even if there is a functionally discrete, causally
active state, it is not semantically interpretable. The issue at hand boils down to John
Searle's Chinese Room thought experiment. Searle has argued that strong Artificial
Intelligence cannot fulfil its task,4 as computers are merely syntactically structured and
are not able to produce semantically interpretable intentional states. Dennett himself
attacked Searle's interpretation of the Chinese Room argument, and the Churchlands
have mounted a serious attack using connectionist networks.5
The point I want to make is this: even if we cannot yet map every intentional state we
attribute to Deep Blue onto a functional state, the conditional argument still holds.
If strong AI succeeds, then Dennett's semi-realistic view is incorrect, for then there are
intentional states which have causal power.
If this happens, then the only way to secure Dennett's semi-realism is to claim that at
least some intentional states need not correspond to physical or functional states.6
With this weaker assumption Dennett might be right, but then his theory is not as
forceful as before.
Conclusion
In this essay I have shown that Dennett's theory hinges on the claim that intentional
states cannot be causally active. I have then analyzed the causality issue on two
different levels: firstly, Stich's objection that we use intentional states as if they were
causally active; and secondly, that at least some intentional states - like knowings, or the
states we ascribe to chess computers such as Deep Blue - are likely to have a
correspondence on the functional level and thus to be causally efficacious. Therefore it
seems that the theory-theory, in granting that FP picks out ontologically real states, is
not such a bad theory at all. On a weaker reading, Dennett's theory is nevertheless very
enlightening, as it explains why and how we ascribe intentional states to complex systems,
and how we might continue to use these states after many of them have been eliminated
by neuroscience.
4 Strong AI claims that it will be possible to build robots to which we can successfully ascribe 'thoughts' and
understanding.
5 Jerry Fodor is convinced that his representational theory brings about intentionality. Physicists like Roger
Penrose, in contrast, are certain that a new theory of physics is necessary to explain semantically interpretable
understanding.
6 Ansgar Beckermann has pointed out that Dennett has become increasingly cautious about how widely
applicable his theory in fact is.
Bibliography:
1. Baker, L.R., 1989, 'Instrumental Intentionality', Journal of Philosophy, vol.56, no.2,
pp.303-316
2. Beckermann, A., 2001, 'Analytische Einführung in die Philosophie des Geistes',
Berlin, de Gruyter Verlag
3. Churchland, P.M., 1981, 'Eliminative Materialism and the Propositional Attitudes',
Journal of Philosophy, vol.78, no.2, pp.67-90
4. Churchland, P.M., 1988, 'The Ontological Status of Mental States: nailing folk
psychology to its perch', Behavioural and Brain Sciences, vol.11, no.3, pp.507-8
5. Churchland, Paul, Churchland, Patricia, 1990, 'Could a machine think?', Scientific
American, vol.262, no.1, pp.32-39
6. Dennett, D., 1980, 'The milk of human intentionality', Behavioral and Brain
Sciences, no.3, pp.429-430
7. Dennett, D., 1984, 'Elbow Room: The Varieties of Free Will Worth Wanting',
Cambridge, Mass., MIT Press
8. Dennett, D., 1987, 'True Believers: The Intentional Strategy and why it works', in
The Intentional Stance, Cambridge: MIT Press, pp.13-35
9. Dennett, D., 1991, 'Real Patterns', Journal of Philosophy, vol.88, no.1, pp.27-51
10. Frankish, K., 2005, The Open University - The Postgraduate Foundation Module in
Philosophy, A850 Study Guide, Self-Ownership, pp.152-232
11. Penrose, R., 1994, Shadows of the Mind, Oxford University Press
12. Ramsey, W., Stich, S., Garon, J., 1996, 'Connectionism, Eliminativism and the Future
of Folk Psychology', in Deconstructing the Mind, Oxford University Press, pp.91-114
13. Reuter, K.B., 2005, 'Hebbian Unlearning in Networks of Spiking Neurons with Low
Activity', Dissertation Thesis, Technical University of Munich
14. Searle, J., 1980, 'Minds, Brains, and Programs', Behavioral and Brain Sciences, 3,
pp.417-424
15. Stich, S.P., 1985, 'The Future of Folk Psychology', in From Folk Psychology to
Cognitive Science, MIT Press, pp.242-6, p.253
16. Stich, S.P., 1988, 'Connectionism, Realism and Realism', Behavioural and Brain
Sciences, vol.11, no.3, pp.531-2