Meaning in Artificial Agents: The Symbol
Grounding Problem Revisited
Dairon Rodríguez, Jorge Hermosillo & Bruno Lara
Facultad de Ciencias, UAEM, Cuernavaca, Mexico
Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science
ISSN 0924-6495
DOI 10.1007/s11023-011-9263-x
Received: 19 April 2011 / Accepted: 23 November 2011
© Springer Science+Business Media B.V. 2011
Abstract The Chinese room argument has presented a persistent headache in the
search for Artificial Intelligence. Since it first appeared in the literature, various
interpretations have been made, attempting to understand the problems posed by
this thought experiment. Throughout all this time, some researchers in the Artificial
Intelligence community have seen Symbol Grounding as proposed by Harnad as a
solution to the Chinese room argument. The main thesis in this paper is that
although related, these two issues present different problems in the framework
presented by Harnad himself. The work presented here attempts to shed some light
on the relationship between John Searle’s intentionality notion and Harnad’s
Symbol Grounding Problem.
Keywords: Chinese room argument · Symbol grounding problem
Introduction
Since its conception in the fifties, Artificial Intelligence as a scientific field has tried
to emulate the behaviour of the human brain, on the assumption that computers and
brains are information processing machines.
A traditional and widely accepted view of cognition explains behaviour as a
product of a direct, unidirectional line of information processing. Sensory inputs
create a representation and according to this a motor action is performed; actions are
regarded as reactions, responses to stimuli. Most of the observed behaviour is
considered a consequence of an innate stimulus-response mechanism that is
available to the individual (Witkowski 2002). Known as the information processing
metaphor, or computationalism, this framework treats perceptual processes as
modules that receive, modify and then pass on the information available from the
environment to modules in charge of motor control.
During the first decades of research, the goals of Artificial Intelligence seemed
very clear: to achieve artificial systems capable of handling the right sort of
representations by means of the right set of rules. The cognitive sciences found
themselves following these same assumptions on the functioning of the brain and
the mind. All these ideas were the principles of cognitivism, which asserts that the
central functions of the mind, of thinking, can be accounted for in terms of the
manipulation of symbols according to explicit rules. Cognitivism has, in turn, three
elements of note: representation, formalism, and rule-based transformation
(Anderson 2003). In Artificial Intelligence, the main areas of research could be
framed in what is now known as GOFAI (Good Old-Fashioned Artificial Intelligence),
which loosely follows the ideas of cognitivism.
In the last three decades, a major change of direction affected the cognitive
sciences, including Artificial Intelligence. A cornerstone of this turnabout was
John Searle's seminal paper (Searle 1980), where he severely criticized the
computationalist approach to cognition. Several authors in the cognitive sciences
community attempted to reply to Searle's argument, which states that
computer programs are not enough to endow artificial systems with minds. However,
this thought experiment raised some questions that still remain open. Later, Stevan
Harnad brought out the now highly cited Symbol Grounding Problem (SGP) (Harnad
1990), where he sets out a fundamental issue for achieving intelligent machines.
In this paper we address the following question: what are the links between the
fundamental issues that Searle and Harnad raise? After a thorough review of the
papers presented by Harnad throughout his career, we try to shed some light on the
connection between Searle’s intentionality notion and Harnad’s SGP. We argue that
Harnad never intended his SGP, or the solutions he proposed for it, to address
Searle's Chinese room argument directly, as some works in the literature have
apparently assumed. This discussion has received little attention in the literature.
We believe that after nearly 30 years of research, it is important to consider the
connection between these two relevant contributions, as it could give a fresh
direction in the search for Artificial Intelligence.
In order to bring out our thesis, we analyse a sample of works taken from a very
significant and thorough historical review of the SGP after fifteen years of research
by the Artificial Intelligence community, presented by Taddeo and Floridi (2005).
We believe that this review encompasses important strategies and investigations
into the SGP and the search for Artificial Intelligence. However, our analysis is
carried out from a perspective different from Taddeo and Floridi's, as they do not
attempt to draw a direct connection between the two authors we are concerned with.
In the next Section we present a brief outline of both the Chinese Room Argument
and the SGP. We continue with a presentation of our main thesis by reviewing
Harnad’s work. We then present some misinterpretations we have noticed in the
literature regarding Searle’s and Harnad’s relevant contributions, since we believe it
is important to highlight the problems and consequences of making a wrong
connection between them. Finally, in the last Section, we present our conclusions.
The Chinese Room Argument
In his 1980 seminal paper (Searle 1980), Searle tries to answer one of the main
questions the Artificial Intelligence community has been pondering throughout its
history: Can a machine think? It is of vital importance to mention the context and
prevailing theories when this question was posed.
As discussed in the previous section, research in the Cognitive Sciences was
highly influenced by computationalism and the affirmation that brains were similar
to computers in that they were only information processing machines. Conversely,
for people working in Artificial Intelligence, a computer was seen as a brain waiting
for the right program to become an intelligent machine. The origin of this idea lies in
Turing's (1950) paper, where he states that a computer can be considered intelligent
if it is indistinguishable in its verbal behavior from a human being. It
follows from this argument that it would be possible to have an intelligent machine;
the only missing element is the right program, a program designed to hold a
conversation in a human manner.
However, Searle argued against that possibility by means of the Chinese room
argument. Searle’s argument centers on a thought experiment in which he himself,
who knows only English, sits in a room following a set of syntactic rules written in
English for manipulating strings of Chinese characters. Following these rules he is
able to answer a few questions asked in Chinese so that for those outside the room it
appears as if Searle understands Chinese. However, he only pays attention to formal
features of manipulated symbols, overlooking the meaning that could be associated
with them. Therefore, the aforementioned set of rules enables Searle to pass the
Turing Test even though he does not understand a single word of Chinese.
The argument is intended to show that while suitably programmed computers
may appear to converse in natural language, they are not capable of grasping the
semantic contents of their own symbolic structures. Searle argues that the thought
experiment underscores the fact that computers merely use syntactic rules to
manipulate symbol strings, but have no understanding. Searle goes on to say:
The point of the argument is this: if the man in the room does not understand
Chinese on the basis of implementing the appropriate program for understanding
Chinese then neither does any other digital computer solely on that basis, because
no computer has anything the man in the room does not have (Searle 1980).
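To make the purely syntactic character of the room vivid, it can help to reduce the rulebook to its bare logical form: a lookup from input strings to output strings, applied with no access to what any string means. The following is our own minimal sketch, not Searle's or the present paper's; every string and rule in it is an invented placeholder.

```python
# A toy "Chinese room": the operator follows purely syntactic rules,
# here reduced to a lookup table from input strings to output strings.
# Nothing in the procedure requires, or provides, any grasp of meaning.
# All strings and rules below are invented placeholders.

RULEBOOK = {
    "你好吗": "我很好",        # matched by shape alone; the operator need not
    "你是谁": "我是一个人",    # know these concern greetings or identity
}

def operate_room(input_symbols: str) -> str:
    """Apply the rulebook by symbol-shape matching; meaning never enters."""
    return RULEBOOK.get(input_symbols, "对不起")  # fallback string, equally opaque

if __name__ == "__main__":
    # Fluent-looking Chinese output, zero understanding inside the room.
    print(operate_room("你好吗"))
```

However trivially, the sketch passes a miniature "test" of verbal behaviour while containing nothing that could count as understanding, which is exactly the intuition the thought experiment trades on.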
Symbol Grounding as a Response to the Chinese Room
In Harnad (1990), Harnad posed the following problem. An artificial agent, even
though capable of handling symbols syntactically, nevertheless does not have the
means to link those symbols to their referents. This problem was framed in the
context of the prevailing theories in the cognitive sciences at the time, namely the
symbolic model of the mind, which holds that "the mind is a symbol system and
cognition is symbol manipulation" (Harnad 1999).
Strictly speaking, the SGP can be seen as the problem of endowing Artificial
Agents with the necessary means to autonomously create internal representations
that link their manipulated symbols to their corresponding referents in the external
world. These representations should arise through the agent's own sensorimotor
capabilities, by grouping the invariant features of perceived data into general
categories. These categorical representations have as their main function to act as
concepts, allowing agents to pick out the referents for manipulated symbols.
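Read operationally, this description suggests a simple pipeline: extract features from sensory data, group their invariants into categories, and let each symbol's category pick out referents. The sketch below is our own minimal illustration under toy assumptions (two-dimensional "sensory" vectors, mean-vector categories, nearest-centroid matching); it is not Harnad's or the authors' implementation.

```python
# Minimal sketch of grounding as described above: categorical
# representations are formed from invariant features of sensory data,
# each symbol is linked to one category, and the category is used to
# pick out the symbol's referent among current percepts.
# Toy 2-D feature vectors and all names are assumptions for illustration.
import math

def centroid(samples):
    """A crude 'categorical representation': the mean feature vector."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

# Sensory experiences the agent has gathered autonomously (toy features).
experiences = {
    "cat": [(0.90, 0.10), (1.00, 0.20), (0.80, 0.15)],
    "cup": [(0.10, 0.90), (0.20, 1.00), (0.15, 0.85)],
}

# Grounding: each manipulated symbol is linked to a categorical representation.
grounding = {symbol: centroid(obs) for symbol, obs in experiences.items()}

def pick_referent(symbol, percepts):
    """Use the symbol's category to pick out its referent among percepts."""
    prototype = grounding[symbol]
    return min(percepts, key=lambda p: math.dist(p, prototype))

print(pick_referent("cat", [(0.12, 0.95), (0.85, 0.18)]))  # -> (0.85, 0.18)
```

Note that nothing in the sketch, beyond the mapping itself, could count as the agent knowing what "cat" means; this is precisely the gap the rest of the paper is concerned with.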
Until now, we have addressed Searle’s Chinese room argument as well as
Harnad’s main thesis regarding the SGP. Nevertheless, we still have to discuss how
the two are related to each other.
In this respect, most of the specialized literature has regarded Harnad’s proposal
to solve the SGP as the first attempt to coherently answer the questions that the
Chinese Room thought experiment raised at the time. Our point here is that,
contrary to what some authors presume, Harnad never intended to show that an
Autonomous Agent, by grounding its symbols, grasps their meaning. This is the
main debate we engage with, and it is central to the discussion in this section.
Let us be clear about our point. It would appear that the generalized assumption,
that manipulation of grounded symbols entails understanding, supports a broader
theoretical view that considers the SGP to be equivalent to the problem of endowing
Autonomous Agents with meaningful thoughts about the world. This view draws
on the general assumption that once an Autonomous Agent has grounded all its
manipulated symbols it will be able to understand what its symbolic structures stand
for, just as we know what our thoughts and beliefs are about. However, in clear
opposition to this last statement, Harnad himself aimed at showing that our
meaningful mental states could not be reduced to holding categorical representations. Harnad’s point is that categorical representations are not enough for a subject
to make its internal states bearers of meaning, since meaning can only be achieved
when we know what those mental states are about.
In what follows, we introduce a thought experiment devised by Harnad
himself. According to this experiment, solving the SGP does not necessarily imply
solving the lack of thoughts in Artificial Agents. We then turn our attention to
Harnad's particular conception of meaningful mental states, which postulates an
element additional to mere categorical representations as a condition for knowing
what our thoughts are about.
As is well known, Harnad agrees with Searle that symbols and their
manipulation by Artificial Agents are not enough to produce understanding of any
kind; however, few authors have realized that Harnad seems to go beyond the
traditional criticism of computationalism when he asserts in Harnad (2003) that even
if an Artificial Agent were able to ground all the symbols it manipulates, this would
not imply that the general project of Artificial Intelligence is actually possible. To
put it another way, solving the SGP does not imply that Artificial Agents
possess thoughts or beliefs.
To demonstrate this last thesis, Harnad developed his own version of the Chinese
room thought experiment (Harnad 1992): let us imagine a robot built up in such a
way that it could, by its own means, provide representations for each of the symbols
it manipulates. Furthermore, let us imagine that thanks to these representations and
to its particular functional organization, the robot would have the same verbal
behavior as we exhibit daily when confronted with sensory stimuli. In that case, the
robot might be able to succeed in the robotic version of the Turing Test, that is, its
verbal behavior in the face of sensory stimuli would be indistinguishable from that
of a normal human being. Nevertheless, Harnad further mentions that even with all
these capabilities ‘‘the robot would fail to have in its head what Searle has in his: It
could be a Zombie’’ (Harnad 2003); that is, it would lack any kind of belief or
thought that could be considered a bearer of meaning.
Even if the thought experiment we have briefly described does not try to
demonstrate that robots cannot think, it does contribute to the debate in that it
reveals that we would not incur any conceptual contradiction if we postulated the
existence of an Artificial Agent that, while lacking the capability of thought, would
be endowed with the means to pick out the referents for the symbols it manipulates.
Thus, the fundamental intuition that Harnad seeks to appeal to with his thought
experiment is that there would be no technical impediment to making
a robot that, while possibly being a zombie, would be able to categorize and identify
objects on the basis of categorical representations that it had acquired
autonomously. The latter means that it is not at all evident that an
Artificial Agent has meaningful internal states merely because it has grounded the
symbols it manipulates.
So, what is missing in an Artificial Agent for it to have thoughts as we do?
Harnad suggests that the human brain possesses two properties that make
"meaningless strings of squiggles become meaningful thoughts" (Harnad 2003).
The first property has already been pointed out by Harnad himself, when in
(1990) he wrote about the limits of computationalism: the human brain, as opposed
to computational systems, has the capacity to pick out the referents for the symbols
it manipulates. In this line of thought, the fact that Mary correctly believes that
Aristotle was the mentor of Alexander the Great is possible, among other things,
thanks to her capability to direct her thoughts to Aristotle and not any other Greek
philosopher such as Plato.
Several authors, Harnad himself included, have tried to explain this capability
with the existence of internal representations or concepts, that given their particular
internal structure are capable of selecting the referent for our thoughts. According to
this perspective, "beliefs imply the use of concepts: one can not believe that
something is a cow unless one understands what a cow is and in this same way,
possess the concept cow" (Honderich 1995). Following this line of reasoning, it
follows that our thoughts are made out of categorical representations, i.e. concepts
that are used to determine or select their respective referents.
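The logical dependence of belief on concept possession can be given a schematic rendering. The sketch below is our own hypothetical illustration of that dependence only: a concept is modelled as a recognition predicate, and belief attribution is simply unavailable when the predicate is absent. All class, method and concept names are invented.

```python
# Schematic rendering of "beliefs imply the use of concepts": an agent
# can only entertain the belief "x is a C" if it possesses the concept C,
# modelled here as a recognition predicate over percepts.
# All names are invented for illustration.

class Agent:
    def __init__(self):
        self.concepts = {}  # concept name -> predicate over percepts

    def acquire_concept(self, name, predicate):
        self.concepts[name] = predicate

    def believe_is(self, percept, concept_name):
        """Without the concept, the belief is simply unavailable."""
        if concept_name not in self.concepts:
            raise LookupError(f"no concept '{concept_name}': belief unavailable")
        return self.concepts[concept_name](percept)

agent = Agent()
agent.acquire_concept("cow", lambda p: p.get("legs") == 4 and p.get("moos", False))
print(agent.believe_is({"legs": 4, "moos": True}, "cow"))  # True
```

The sketch encodes only the selection of referents by concepts; on the paper's reading of Harnad, nothing in it would amount to the agent having a meaningful belief.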
It is then, in a similar manner, that the symbol grounding as proposed by Harnad
would be useful for an Artificial Agent to pick out the referents for the symbols it
manipulates. What all this implies is that grounding is at least a necessary condition
for thoughts because it is what explains the particular directionality of our mental
states. However, as discussed above, grounding by itself appears insufficient for
meaningful mental states.
With respect to the second property that allows us to have thoughts, Harnad
postulates that phenomenological consciousness and understanding are somehow
intimately tied. Harnad points to the fact that our mental states possess a
phenomenological or experiential component: "[we] all know what it FEELS like to
mean X, to understand X, to be thinking about X" (Harnad 1992). However, for
Harnad this fact, far from being a simple curiosity, is tightly bound to the property,
characteristically exhibited by some of our internal states, of being about facts or
particular entities. This aboutness property has been called intentionality (Brentano
1874). As an example, Harnad affirms that the difference between understanding a
certain sentence written in English and not understanding a sentence because it is
written in Chinese resides in a difference in phenomenological content: in the first
case we feel the sentence to be about something in particular; in the second, such a
sensation of aboutness is missing (Harnad 1992). This difference suggests that
the property of representing entities would be intrinsically present in those internal
states that are accompanied by a phenomenological content of aboutness, which in
a certain manner defines them as thoughts about the world. Simply stated:
we are able to make our internal states bearers of meaning when we feel (become
aware of) their aboutness.
The argument runs contrary to the naturalist consensus followed by most
cognitive scientists, and it is partially backed by an old and vigorous philosophical
tradition that links the concepts of consciousness and thinking. This tradition was
first formulated in the works of the French philosopher René Descartes, who
declared: "I take the word thought to cover everything that we are aware of as
happening within us, and it counts as thought because we are aware of it"
(Descartes 2010).
A natural conclusion to our discussion of the second property put forward by Harnad
is that there cannot be meaningful thoughts that are not at the same time conscious
mental states. This would impose an important restriction on the entities to which
we can justifiably allot mental states, as only entities to which we concede
consciousness could have them.
Evidently, this would mean that, on the position defended by Harnad which we
have tried to summarize, the general project of Artificial Intelligence, namely
endowing Artificial Agents with thoughts, could only be completed by solving not
only the SGP, given that "the problem of intentionality is not the SGP; nor is
grounding symbols the solution to the problem of intentionality" (Harnad 2003),
but, on top of that, the problem of creating artificial consciousness.1
1 Harnad himself is particularly sceptical about this possibility: "the problem of discovering the causal
mechanism for successfully picking out the referent of a category name can in principle be solved by
cognitive science. But the problem of explaining how consciousness can play an independent role in
doing so is probably insoluble" (Harnad 2003).
Some Representative Misinterpretations
We have pointed out that in the specialized literature there is a trend towards
considering Harnad's proposal as a method for Artificial Agents to give
meaning to the symbols they manipulate. It is important to realize that this assumption
has had a deep impact on the terms in which some authors express themselves about
their proposed solutions to the SGP. As we shall see in this section, the origin of this
common assumption can be found in different sources.
An early notable example can be found in Davidsson (1993), where Davidsson
states, referring to the SGP, that:
the problem of concern is that the interpretations (of symbols) are made by the
mind of an external interpreter rather than being intrinsic to the symbol
manipulating system. The system itself has no idea of what the symbols stand
for.
In this way, Davidsson reformulates the SGP in a way that subtly turns it into the
problem posed by Searle.
It follows then, for Davidsson, that the mechanisms suggested by Harnad for
Artificial Agents to autonomously acquire internal representations are at the same
time tools or means for endowing the internal states of Artificial Agents with
meaning:
Harnad suggests in the same article ([Har90]) a solution to this problem
[SGP]. According to him, the meaning of the system’s symbols should be
grounded in its ability to identify and manipulate the objects that they are
interpretable as standing for (Davidsson 1993).
Echoing this interpretation of Harnad's work, Davidsson states as one of his
main goals to solve the SGP by developing an agent that
must by itself be able to make sense of the symbols used to represent its
knowledge about its environment and of its problem solving capabilities (cf.
paper VI). Thus, as pointed out earlier, it must be able to interpret and reason
about these symbols (Davidsson 1996).
Davidsson hopes to accomplish this goal through the vision system and
its use of epistemological representations that are parts of the same structure as
the corresponding symbols, which permits grounding, or the connection
between symbols (designators) and their referents (objects in the world), to be
carried out (Davidsson 1993).
However, as can be derived from the previous section, this conclusion is not
obvious and therefore requires further justification. Besides, it is important to point
out that an interpretation such as Davidsson's of the SGP is implausible in the light
of what Harnad himself argued later, since his main concern consisted simply in
making manipulated symbols "grounded in something other than just
meaningless symbols" (Harnad 1999), where grounded means the ability to pick out
referents for manipulated symbols and not the ability to make sense of symbols as
Davidsson suggested.
Another author following a similar interpretation of the SGP is Mayo (2003).
According to him:
In response to Searle's well-known Chinese room argument against Strong AI
(and more generally computationalism), Harnad proposed that if the symbols
manipulated by a robot were sufficiently grounded in the real world, then the
robot could be said to literally understand.
Without doubt, this quote is a clear instance of the misconception
concerning the SGP that Mayo shares with other authors.
Naturally, the answer of these authors to the Chinese room argument consists in
proposing a mechanism to endow Artificial Agents with categorical representations
for the symbols they are capable of handling. In this respect, Mayo holds
that:
the symbols, by virtue of their groundedness, can be manipulated intrinsically
without any distinct and artificial rules of composition being defined. This
could serve as the starting point for a definition of understanding (Mayo 2003).
Mayo goes on to say: "By elaborating on the notion of symbol groundedness in
three ways, I will show that Searle's CRA is considerably weakened."
A similar position is held by Rosenstein and Cohen (1998) when they affirm that:
To make the leap from percepts to symbolic thought and language, the agent
requires a way of transforming uninterpreted sensor information into
meaningful categories. That is, the agent must solve the bottom-up version
of the SGP. The solution outlined below was inspired by the method of delays,
a nonlinear dynamics tool for producing spatial representations of time-based
data.
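The "method of delays" mentioned in this quote is delay-coordinate embedding: a scalar time series is unfolded into vectors of time-shifted copies of itself, so that temporal structure becomes spatial structure. The following is our own minimal sketch of the standard technique, not Rosenstein and Cohen's code; the function name and parameter values are assumptions.

```python
# Minimal sketch of the method of delays (delay-coordinate embedding):
# a scalar series x_t becomes points (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}),
# turning temporal structure into spatial structure.
import numpy as np

def delay_embed(series, dim=2, tau=1):
    """Return an (n, dim) array of delay-coordinate vectors."""
    x = np.asarray(series)
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

t = np.linspace(0, 8 * np.pi, 400)
points = delay_embed(np.sin(t), dim=2, tau=25)  # a sine unfolds into a loop
print(points.shape)  # (375, 2)
```

Clusters in the resulting spatial representation can then serve as the categorical representations over which symbols are grounded, which is the role the quoted passage assigns to the method.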
More recently, we note that even a leading author like Luc Steels is entangled
in some of the misinterpretations we have pointed out. There is nothing
wrong with Steels's interpretation of the SGP itself; problems arise when we consider
his particular reading of Searle's Chinese room argument:
Language requires the capacity to link symbols (words, sentences) through the
intermediary of internal representations to the physical world, a process
known as symbol grounding. One of the biggest debates in the cognitive
sciences concerns the question of how human brains are able to do this. Do we
need a material explanation or a system explanation? John Searle’s well
known Chinese Room thought experiment, which continues to generate a vast
polemic literature of arguments and counter-arguments, has argued that
autonomously establishing internal representations of the world (called
"intentionality" in philosophical parlance) is based on special properties of
human neural tissue and that consequently an artificial system, such as an
autonomous physical robot, can never achieve this (Steels 2006).
One clear difference with Davidsson is that Steels frames the problem posed by
Searle in terms of the SGP.
But even though the starting point is different, Steels shares a conclusion with
Davidsson: both believe that the SGP and the problem posed by Searle can
be solved just by means of some kind of internal categories or concepts:
However, as I argued elsewhere, there is a further possibility in which the
brain (or an artificial system) might be able to construct and entertain
m[eaningful]-representations, namely by internalizing the process of creating
m[eaningful]-representations. Rather than producing the representation in
terms of external physical symbols (sounds, gestures, lines on a piece of
paper) an internal image is created and re-entered and processed as if it was
perceived externally. The inner voice that we each hear while thinking is a
manifestation of this phenomenon (Steels 2008).
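Steels's re-entrance proposal can be rendered schematically: instead of externalizing a symbol (sound, gesture, writing) and then perceiving it, the agent routes its own production straight back into its perception pathway. The sketch below is our schematic reading of the quoted passage only; every class and method name is invented.

```python
# Schematic sketch of Steels's re-entrance: the agent's own symbol
# production is routed back into its perception pathway as if it had
# been perceived externally (the "inner voice").
# All class and method names are invented for illustration.

class ReentrantAgent:
    def produce(self, meaning):
        """Build an externalizable representation (e.g., a word)."""
        return f"<utterance:{meaning}>"

    def perceive(self, signal):
        """Perception pathway, normally driven by external signals."""
        return f"percept({signal})"

    def think(self, meaning):
        """Re-entrance: re-perceive one's own production instead of
        emitting it to the outside world."""
        return self.perceive(self.produce(meaning))

agent = ReentrantAgent()
print(agent.think("cat"))  # percept(<utterance:cat>)
```

As the discussion below argues, an architecture of this shape organizes the flow of representations but, by itself, gives no reason to attribute meaningful thoughts to the agent.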
Now, before rejecting Steels's last thesis once and for all, it is important to
recognize its attractiveness. According to a certain old-fashioned conception of
meaning, concepts, or what Steels calls internal images, are a necessary condition to
obtain intentional mental states, since they allow us to choose the referents of our
beliefs and thoughts (Russell 1905). Concepts would also be a necessary condition
for the organization of our perceptual experience through categorization, in that
knowing that there is a cat before me, for instance, necessarily implies knowing
what a cat is; that is, it is necessary to have acquired the concept of a cat.
Beings whose cognitive systems are capable of associating concepts with the
symbols they manipulate are thereby able to organize their perceptions and to
generate stable overt behaviour in response to stimuli.
Nevertheless, it does not follow, as Steels seems to presume, that an
agent capable of organizing its sensory inflow because it manipulates symbols with
categorical representations possesses thoughts and beliefs; such an agent could still
be a zombie, just as Harnad's thought experiment shows.
Conclusion
We regard meaningful states of mind, thoughts, as the internal states of an Agent
(real or artificial) which are bearers of meaning. Following Harnad's insights, the
meaning that thoughts have for their possessor derives from the awareness the subject
has of what they represent or stand for, rather than from attaching concepts to symbols.
That is, instead of considering understanding as a symbol grounding problem, we
regard it as a property of the rather more intricate notion of thinking. More
specifically, the problem of symbol grounding concerns the internal construction of
a mapping between sensory input and the symbols being manipulated, so as to give
grounding to those symbols. However, this internal mapping or representation does
not entail an understanding (or knowledge) of what the symbols refer to or stand
for, any more than, for example, a translation of an English expression into Chinese
would convey meaning to a non-speaker.
Therefore, contrary to what several authors have misconstrued throughout the
last three decades, the SGP, which is clearly concerned with providing internally
manipulated symbols with concepts, does not refer to intrinsically meaningful
internal states of mind.
To summarize, we have argued that the position defended by Harnad, which
concerns the general problem of supplying thoughts to Artificial Agents, can only be
addressed when, first, the Symbol Grounding Problem is solved, thereby giving
concepts to the manipulated symbols, and second, when artificial consciousness is
achieved, thereby giving intentionality to those manipulated symbols.
References
Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial Intelligence, 149(1), 91–130.
http://cogprints.org/3949.
Brentano, F. C. (1874). Psychology from an empirical standpoint. UK: Routledge.
Davidsson, P. (1993). Toward a general solution to the symbol grounding problem: Combining machine
learning and computer vision. In AAAI fall symposium series: Machine learning in computer vision:
What, why and how (pp. 157–161). AAAI Press.
Davidsson, P. (1996). Autonomous agents and the concept of concepts. Ph.D. thesis, Department of
Computer Science, Lund University.
Descartes, R. (2010). Principles of philosophy. Whitefish: Kessinger Publishing.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346. http://cogprints.org/615.
Harnad, S. (1992). There is only one mind/body problem. In Symposium on the perception of
intentionality, XXV world congress of psychology, Brussels, Belgium.
Harnad, S. (1999). The symbol grounding problem. CoRR cs.AI/9906002.
Harnad, S. (2003). Symbol grounding problem. In Encyclopedia of cognitive science (Vol. LXVII). London: Macmillan, Nature Publishing Group.
http://cogprints.org/3018.
Honderich, T. (1995). The Oxford companion to philosophy. Oxford: Oxford University Press.
Mayo, M. J. (2003). Symbol grounding and its implications for artificial intelligence. In ACSC ’03:
Proceedings of the 26th Australasian computer science conference, Australian Computer Society,
Inc., Darlinghurst, Australia, pp. 55–60.
Rosenstein, M. T., & Cohen, P. R. (1998). Symbol grounding with delay coordinates. In AAAI
technical report WS-98-06, The grounding of word meaning: Data and models (pp. 20–21).
Russell, B. (1905). On denoting. Mind, 14(56), 479–493.
Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417–457.
Steels, L. (2006). Semiotic dynamics for embodied agents. Intelligent Systems, IEEE, 21(3), 32–38.
doi:10.1109/MIS.2006.58.
Steels, L. (2008). The symbol grounding problem has been solved. So what's next? Symbols, embodiment
and meaning. New Haven: Academic Press. http://www.csl.sony.fr/downloads/papers/2007/steels07a.pd.
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen
years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 419–445.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Witkowski, M. (2002). Anticipatory learning: The animat as discovery engine. In M. V. Butz,
P. Gérard, & O. Sigaud (Eds.), Adaptive behavior in anticipatory learning systems (ABiALS'02).