Artificial Understanding: Do you mean it?
Luís Miguel Botelho ([email protected])1
ISCTE-IUL, Department of Information Science and Technologies
IT-IUL, Information Technology Research Group
1 Research problem
The main goal of our current research is to increase the degree to which a program can
understand its own actions, the events in which it somehow participates or which it witnesses,
and the things that take part in its interactions. Will it be possible for a program to meaningfully
ask and answer questions such as “Do you mean it?” Will the asking program be genuinely
interested in the answer? Will the answer be felt by the responding program?
We became interested in this problem because of the criticisms made of artificial intelligence
according to which a program will never have “first person” meanings for the tokens it
processes and for its interactions (see John Searle 1980; and Tom Ziemke 2003, 2001, 1999; but
see also Stevan Harnad 1990 for a less pessimistic version).
However, the above criticisms may have to be interpreted in a different light if we recognize
that brains assign meanings to things and events in spite of being made of carbon-based
molecules, for which nothing bears any meaning (Chalmers 1992). That is, if we have first
person meanings for our interactions and for the data collected by our sensors, then it is possible
to create those meanings, even in a carbon-based machine such as the brain. Perhaps AI can do
the same.
Although we don’t claim that we will completely solve the described problem, our research
aims to advance the current state of the art of artificial understanding. That is, we intend to
create alternative computational mechanisms through which a computer program can develop
meanings for its actions, for the events in which it participates or which it witnesses, and for the
things that take part in such events. We will compare and evaluate the results achieved through
the alternative mechanisms and plan the next stage of proposals and experiments.
2 Hypothesis and abstract approach
We propose that a certain kind of understanding consists of a goal-based explanation. For
instance, an explanation for downloading and saving the file chilli_oysters.html from the
exquisite-food site is that the agent had the goal of having a recipe for chilli oysters.
A goal-based explanation of the agent’s activity provides the ground for the meanings of all
individual elements taking part in that activity. According to this proposal, the meaning of a
given element of the agent’s activity will be the role it plays within the goal-based explanation
of its activity. In the oysters example, chilli_oysters.html is the file with the oysters’ recipe,
exquisite-food is the site on which the file was found, the recipe for chilli oysters is the desired
recipe, and so on.
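To make the proposal concrete, the following minimal Python sketch (an illustration on our part, not a committed design) represents a goal-based explanation as a goal together with a mapping from elements of the agent’s activity to the roles they play; the class name GoalBasedExplanation and its fields are our own.

```python
from dataclasses import dataclass, field

@dataclass
class GoalBasedExplanation:
    """A goal together with the roles that elements of the agent's
    activity play with respect to that goal."""
    goal: str
    roles: dict = field(default_factory=dict)  # element -> role within the explanation

    def meaning_of(self, element: str) -> str:
        # The meaning of an element is the role it plays in the explanation.
        return self.roles.get(element, "no role in this explanation")

# Worked example from the text: the chilli-oysters download.
explanation = GoalBasedExplanation(
    goal="have a recipe for chilli oysters",
    roles={
        "chilli_oysters.html": "file containing the desired recipe",
        "exquisite-food": "site on which the file was found",
        "recipe for chilli oysters": "the desired recipe",
    },
)
print(explanation.meaning_of("chilli_oysters.html"))
# -> file containing the desired recipe
```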
For this, we don’t need to assume that agents have goals. Instead, we assume that the very
mechanism capable of explaining what happens, in terms of goals, provides both the goals and
the goal-based explanation. The agent will start seeing itself as a goal-driven entity when it
formulates a goal-based explanation of its life.
1 Work being done in cooperation with Ricardo Ribeiro ([email protected]), ISCTE-IUL,
Department of Information Science and Technologies, INESC-ID, Spoken Language Systems
Lab.
In the current approach, we are working on the design and implementation of a generate-and-test
computational mechanism that proposes goals capable of explaining the agent’s life. The
generation stage will propose sets of plausible goals. The test stage will try to explain the
agent’s life in terms of the proposed goals. If the goals support an explanation, they will become
candidates for adoption by the agent. Goals that do not support an explanation will be discarded.
A computer program becomes an agent when it adopts a set of goals, which may have been
generated and tested as described.
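As an illustration, a minimal Python sketch of this generate-and-test loop is given below; the helpers generate_candidate_goals and explain stand in for the generation and test stages, which are still open research, and their names are ours.

```python
from typing import Callable, Iterable, List, Optional

def generate_and_test(
    history: List[str],
    generate_candidate_goals: Callable[[List[str]], Iterable[List[str]]],
    explain: Callable[[List[str], List[str]], Optional[object]],
) -> List[str]:
    """Return the first proposed goal set that supports an explanation of the
    agent's recorded history; goal sets supporting no explanation are discarded."""
    for goals in generate_candidate_goals(history):    # generation stage
        explanation = explain(history, goals)          # test stage
        if explanation is not None:                    # goals become candidates for adoption
            return goals
    return []                                          # nothing explains the history yet
```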
The presented hypothesis will be validated in concrete agents with the following global design.
The agent will consist of a set of capabilities implemented without any concern for goals. Each
capability may include sensors and actuators, as needed. These capabilities and the way they are
triggered in response to changes in the environment will make up the so-called opaque or
goal-less component of the agent. It is not important to know whether or not the opaque
component has implicit goals. However, it is important to keep in mind that it is not aware of
possible goals it may implicitly have, even if we can talk about them.
Another component of the agent, hereafter called the observing component, will have a
mechanism that observes the behaviour of the opaque goal-less component and its interaction
with the external environment, and builds goal-oriented explanations of what happens.
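The two-component design can be sketched as follows; the class names (OpaqueComponent, ObservingComponent), the trace representation, and the placeholder acceptance test are our assumptions, not a committed implementation.

```python
class OpaqueComponent:
    """Goal-less capabilities triggered by the environment; it is not aware
    of any goals it may implicitly have."""
    def react(self, percept):
        return f"react-to({percept})"      # placeholder behaviour

class ObservingComponent:
    """Observes the opaque component's interaction with the environment and
    tries to build goal-oriented explanations of what happens."""
    def __init__(self):
        self.trace = []                    # observed (percept, action) pairs

    def observe(self, percept, action):
        self.trace.append((percept, action))

    def explain(self, candidate_goals):
        # Placeholder test stage: accept the goals only if there is something
        # observed to explain (the real acceptance criterion is open research).
        if candidate_goals and self.trace:
            return {"goals": candidate_goals, "explained_steps": list(self.trace)}
        return None

# Usage: the observer watches the opaque component act, then tries to explain it.
opaque, observer = OpaqueComponent(), ObservingComponent()
for percept in ["found exquisite-food site", "saw chilli_oysters.html"]:
    observer.observe(percept, opaque.react(percept))
print(observer.explain(["have a recipe for chilli oysters"]))
```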
Goals supporting an explanation will be adopted by the agent until something happens that
cannot be explained through the adopted goals, in which case a goal revision process will take
place.
Goal revision results from the generate-and-test process that incrementally proposes new goals.
Goals will be abandoned or preserved according to predefined principles comparable, for
example, to the principle of epistemic entrenchment by Peter Gärdenfors (1988).
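Purely as an illustration, the sketch below abandons the least entrenched goals first when an event cannot be explained; the numeric entrenchment scores and the helpers explains and propose_new_goals are assumptions, loosely inspired by epistemic entrenchment rather than a faithful implementation of Gärdenfors’ theory.

```python
def revise_goals(adopted_goals, entrenchment, event, explains, propose_new_goals):
    """Abandon the least entrenched goals until the unexplained event can be
    explained, then add freshly generated-and-tested goals for what remains."""
    goals = sorted(adopted_goals, key=lambda g: entrenchment.get(g, 0.0), reverse=True)
    while goals and not explains(goals, event):
        goals.pop()                            # drop the least entrenched goal first
    goals.extend(propose_new_goals(event))     # incremental generate-and-test proposals
    return goals
```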
The whole approach will be externally2 validated in the scope of concrete demonstrations for
specific problems.
Two major ingredients of the described global proposal still require a lot of work and research:
1- How to generate plausible goals?
2- How to generate plausible explanations?
One possibility to be considered consists of using abductive reasoning (Hobbs et al. 1993),
much in the same fashion as criminal investigators do. Another possibility may involve default
reasoning (Reiter 1980) as used in diagnostic processes. Yet a third possibility will be based on
planning and hierarchical planning algorithms (Ghallab, Nau, and Traverso 2004).
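As a toy illustration of the abductive option only, the sketch below selects the candidate goal whose expected consequences best cover the observed actions, much as an investigator prefers the hypothesis that best explains the evidence; the example goals and consequence sets are invented.

```python
def abduce_goal(observations, goal_consequences):
    """goal_consequences maps each candidate goal to the set of actions or
    events that pursuing it would lead the agent to produce."""
    def coverage(goal):
        return len(goal_consequences[goal] & set(observations))
    best = max(goal_consequences, key=coverage, default=None)
    return best if best is not None and coverage(best) > 0 else None

# Invented example: the download/save actions are best covered by the recipe goal.
candidates = {
    "have a recipe for chilli oysters": {"download chilli_oysters.html",
                                         "save chilli_oysters.html"},
    "free disk space": {"delete temporary files"},
}
print(abduce_goal(["download chilli_oysters.html", "save chilli_oysters.html"], candidates))
```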
Meanwhile, we stress that nothing prevents an agent from having more than the two
components mentioned above. It may also happen that a given component plays two different
roles: that of the observer and also that of the opaque goal-less component. In such a case, that
component would be responsible for understanding what it observes, and would be understood
by another component that observes it.
2 By “externally”, we mean a process not conducted by the agent itself. We will define a process
through which the goal-based explanations and corresponding goals will be evaluated by ideally
independent subjects.
The same agent may be a chain of several components, each playing the double role of the
opaque goal-less observed component and that of the observer. Instead of a linear chain, an
agent may also be arranged as a ring or as a network of observed/observer components.
Understanding, in such agent designs, would be distributed across several local goal-based
explanations, which raises the problem of global understanding.
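Such a chain could be sketched as follows; the Component class and the string placeholder standing in for a local goal-based explanation are our assumptions.

```python
class Component:
    """A component that may observe another component and build a local
    goal-based explanation of what it observes."""
    def __init__(self, name, observed=None):
        self.name = name
        self.observed = observed           # the component this one observes, if any
        self.local_explanations = []

    def explain_observed(self):
        if self.observed is not None:
            # Placeholder for a local goal-based explanation of the observed component.
            self.local_explanations.append(
                f"goal-based explanation of {self.observed.name}")

# A three-component chain: c1 is purely opaque, c2 observes c1, c3 observes c2.
c1 = Component("opaque")
c2 = Component("middle", observed=c1)      # observed by c3, observer of c1
c3 = Component("top observer", observed=c2)
for c in (c2, c3):
    c.explain_observed()
```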
3 Concrete demonstration / proof of concept
The described approach will be demonstrated in concrete applications. Currently, we are
interested in the problem of creating and writing stories.
In each demonstration, we will build an agent that will carry out a given activity. That agent
will first have to understand the activity in which it is involved. Once it has understood it (at a
certain level), the agent will write a story presenting its understanding of what it does. That is,
each agent will write the story of its own life.
The generated stories will be used in the external evaluation process mentioned in section 2.
Independent subjects will watch the agent doing its job. Then, they will be asked to judge the
degree to which the generated story accurately describes the watched activity.
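A simple sketch of how the subjects’ judgements might be aggregated is given below; the 1-to-5 rating scale and the aggregation statistics are assumptions on our part, not a defined part of the evaluation protocol.

```python
from statistics import mean, stdev

def evaluate_story(ratings):
    """Aggregate independent subjects' ratings of how accurately the generated
    story describes the watched activity (assumed scale: 1 to 5)."""
    return {
        "mean_accuracy": mean(ratings),
        "spread": stdev(ratings) if len(ratings) > 1 else 0.0,
        "n_subjects": len(ratings),
    }

print(evaluate_story([4, 5, 3, 4]))   # invented ratings for illustration
```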
4 Bibliography
Chalmers, D. J. 1992. Subsymbolic Computation and the Chinese Room. In J. Dinsmore (ed.),
The Symbolic and Connectionist Paradigms: Closing the Gap, pp. 25-48. Hillsdale, NJ:
Lawrence Erlbaum.
Ghallab, M.; Nau, D.; Traverso, P. 2004. Automated Planning: theory and practice. Morgan
Kaufmann Publishers.
Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT
Press, 262 pages
Harnad, S. 1990. The Symbol Grounding Problem. In Physica D 42: 335-346
Hobbs, J.R.; Stickel, M.E.; Appelt, D.; Martin, P.A. 1993. Interpretation as Abduction. Artificial
Intelligence, Vol. 63, Nos. 1-2, pp. 69-142
Reiter, R. 1980. A Logic for Default Reasoning. Artificial Intelligence 13: 81-132
Searle, J. 1980. Minds, Brains and Programs. In Behavioral and Brain Sciences 3 (3): 417–457,
doi:10.1017/S0140525X00005756
Ziemke, T. 1999. Rethinking Grounding. In Riegler, Peschl & von Stein (eds.) Understanding
Representation in the Cognitive Sciences pp. 177-190. New York. Plenum Press
Ziemke, T. 2001. Disentangling Notions of Embodiment. In Pfeifer, Lungarella & Westermann
(eds.) Developmental and Embodied Cognition – Workshop Proceedings. Edinburgh, UK.
Ziemke, T. 2003. What's that thing called embodiment? In Alterman & Kirsh (eds.) Proceedings
of the 25th Annual Conference of the Cognitive Science Society. pp. 1134-1139. Mahwah, NJ.
Lawrence Erlbaum.