Co-Designing Agents: A Vision
Stuart C. Shapiro
Department of Computer Science and Engineering
and Center for Cognitive Science
University at Buffalo, The State University of New York
[email protected]
June 1, 2006
For several years, some colleagues, students, and I have been developing autonomous agents that serve as actors
in a virtual reality drama (Anstey et al., 2003; Shapiro et al., 2005a; Shapiro et al., 2005b; Shapiro et al., 2005c). The
agents proved difficult to debug, principally for the following reasons: the “mind” and “higher brain” of the agent run
on a different computer from its “lower brain,” “body,” and the VR environment; the programs on the two computers
communicate via IP sockets, one for each of the agent’s modalities (vision, hearing, speech, gross body movements,
body language, etc.); the modalities and reasoning facilities of the agent run in multiple processor threads; much of
what the agent does and says is in reaction to a human participant in the VR world. Eventually, the thought struck us: “If you’re so smart, why can’t you help?” Thus was born the idea of co-designing agents.
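To make this setup concrete, here is a minimal sketch, in Python, of the kind of per-modality socket communication involved: a “body” side accepts one connection per modality, and a “mind” side sends directives over those sockets from separate threads. The modality names, port numbers, and line-based messages are invented for the illustration; this is not our actual implementation.

import socket
import threading
import time

# Invented modality-to-port assignments for the sketch.
MODALITIES = {"vision": 7001, "hearing": 7002, "speech": 7003}

def body_listener(modality, port):
    # "Body" side: accept one connection on this modality's socket and
    # acknowledge whatever directive arrives.
    with socket.create_server(("localhost", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024).decode()
            conn.sendall(f"{modality} ack: {data}".encode())

def mind_send(modality, message):
    # "Mind" side: send a directive over the modality's socket and print
    # the body's acknowledgement.
    with socket.create_connection(("localhost", MODALITIES[modality])) as s:
        s.sendall(message.encode())
        print(s.recv(1024).decode())

if __name__ == "__main__":
    threads = [threading.Thread(target=body_listener, args=(m, p))
               for m, p in MODALITIES.items()]
    for t in threads:
        t.start()
    time.sleep(0.2)  # give the body-side listeners time to start
    mind_send("speech", "say: Hello.")
    mind_send("vision", "look-at: participant")
    mind_send("hearing", "attend: participant")
    for t in threads:
        t.join()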
A co-designing agent is an intelligent agent that participates in its own design. The tasks it should be able to do
can be grouped according to where in its “job cycle” they are relevant. The tasks include:
1. Before being on the job:
(a) Answer questions of the form “What would you do in situation x?”
(b) Think of situations that may occur, but in which it would not know what to do. Describe the situations,
and ask for advice.
(c) Discover situations in which it is supposed to do contradictory actions. Describe them, and ask for advice.
2. While on the job:
(a) Be able to answer questions of the form, “What are you doing, and why?”
(b) Be able to answer questions of the form, “Why aren’t you doing x?”
(c) Be able to ask for help if it doesn’t know what to do, or if it can’t do what it should. Know whom to ask.
3. After being on the job:
(a) Be able to report what it did at various times.
(b) Be able to report any problems it had, including situations it couldn’t handle satisfactorily.
(c) Be able to answer questions of the form, “Why did you do x?”
(d) Be able to answer questions of the form, “Why didn’t you do x?”
To at least some extent, these are also the concerns of software engineering. However, in the spirit of AI, I would
want these tasks to be done by an intelligent agent based on the knowledge it has about itself, its capabilities, goals,
plans, and past actions and experiences. To do this, it would have to be self-aware.1
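As a rough illustration of the self-knowledge involved, not of any particular implementation, the following Python sketch keeps explicit records of goals, the reasons for acts, and a time-stamped act history, which already suffices for crude answers to tasks 2(a) and 3(a) above. All class and field names are invented for the example.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Act:
    # One entry in the agent's act history.
    name: str
    reason: str               # the goal or plan step this act serves
    start: float
    end: Optional[float] = None

@dataclass
class SelfModel:
    capabilities: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)
    history: List[Act] = field(default_factory=list)

    def begin(self, name: str, reason: str, t: float) -> None:
        # Record that an act has started, and why.
        self.history.append(Act(name, reason, t))

    def finish(self, name: str, t: float) -> None:
        # Close the most recent open record of this act.
        for act in reversed(self.history):
            if act.name == name and act.end is None:
                act.end = t
                return

    def what_are_you_doing(self) -> str:
        # Task 2(a): report current acts together with their reasons.
        current = [a for a in self.history if a.end is None]
        if not current:
            return "Nothing at the moment."
        return "; ".join(f"I am {a.name} because {a.reason}" for a in current)

    def what_did_you_do(self, t: float) -> List[str]:
        # Task 3(a): report the acts that were in progress at time t.
        return [a.name for a in self.history
                if a.start <= t and (a.end is None or t <= a.end)]

# Example use.
agent = SelfModel(capabilities=["walk", "speak"], goals=["greet the participant"])
agent.begin("walking toward the participant", "I want to greet the participant", t=0.0)
print(agent.what_are_you_doing())
agent.finish("walking toward the participant", t=5.0)
print(agent.what_did_you_do(3.0))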
Several of the listed tasks relate to ongoing AI research, some of long standing. Derivation of tasks that might be
performed in situations other than the current one relates to planning. Asking for help is a topic in multi-agent systems.
1 See http://www-formal.stanford.edu/jmc/www.selfawaresystems.org/ and http://www-formal.stanford.edu/eyal/aware/.
There has been some research on answering “What are you doing?” (Sabouret and Sansonnet, 2001), but I believe
more is needed. It can be tricky (answer: “Telling you what I’m doing.”). Research on answering “What did you do
in situation x, and why?” goes back at least to SHRDLU (Winograd, 1972), but, again, I believe that more work is
needed to answer based on an epistemic memory formed at the time. There are also issues of granularity (“I am moving my leg/taking a step/walking/walking to the store/buying milk/...”)2 that surely need a model of the communication partner (see research in the area of user modeling) and of the goals of the interaction.
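The granularity issue can be made concrete with a toy Python sketch in which the same ongoing activity is described at several levels, and a crude stand-in for a user model chooses which level to report; the hierarchy and the partner categories are, of course, invented for the example.

# Invented act hierarchy, from fine-grained motion to high-level purpose.
ACT_HIERARCHY = [
    "moving my leg",
    "taking a step",
    "walking",
    "walking to the store",
    "buying milk",
]

def answer_what_are_you_doing(partner: str) -> str:
    # A crude stand-in for a user model: a technician debugging the body
    # wants fine detail; anyone else gets the high-level purpose.
    level = 0 if partner == "technician" else len(ACT_HIERARCHY) - 1
    return f"I am {ACT_HIERARCHY[level]}."

print(answer_what_are_you_doing("technician"))   # I am moving my leg.
print(answer_what_are_you_doing("bystander"))    # I am buying milk.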
Answering “Why aren’t you doing/didn’t do x?” is related to answering “Why not?” (Chalupsky and Russ, 2002),
but relates to acting rather than to inferring beliefs. Answering “Why did/didn’t you do x?” opens the possibility of an answer like “Because ..., but if I had known then what I know now, I would have done y” (see Shapiro, 1995). This is at least a step toward an agent’s feeling regret.
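One way such an answer might be assembled, sketched in Python with invented names: store, with each decision, the beliefs the agent acted on, and compare them with its current beliefs when asked to explain. The sketch crudely treats any belief acquired since the decision as grounds for the counterfactual clause; deciding which later beliefs are actually relevant is part of the research problem.

from dataclasses import dataclass

@dataclass
class Decision:
    # All names invented for the example.
    act: str
    alternative: str
    beliefs_then: frozenset    # beliefs the agent acted on at the time

def explain(decision, beliefs_now):
    # Answer "Why did you do x?" from the recorded beliefs; any belief
    # acquired since the decision triggers the hindsight clause.
    answer = (f"I did '{decision.act}' because I believed: "
              f"{'; '.join(sorted(decision.beliefs_then))}.")
    learned = beliefs_now - decision.beliefs_then
    if learned:
        answer += (f" But if I had known then what I know now "
                   f"({'; '.join(sorted(learned))}), I would have done "
                   f"'{decision.alternative}'.")
    return answer

d = Decision(act="take the long trail",
             alternative="take the short trail",
             beliefs_then=frozenset({"the short trail is flooded"}))
print(explain(d, beliefs_now=frozenset({"the short trail was passable after all"})))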
Contradictory actions sound like contradictory beliefs, but are quite different. Additional research may be needed
to characterize contradictory actions. They surely include being in two places at once and overusing a limited resource, such as holding multiple blocks in a hand that can hold only one at a time.
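As a first, very partial characterization, contradictions of the two kinds just mentioned can be caught by checking declared single-use resources for overlapping time intervals, as in the Python sketch below. The action and constraint representation is an assumption made for the example, not a general account of contradictory actions.

from collections import defaultdict

# Invented representation: each planned action claims one single-use
# resource over a time interval [start, end).
actions = [
    {"name": "hold block A",     "resource": "left-hand", "start": 0, "end": 5},
    {"name": "hold block B",     "resource": "left-hand", "start": 3, "end": 7},
    {"name": "be at the door",   "resource": "location",  "start": 2, "end": 4},
    {"name": "be at the window", "resource": "location",  "start": 2, "end": 4},
]

def overlapping(a, b):
    # Two half-open intervals overlap iff each starts before the other ends.
    return a["start"] < b["end"] and b["start"] < a["end"]

def contradictory_actions(actions):
    # Return pairs of actions that claim the same single-use resource
    # during overlapping intervals.
    by_resource = defaultdict(list)
    for act in actions:
        by_resource[act["resource"]].append(act)
    conflicts = []
    for acts in by_resource.values():
        for i, a in enumerate(acts):
            for b in acts[i + 1:]:
                if overlapping(a, b):
                    conflicts.append((a["name"], b["name"]))
    return conflicts

for a, b in contradictory_actions(actions):
    print(f"Conflict: '{a}' and '{b}' need the same resource at the same time.")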
Co-designing agents was the topic of a graduate seminar I led in Spring 2005. I thank the participants: Josephine
Anstey, Trupti Devdas Nayak, Albert Goldfain, Michael Kandefer, Carlos Lollett, Rahul Krishna, Shawna Matthews,
and Rashmi Mudiyanur. A longer discussion of our vision of co-designing agents is in (Goldfain et al., 2006). The
need for “‘intelligent project coaches’ that participate with people in the design and operation of complex systems”
was also noted in (Grosz, 2005). Note, however, that for a co-designing agent the “complex system” is itself.
Creating a co-designing agent seems to be another AI-complete problem (Shapiro, 1992). It is a worthy goal
around which to carry out our future research.
References
Anstey, J., Pape, D., Shapiro, S. C., and Rao, V. (2003). Virtual drama with intelligent agents. In Thwaites, H., editor,
Proc., VSMM 2003. International Society on Virtual Systems and MultiMedia.
Chalupsky, H. and Russ, T. A. (2002). WhyNot: Debugging failed queries in large knowledge bases. In Proc., IAAI-02,
pages 870–877, Menlo Park, CA. AAAI Press.
Goldfain, A., Kandefer, M. W., Shapiro, S. C., and Anstey, J. (2006). Co-designing agents. In Proceedings of
CogRob2006: The Fifth International Cognitive Robotics Workshop.
Grosz, B. J. (2005). Whither AI: Identity challenges of 1993-1995. AI Magazine, 26(4):42–44.
Ismail, H. O. and Shapiro, S. C. (2000). Two problems with reasoning and acting in time. In Cohn, A. G., Giunchiglia,
F., and Selman, B., editors, Proc., KR 2000, pages 355–365, San Francisco. Morgan Kaufmann.
Sabouret, N. and Sansonnet, J.-P. (2001). Automated answers to questions about a running process. In Proceedings of
the Fifth Symposium on Commonsense Reasoning (CommonSense 2001), pages 217–227.
Shapiro, S. C. (1992). Artificial intelligence. In Shapiro, S. C., editor, Encyclopedia of Artificial Intelligence, pages
54–57. John Wiley & Sons, New York, second edition.
Shapiro, S. C. (1995). Computationalism. Minds and Machines, 5(4):517–524.
Shapiro, S. C., Anstey, J., Pape, D. E., Nayak, T. D., Kandefer, M., and Telhan, O. (2005a). MGLAIR agents in a
virtual reality drama. Technical Report 2005-08, Department of Computer Science & Engineering, University at
Buffalo, Buffalo, NY.
Shapiro, S. C., Anstey, J., Pape, D. E., Nayak, T. D., Kandefer, M., and Telhan, O. (2005b). MGLAIR agents in virtual
and other graphical environments. In Proc., AAAI-05, pages 1704–1705. AAAI Press, Menlo Park, CA.
Shapiro, S. C., Anstey, J., Pape, D. E., Nayak, T. D., Kandefer, M., and Telhan, O. (2005c). The Trial The Trail, Act
3: A virtual reality drama using intelligent agents. In AIIDE-05, pages 157–158. AAAI Press, Menlo Park, CA.
Winograd, T. (1972). Understanding Natural Language. Academic Press, New York.
2 Compare the stack of NOWs in (Ismail and Shapiro, 2000).