Nonverbal Communication for Human-Robot Interaction
HENNY ADMONI, Yale University
Socially assistive robots help people through social interactions, such as tutoring, home assistance, and
autism therapy. In my work, I develop socially assistive robots that can recognize and use typical human
communication, including nonverbal behaviors such as eye gaze and gestures, to interact naturally and
easily with naive human users. My research focuses on modeling users’ mental states to identify what nonverbal communication will be most effective. These models learn to recognize and generate nonverbal cues in
human-robot interactions using observations of human performance. This research makes contributions to
the fields of human-robot interaction, artificial intelligence, and cognitive psychology. Future work involves
real-time adaptation of these computational models of nonverbal communication to personalize the behavior
of socially assistive robots to account for goals, environments, and human user preferences.
General Terms: Robotics, Cognitive Modeling, Artificial Intelligence
Additional Key Words and Phrases: Human-robot interaction, nonverbal communication, service robots,
assistive robotics
ACM Reference Format:
Henny Admoni, 2014. Nonverbal Communication for Human-Robot Interaction. ACM Trans. Embedd. Comput. Syst. x, x, Article 1 (October 2014), 3 pages.
DOI:http://dx.doi.org/10.1145/0000000.0000000
1. INTRODUCTION
I build assistive robots that make interactions with users less frustrating by understanding and responding to nonverbal communication. Intelligent, autonomous robots
can improve people’s lives by providing social assistance: acting as one-on-one educational tutors for children, providing home care assistance for the elderly, and performing as therapeutic agents for people with autism.
During these interactions, people communicate both verbally and nonverbally, using
cues like gestures and eye gaze to convey additional information, reveal their mental
state, or support verbal communication. To have natural, intuitive interactions with
human users, socially assistive robots must be able to understand and generate such
nonverbal cues as well.
However, recognizing and generating appropriate nonverbal cues depends on a number of factors, including interaction context, task-related goals, and user preferences.
To address this, I develop context-sensitive models of nonverbal behavior that can improve human-robot interactions in socially assistive applications.
This work is supported by National Science Foundation grants IIS-1117801 and IIS-1139078, and Office of
Naval Research grant N00014-12-1-0822.
Author’s address: H. Admoni, Department of Computer Science, Yale University, New Haven CT 06520 USA.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage and that
copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights
for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component
of this work in other works requires prior specific permission and/or a fee. Permissions may be requested
from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212)
869-0481, or [email protected].
© 2014 ACM 1539-9087/2014/10-ART1 $15.00
DOI:http://dx.doi.org/10.1145/0000000.0000000
2. CURRENT WORK
Robotics has already improved lives by taking over dangerous, time-consuming, and
repetitive jobs, freeing up human capital for safer, more skillful pursuits. For instance,
autonomous mechanical arms weld cars in factories; robots aid physical rehabilitation
patients in performing their physical therapy exercises; and autonomous
vacuum cleaners keep floors clean in millions of homes.
The next frontier for robot assistance lies with social, rather than physical, tasks
[Feil-Seifer and Matarić 2005]. Robot teaching assistants can make teacher time more
effective by allowing students to practice personalized, one-on-one lessons outside
the classroom. Robot therapy assistants can act as social conduits between those with
social impairments, such as autism, and their caretakers or therapists [Scassellati
et al. 2012]. Robots at home can help elderly users with tasks of daily living, allowing
them to age at home and increasing their independence and quality of life.
To be good social partners, robots must understand and use existing human communication structures. A large part of human communication occurs nonverbally, through
eye gaze [Argyle 1972], gestures [McNeill 1992], and other cues. However, nonverbal
behaviors cannot be pre-scripted or pre-recorded, because socially assistive interactions
are dynamic, nonlinear, and often highly personalized to a user. My research focuses
on developing models of human nonverbal communication that robots can use during
socially assistive tasks to recognize nonverbal communication from a human partner
and to generate appropriate nonverbal behaviors in turn.
For these nonverbal behavior models to be flexible in a variety of social situations
and for a variety of users, I apply a data-driven approach to learning nonverbal behaviors. I use machine learning techniques to train robot behavior models based on observations of actual human behavior [Admoni and Scassellati 2014]. Although other models generate nonverbal behaviors from human examples, my model allows the reverse
process as well: the robot can apply the same learned model to understand the context of newly observed behaviors. This allows robots to not only communicate through
nonverbal behaviors, but also understand a human user’s communication. This bidirectionality is a critical part of social interaction. Without it, a robot may produce the
right kinds of nonverbal behaviors, but is essentially blind to human communication,
making it an ineffective social partner.
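To make this concrete, the sketch below (in Python) illustrates the core idea with
a toy joint count table over contexts and cues; the labels, class name, and counting
scheme are illustrative placeholders, not the actual features or learning method of
[Admoni and Scassellati 2014]. A single learned distribution supports both directions:
generation picks the most likely cue for a context, and recognition inverts the same
table to recover the most likely context for an observed cue.

    from collections import Counter

    class BidirectionalCueModel:
        """Toy bidirectional model: one joint count table supports both
        generation (context -> cue) and recognition (cue -> context)."""

        def __init__(self):
            self.counts = Counter()  # counts of (context, cue) pairs

        def observe(self, context, cue):
            # Update the model from one annotated human demonstration.
            self.counts[(context, cue)] += 1

        def generate(self, context):
            # Most likely cue for a context: argmax_cue P(cue | context).
            options = {cue: n for (c, cue), n in self.counts.items() if c == context}
            return max(options, key=options.get) if options else None

        def recognize(self, cue):
            # Invert the same table: argmax_context P(context | cue).
            options = {c: n for (c, k), n in self.counts.items() if k == cue}
            return max(options, key=options.get) if options else None

    # Hypothetical training data: (context, observed human cue) pairs.
    model = BidirectionalCueModel()
    for context, cue in [("handover", "gaze_at_object"),
                         ("handover", "gaze_at_object"),
                         ("greeting", "mutual_gaze")]:
        model.observe(context, cue)

    print(model.generate("handover"))      # -> "gaze_at_object"
    print(model.recognize("mutual_gaze"))  # -> "greeting"

Any model that can be queried in both directions, such as a generative probabilistic
model, could play the same role; the count table simply keeps the example minimal.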
My joint background in computer science and cognitive psychology enables me to
conduct carefully controlled experiments that quantitatively measure the impacts
of robot nonverbal behavior on human-robot interaction. I have studied people’s
millisecond-level responses to robot faces as compared with human faces, and have
found evidence that robot faces are processed differently from human faces in the first
several hundred milliseconds of a person’s attention, even when those robots appear
highly anthropomorphic [Admoni et al. 2011]. However, when people can be induced
to attribute intentionality to a robot’s gaze, I’ve shown that people understand and respond to robot eye gaze as expected [Admoni et al. 2014b]. I’ve also shown that robots
appear to be “paying attention” to people more effectively when they use short,
frequent glances rather than longer, infrequent stares [Admoni
et al. 2013]. When conflicts arise between a robot’s verbal instructions and its nonverbal gaze communication during collaborative tasks, people’s performance improves
with helpful robot gaze, but does not suffer from incorrect gaze compared to having no
gaze cue at all, supporting the benefit of nonverbal communication cues for human-robot interaction [Admoni et al. 2014a].
3. FUTURE VISION
My future focus is on building more sophisticated models of nonverbal communication
as a way of learning to recognize and generate appropriate nonverbal cues in collaborative interactions. These models should be able to adapt to their user and environment,
and should interface well with models of verbal communication.
Real-time machine learning from observations of human behavior enables personalized interactions by updating the behavior models with interaction-relevant information. Good partners adapt their own behavior based on cues from others. My
model provides the capability to do real-time behavior adaptation through the same
learning mechanism that trains the model in the first place. Currently, however, the
model is trained on data that is hand-annotated offline. The main challenge of real-time adaptation is autonomous behavior recognition.
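The sketch below shows what such an adaptation loop might look like, reusing the toy
BidirectionalCueModel from the earlier sketch; recognize_cue is a hypothetical stand-in
for the autonomous behavior-recognition component, and frames are assumed to arrive as
simple dictionaries of perceived features.

    def recognize_cue(frame):
        # Hypothetical perception stub: a real system would classify gaze or
        # gesture cues from camera and other sensor data.
        return frame.get("cue")

    def adapt_online(model, sensor_stream, current_context):
        # Feed live observations through the same observe() update used for
        # offline training, so personalization needs no separate mechanism.
        for frame in sensor_stream:
            cue = recognize_cue(frame)
            if cue is not None:
                model.observe(current_context, cue)

    # Hypothetical usage, with the model trained above:
    # adapt_online(model, [{"cue": "mutual_gaze"}, {}], current_context="greeting")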
Natural language is a primary mode of communication, and nonverbal behavior often serves to augment it. By tying my nonverbal behavior models together with state-of-the-art models in natural language processing (NLP), I aim to build robust, application-ready systems that can
understand all facets of natural human communication.
Being able to recognize and generate nonverbal behaviors opens up a large avenue
of communication for social robots. I anticipate that this contribution will progress the
adoption of robots in social environments by increasing people’s comfort with and trust in
social robots, allowing robots to positively impact many lives.
REFERENCES
Henny Admoni, Caroline Bank, Joshua Tan, and Mariya Toneva. 2011. Robot gaze does not reflexively cue
human attention. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (CogSci
2011), L. Carlson, C. Hölscher, and T. Shipley (Eds.). Cognitive Science Society, Austin, TX USA, 1983–
1988.
Henny Admoni, Christopher Datsikas, and Brian Scassellati. 2014a. Speech and Gaze Conflicts in Collaborative Human-Robot Interactions. In Proceedings of the 36th Annual Conference of the Cognitive Science
Society (CogSci 2014).
Henny Admoni, Anca Dragan, Siddhartha Srinivasa, and Brian Scassellati. 2014b. Deliberate Delays During Robot-to-Human Handovers Improve Compliance With Gaze Communication. In Proceedings of the
9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014).
Henny Admoni, Bradley Hayes, David Feil-Seifer, Daniel Ullman, and Brian Scassellati. 2013. Are You
Looking At Me? Perception of Robot Attention is Mediated By Gaze Type and Group Size. In Proceedings
of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013).
Henny Admoni and Brian Scassellati. 2014. Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions. In Proceedings of the 16th ACM International Conference on Multimodal Interaction (ICMI
2014).
Michael Argyle. 1972. Non-verbal communication in human social interaction. In Non-verbal communication, R. A. Hinde (Ed.). Cambridge University Press, Cambridge, England.
David Feil-Seifer and Maja J. Matarić. 2005. Defining Socially Assistive Robotics. In Proceedings of the 9th
International IEEE Conference on Rehabilitation Robotics.
David McNeill. 1992. Hand and Mind: What Gestures Reveal about Thought. The University of Chicago
Press, Chicago.
Brian Scassellati, Henny Admoni, and Maja Matarić. 2012. Robots for Use in Autism Research. Annual
Review of Biomedical Engineering 14 (2012), 275–294.