Aurally Informed Performance:
Integrating Machine Listening
and Auditory Presentation in
Robotic Systems
Papers from the AAAI Fall Symposium
Derek Brock, Ramani Duraiswami, and Alexander I. Rudnicky, Cochairs
Technical Report FS-06-01
AAAI Press
American Association for Artificial Intelligence
445 Burgess Drive
Menlo Park, California 94025 USA
Copyright © 2006, AAAI Press
AAAI maintains compilation copyright for this technical report and
retains the right of first refusal to any publication (including electronic distribution) arising from this AAAI event. Please do not
make any inquiries or arrangements for hardcopy or electronic
publication of all or part of the papers contained in these working
notes without first exploring the options available through AAAI
Press and AI Magazine (concurrent submission to AAAI and
another publisher is not acceptable). A signed release of this right
by AAAI is required before publication by a third party.
ISBN 978-1-57735-299-0
FS-06-01
Manufactured in the United States of America
Organizing Committee
Derek Brock, Naval Research Laboratory
Ramani Duraiswami, University of Maryland
Alexander I. Rudnicky, Carnegie Mellon University
This AAAI Symposium was held October 13–15, 2006,
in Crystal City, Arlington, Virginia USA
Contents
Auditory and Other Non-verbal Expressions of Affect for Robots / 1
Cindy L. Bethel and Robin R. Murphy
Embedded and Integrated Audition for a Mobile Robot / 6
Simon Brière, Dominic Létourneau, Maxime Fréchette,
Jean-Marc Valin, and François Michaud
Using the Concept of Auditory Perspective Taking to
Improve Robotic Speech Presentations for Individual Human Listeners / 11
Derek Brock and Eric Martinson
Content Analysis for Acoustic Environment Classification in Mobile Robots / 16
Selina Chu, Shrikanth Narayanan, and C.-C. Jay Kuo
Are You Talking to Me? Dialogue Systems Supporting
Mixed Teams of Humans and Robots / 22
John Dowding, Richard Alena, William J. Clancey,
Maarten Sierhuis, and Jeffrey Graham
Non-Speech Aural Communication for Robots / 28
Frederick Heckel and William D. Smart
Making Them Dance / 33
Jae Woo Kim, Hesham Fouad, and James K. Hahn
A Sound Localization Algorithm for Use in Unmanned Vehicles / 38
Justin A. MacDonald and Phuong K. Tran
A Biologically Inspired Robotic Auditory System Based on
Binaural Perception and Motor Theory / 43
Enzo Mumolo and Massimiliano Nolich
Continuous Auditory Feedback in a Control Task / 49
Matthias Rath
Realizing Affect in Speech Classification in Real-Time / 53
Carson Reynolds, Masatoshi Ishikawa, and Hiroshi Tsujino
Vector-based Representation and Clustering of
Audio Using Onomatopoeia Words / 55
Shiva Sundaram and Shrikanth Narayanan