Affective Behavior Models for Virtual Humans and Social Robots
Being There Centre: Immersive Telepresence
• To make major technological and systems-level advances leading to a credible 3D telepresence experience, including virtual humans (VH) and social robots
• To achieve a breakthrough in the quality of interpersonal communication at a distance, allowing eye contact and proper motion parallax among a group of users
BeingThere Centre
 A 5-year research centre in 3D telepresence
 SGD 10 million funding support from the Media Development Authority of Singapore (MDA)
 With in-kind contributions from:
• NTU, Singapore
• ETH Zurich, Switzerland
• UNC Chapel Hill, USA
 Commencement date: 15 December 2010
Co-directors
• Prof Nadia MAGNENAT-THALMANN, NTU, Singapore
• Prof Henry FUCHS, UNC Chapel Hill, USA
• Prof Markus GROSS, ETH Zurich, Switzerland
Vision and Objectives
• Project 1: Room-based Telepresence
• Projects 2 & 5: Advanced Technologies for 3D Capture, Communication, and Display
• Project 3: Animatronics Avatar
• Project 4: Autonomous Virtual Human and Social Robot
Prof. Nadia Magnenat Thalmann
Director, Institute for Media Innovation (IMI) and BeingThere Centre, NTU, Singapore
– We have evolved and progressed.
– We have invented and discovered.
– Now we start to be empowered...
Bruce Goldman, College of Liberal Arts and Sciences, University of Connecticut
Can a machine think?
Put a machine and a human in another room and send in written questions. If we cannot tell which answers come from the machine and which from the human, then the machine is thinking…
What first passed the Turing Test, and is it enough?
• The first was ELIZA, a program written by the American computer scientist Joseph Weizenbaum (1966).
• BUT anything like human intelligence must be able to engage with the real world and with social interaction, and the Turing Test doesn't test for that.
Modelling the human brain, with well over 100 trillion synapses, would be a software project many orders of magnitude larger than the largest software project ever undertaken.
In 2011, Ray Kurzweil (Director of Engineering at Google) told Time magazine: "We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence."
Vernor Vinge, San Diego State University Professor of Mathematics, computer scientist, and science-fiction author, said in 1993: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Examples of social robots and virtual humans:
• Kismet, MIT
• iCub, RobotCub EU project
• Nadine, IMI, NTU
• ASIMO, Honda
• INDIGO EU Project
• Human/pet interactions are simpler than human/human interactions
• Lower expectations regarding social abilities
Paro robot baby seal
• Zoomorphic
• Designed by Takanori Shibata in 1993, but produced from 2002
• Responds to petting through tactile sensors by moving its tail
• Responds to sounds and can learn a name
• Can show emotions such as surprise, happiness, and anger
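Paro's published descriptions suggest a simple affective mapping from sensor events to an emotional state that drives display behaviors. The sketch below is a hypothetical Python illustration of that idea, not Paro's actual control software; every event name and numeric value is invented for the example.

from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float = 0.0   # negative = distressed, positive = content
    arousal: float = 0.0   # 0 = calm, 1 = excited

# How each hypothetical sensor event shifts (valence, arousal).
EVENT_EFFECTS = {
    "petted":      (+0.3, +0.1),   # tactile sensors report stroking
    "hit":         (-0.5, +0.4),
    "name_called": (+0.2, +0.3),   # audio recognition of a learned name
    "loud_noise":  (-0.2, +0.5),
}

def update(state, event):
    dv, da = EVENT_EFFECTS.get(event, (0.0, 0.0))
    state.valence = max(-1.0, min(1.0, state.valence + dv))
    state.arousal = max(0.0, min(1.0, state.arousal + da))

def behavior(state):
    # Map the affect state to a coarse display behavior.
    if state.valence > 0.2:
        return "wag_tail" if state.arousal > 0.3 else "blink_contentedly"
    if state.valence < -0.2:
        return "cry" if state.arousal > 0.3 else "close_eyes"
    return "idle"

s = AffectState()
for e in ["petted", "petted", "name_called"]:
    update(s, e)
print(behavior(s))   # petting plus hearing its name -> "wag_tail"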
-- Physical support
Robot that can lift a real human from, or set one down onto, a bed or wheelchair.
RIBA robot (Robot for Interactive Body Assistance)
T. Mukai, S. Hirano, H. Nakashima, Y. Kato, Y. Sakaida, S. Guo, and S. Hosoe, "Development of a nursing-care assistant robot RIBA that can lift a human in its arms," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5996–6001, Oct 2010.
-- Entertainment
 Play with children.
 Interact with humans.
Sony AIBO robot: a quadruped, dog-like robot
N. Suzuki and Y. Yamamoto, "Pursuing entertainment aspects of Sony AIBO quadruped robots," in 4th International Conference on Modeling, Simulation and Applied Optimization (ICMSAO), pp. 1–5, April 2011.
Care-o-bot robot equipped with state-of-the-art industrial components:
• omnidirectional drives
• range and image sensors for object learning and detection in a real-time 3D environment
• a 7-DOF redundant manipulator
• a dexterous three-finger gripper
Pieskä, Sakari, et al. "Social service robots in public and private environments." Recent Researches in Circuits, Systems, Multimedia and Automatic Control (2012): 190-196.
Psychic support
 Reduces stress
 Stimulates interaction
 Improves relaxation and motivation
NAO robot: a semi-autonomous, programmable humanoid robot
Shamsuddin, Syamimi, et al. "Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO." Signal Processing and its Applications (CSPA), 2012 IEEE 8th International Colloquium on. IEEE, 2012.
• Hanson robotic head with skin deformations and facial expressions
K. Zawieska, M. B. Moussa, B. R. Duffy and N. Magnenat Thalmann, "The Role of Imagination in Human-Robot Interaction," Computer Animation and Social Agents Conference (CASA 2012).
Best Video award at the AAAI 2012 Video Competition, AAAI Conference on Artificial Intelligence, Toronto, Canada.
• Foster et al., from the JAMES EU project
– Open-world interaction
– Multiple users and a robot working together on a task (grasping, detecting people's faces and hands)
M. E. Foster, A. Gaschler, M. Giuliani, A. Isard, M. Pateraki, and R. Petrick, "Two People Walk Into a Bar: Dynamic Multi-Party Social Interaction with a Robot Agent," Proceedings of the ACM International Conference on Multimodal Interaction (ICMI 2012), pp. 3-10, 2012.
• Virtual characters and robots interacting with people in social contexts
– should understand users' behaviors
– and respond with gestures, facial expressions, and gaze.
• Challenges (a minimal sense-decide-act loop is sketched below):
– Sensing and interpreting users' behaviors and intentions
– Making decisions appropriate to the social situation based on partial sensory input
– Rendering synchronized and timely multi-modal behaviors
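The challenges above describe a sense-decide-act pipeline. Below is a minimal Python sketch of such a loop, assuming hypothetical percept kinds and response channels; it illustrates the architecture only and is not the BeingThere Centre's actual software. Note how decide() falls back to a neutral behavior when sensory input is missing or low-confidence, and how act() emits gaze, face, and gesture together so the modalities stay synchronized.

import random
from typing import Optional

class Percept:
    """One interpreted observation of the user (possibly uncertain)."""
    def __init__(self, kind, confidence):
        self.kind = kind              # e.g. "smile", "wave", "gaze_at_robot"
        self.confidence = confidence  # sensing is noisy and partial

def sense() -> Optional[Percept]:
    # Stand-in for real vision/audio perception; may detect nothing.
    if random.random() < 0.2:
        return None                   # partial input: nothing this frame
    kind = random.choice(["smile", "wave", "gaze_at_robot"])
    return Percept(kind, confidence=random.uniform(0.4, 1.0))

def decide(p: Optional[Percept]) -> dict:
    # Pick a socially appropriate multi-modal response.
    if p is None or p.confidence < 0.5:
        # Low confidence: fall back to a neutral, safe behavior.
        return {"gaze": "scan_room", "face": "neutral", "gesture": None}
    if p.kind == "smile":
        return {"gaze": "user", "face": "smile", "gesture": None}
    if p.kind == "wave":
        return {"gaze": "user", "face": "smile", "gesture": "wave_back"}
    return {"gaze": "user", "face": "attentive", "gesture": None}

def act(response: dict) -> None:
    # Render gaze, face, and gesture together so they stay synchronized.
    print("gaze=%(gaze)s face=%(face)s gesture=%(gesture)s" % response)

for _ in range(5):                    # one iteration per perception frame
    act(decide(sense()))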
Nadine has a total of 27 DoF for facial expressions and upper-body movements. Nadine can speak, display emotions, and produce natural gestures. She can recognize people, colours, and some gestures. She is constantly learning, remembering what has happened, when, and with whom…
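A sketch of what such an episodic memory might look like: the record type and query below are hypothetical illustrations (not Nadine's actual implementation) of storing what happened, when, and with whom, and recalling it on a later encounter.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Episode:
    who: str          # recognized interaction partner
    what: str         # e.g. "greeted", "discussed_weather"
    when: datetime = field(default_factory=datetime.now)

class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def remember(self, who, what):
        self.episodes.append(Episode(who, what))

    def recall(self, who):
        # Everything the robot remembers about a given person.
        return [e for e in self.episodes if e.who == who]

memory = EpisodicMemory()
memory.remember("Alice", "greeted")
memory.remember("Bob", "asked_for_directions")
memory.remember("Alice", "discussed_weather")

# On re-meeting Alice, past episodes can shape the greeting.
last = memory.recall("Alice")[-1]
print(f"We last {last.what.replace('_', ' ')} at {last.when:%H:%M}.")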
[Figure: Nadine's kinematic structure, showing local X/Y/Z coordinate frames for joint groups {1}–{16}, including the waist and forearm.]
[1] Demo was shown at the Swissnex Singapore End of Year Party 2013.