Anthropomorphism and The Social Robot
Brian R. Duffy
Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
[email protected]
Abstract
This paper discusses the issues pertinent to the development of a meaningful social interaction between robots and people through employing degrees of anthropomorphism in a robot’s physical design and behaviour. As robots enter our social space, we will inherently project/impose our interpretation on their actions, similar to the techniques we employ in rationalising, for example, a pet’s behaviour. This propensity to anthropomorphise is not seen as a hindrance to social robot development, but rather as a useful mechanism that requires judicious examination and employment in social robot research.
Keywords: Anthropomorphism, social robots
1 Introduction
With technology starting to provide us with quite robust solutions to the technical problems that have constrained robot development over the years, we are coming closer to the integration of robots into our physical and social environment. The humanoid form has traditionally been seen as the obvious strategy for integrating robots successfully into these environments, with many consequently arguing that the ultimate quest of roboticists today is to build a fully anthropomorphic synthetic human. Anthropomorphism in this paper is the rationalisation of animal or system behaviour through superimposing aspects of the human observer, and is discussed in depth in section 3.
This paper discusses the use of anthropomorphic paradigms to augment the functionality and behavioural characteristics of a robot (both anticipatory and actual) in order that we can relate to and rationalise its actions with greater ease. The use of humanlike features for social interaction with people (e.g. [1][2][3]) can facilitate our social understanding. It is the explicit designing of anthropomorphic features, such as a head with eyes and a mouth, that may facilitate social interaction. This highlights the issue that social interaction is fundamentally observer-dependent, and exploring the mechanisms underlying anthropomorphism provides the key to the social features required for a machine to be socially engaging.
1.1 The robot
Webster’s Dictionary defines a robot as “any manlike mechanical being, or any mechanical device operated automatically, esp. by remote control, to perform in a seemingly human way”.
Through popular interpretations, this definition
already draws associations between a robot and man.
And in looking to define a social robot, the following
has been proposed: A physical entity embodied in a
complex, dynamic, and social environment sufficiently
empowered to behave in a manner conducive to its
own goals and those of its community. [4]
There is a two-way interaction between a robot and
that which it becomes socially engaged with, whether
another robot or a person. It is in embracing the
physical and social embodiment issues that a system
can be termed a “social robot” [4]. This paper deals
with human-robot interaction, which is proposed as the
only motivation for employing anthropomorphism in
robotic systems. (See [4] for explicit robot-robot social
interaction).
A robot’s capacity to engage in meaningful social interaction with people inherently requires the employment of a degree of anthropomorphism, or human-like qualities, whether in form or behaviour or both. But it is not a simple problem. As discussed by Foner [5], strong anthropomorphic paradigms in HCI overly inflate a user’s expectations of the system’s performance. Similarly, in social robotics the ideal paradigm should not necessarily be a synthetic human. Successful design of both software and robotic systems in HCI needs to strike a balance of illusion: leading the user to believe in the sophistication of the system while ensuring the user does not encounter failings they have not been led to expect. Making social robots too human-like may also defeat the purpose of robots in society (to aid humans). For example, a robot that comes across as too intelligent may be perceived as selfish, or as prone to the same weaknesses as humans, and thus not as desirable as a reliable entity. The issue of the social acceptance of robots is not discussed here, but it is important in the greater debate regarding the context and degree of social integration of robots into people’s social and physical space.
The social robot can be perceived as the interface
between man and technology. It is the use of socially
acceptable functionality in a robotic system that helps
break down the barrier between the digital information
space and people. It may herald the first stages where
people stop perceiving machines as simply tools.
2 The Big AI Cheat
Are social robots, with embedded notions of identity and the ability to develop social models of those they engage with, capable of achieving that age-old pursuit
of artificial intelligence (AI)? Can the illusion of life and
intelligence emerge through simply engaging people in
social interaction? How much can this illusion emerge
through people’s tendency to project intelligence and
anthropomorphise?
According to the Machiavellian (or Social)
Intelligence Hypothesis, primate intelligence originally
evolved to solve social problems and was only later
extended to problems outside the social domain (recent
discussion in [6]). Will AI effectively be achieved through robot “intelligence” evolving from solving social problems and later extending to problems outside the social domain? If the robot “cheats” to appear
intelligent, can this be maintained over time? Does it
matter if it cheats? Is it important what computational
strategies are employed to achieve this illusion? The
principles employed in realising a successful “illusion
of life” in Walt Disney’s famous cartoon characters [7]
have been well documented. Inspiration can be drawn from these principles on how to apply similar strategies to the design of social robots to create the illusion of life and intelligence.
Proponents of strong AI believe that it is possible to
duplicate human intelligence in artificial systems where
the brain is seen as a kind of biological machine that
can be explained and duplicated in an artificial form.
This mechanistic view of the human mind argues that essentially understanding the computational processes that govern the brain’s characteristics and function would reveal how people think and effectively provide
an understanding of how to realise an artificially
created intelligent system with emotions and
consciousness.
On the other hand, followers of weak AI believe that
the contradictory term “artificial intelligence” implies
that human intelligence can only be simulated. An
artificial system could only give the illusion of
intelligence. In adopting the weak AI stance, artificial
intelligence is an oxymoron. The only way an artificial
system can become “intelligent” is if it cheats, as the
primary reference is not artificial. A social robot could
be the key to the great AI cheat.
This paper follows the stance of weak AI where
computational machines, sensory and actuator
functionality are merely a concatenation of processes
and can lead to an illusion of intelligence in a robot
primarily through projective intelligence on the part of
the human observer/participant (see a discussion of
autopoiesis and allopoiesis regarding physical and
social embodiment in [4]).
2.1 Projective intelligence
In adopting the weak AI stance, the issue is not whether a system is fundamentally intelligent but rather whether it displays those attributes that facilitate or promote people’s interpretation of the system as being intelligent.
In seeking to propose a test to determine whether a machine could think, Alan Turing came up in 1950 with what has become well known as the Turing Test [8]. The test is based on whether a machine could trick a person into believing they were chatting with another person via a computer, or at least not be sure that it was “only” a machine. This approach is echoed in Minsky’s original 1968 definition of AI as “[t]he science of making machines do things that would require intelligence if done by [people]”.
In the same line of thought, Weizenbaum’s 1966 conversational computer program Eliza [9] employed standard tricks and sets of scripts to cover up its lack of understanding of questions it was not pre-programmed for. It proved successful to the degree that people have been known to form enough of an attachment to the program to share personal experiences with it.
It can be argued that the Turing Test is a game based on an over-simplification of intrinsic intelligence, one that allows a system to use tricks to fool people into concluding that a machine is intelligent. But how can one objectify one’s observations of a system and not succumb to such tricks and degrees of anthropomorphising and projective intelligence? Does it matter how a system achieves its “intelligence”, i.e. what particular complex computational mechanisms are employed?
Employing successful degrees of anthropomorphism with cognitive ability in social robotics will provide the mechanisms whereby the robot could successfully pass the age-old Turing Test for intelligence assessment (although, strictly speaking, the Turing Test requires a degree of disassociation from that which is being measured, and consequently the physical manifestation of a robot would require a special variation of the Turing Test). The question then remains as to what aspects increase our perception of intelligence; that is, what design parameters are important in creating a social robot that can pass this variation of the Turing Test.
2.2 Perceiving intelligence
Experiments have highlighted the influence of
appearance and voice/speech on people’s judgements of
another’s intelligence.
The more attractive a person is, the more likely others are to rate that person as intelligent [10][11]. However, when given the chance to hear a person speak, people seem to rate the person’s intelligence more on verbal cues than on attractiveness [11]. Exploring the impact of such hypotheses on HCI, Kiesler and Goetz [12] undertook experimentation with a number of robots to ascertain whether participants interacting with robots drew similar assessments of “intelligence”. The experiments were based on visual, audio and audiovisual interactions. Interestingly, the results showed strong correlations with Alicke et al.’s and Borkenau’s experiments on person-to-person judgements.
Such experimentation provides important clues on how we can successfully exploit elements of anthropomorphism in social robotics. It becomes all the more important, therefore, that our interpretations of what anthropomorphism is, and how to employ it successfully, are adequately studied.
3 Anthropomorphism
This paper has thus far loosely used the term
anthropomorphism; however, it is used in different
senses throughout the natural sciences, psychology,
and HCI. Anthropomorphism (from the Greek word
anthropos for man, and morphe, form/structure), as
used in this paper, is the tendency to attribute human
characteristics to inanimate objects, animals and others
with a view to helping us rationalise their actions. It is
attributing cognitive or emotional states to something
based on observation in order to rationalise an entity’s
behaviour in a given social environment. This
rationalisation is reminiscent of what Dennett calls the
intentional stance, which he explains as “the strategy
of interpreting the behavior of an entity (person,
animal, artifact, whatever) by treating it as if it were a
rational agent who governed its ‘choice’ of ‘action’ by
a ‘consideration’ of its ‘beliefs’ and ‘desires.’” [13].
This is effectively the use of projective intelligence to
rationalise a system’s actions.
This phenomenon of ascribing humanlike characteristics to nonhuman entities has been exploited everywhere from religion to recent animated films such as “Chicken Run” (Aardman Animations, 2000) and “Antz” (DreamWorks & SKG/PDI, 1998).
Few psychological experiments have rigorously studied the mechanisms underlying anthropomorphism, and only a few have seen it as worthy of study in its own right (e.g. [14][15][16]). Anthropomorphism in science has more commonly been considered a hindrance when confounded with scientific observation, rather than an object to be studied objectively.
Is it possible to remove anthropos from our science when we are the observers and build the devices that measure? Krementsov and Todes [17] comment that “the long history of anthropomorphic metaphors, however, may testify to their inevitability”. If we are unable to decontextualise our perceptions of a given situation from ourselves, then condemning anthropomorphism won’t help. Caporael [14] proposes that if we are therefore unable to remove anthropomorphism from science, we should at least “set traps for it” in order to be aware of its presence in scientific assessment.
Kennedy goes as far as to say that anthropomorphic interpretation “is a drag on the scientific study of the causal mechanisms” [18]. Building social robots forces a new perspective on this. When interaction with people is the motivation for social robot research, people’s perceptual biases have an influence on how the robot is realised. The question arising in this paper is not how to avoid anthropomorphism, but rather how to embrace it in the field of social robots.
Shneiderman takes the extreme view of the role of anthropomorphism in HCI, stating that people employing anthropomorphism compromise the design, leading to issues of unpredictability and vagueness [19]. He emphasises the importance of clear, comprehensible and predictable interfaces that support direct manipulation. Shneiderman’s comment touches on a problem that is not fundamentally a fault of anthropomorphic features, but a fault of HCI designers who do not try to understand people’s tendency to anthropomorphise, and who thus indiscriminately apply anthropomorphic qualities to their designs, leading only to user over-expectation and disappointment when the system fails to perform to these expectations. The assumption has generally been that creating even crude computer “personalities” necessarily requires considerable computing power and realistic human-like representations. Since much of the resources tend to be invested in these representations, Shneiderman’s sentiments seem more a response to the lack of attention to other, equally important issues in the design. Such unmotivated anthropomorphic details may be unnecessary: investigations using simple text scripting demonstrate that “even minimal cues can mindlessly evoke a wide range of scripts, with strong attitudinal and behavioural consequences” [20].
Even if this were not the case, Shneiderman’s argument is valid when the system in question is intended as a tool, but not when the attempt is to develop a socially intelligent entity. Developing AI in social robotics effectively requires human-like traits, as the goal is a human-centric machine. In the social setting, anthropomorphic behaviour may easily be as clear, comprehensible and predictable as direct-manipulation interfaces (and of course there must be a balance).
Moreover, Shneiderman’s hesitancy over the role of anthropomorphism in HCI rests on obscuring the important distinction between the metaphorical ascription of human-like qualities to non-human entities and the actual explanation of others’ behaviours in terms of human-oriented intentions and mental states. As Searle points out, there is an important, if subtle, difference between what he terms as-if intentionality, used to rationalise, say, a robot’s behaviour, and the intrinsic intentionality found in humans [21].
Nass and Moon demonstrate through experimentation that individuals “mindlessly apply social rules and expectations to computers” [20]. Interestingly, the authors are against anthropomorphism, as they base it on the belief that the computer is not a person and does not warrant human treatment or attribution. This highlights a predominant theme in HCI, which is to view anthropomorphism as “portraying inanimate computers as having human-like personality or identity” [22], or projecting intrinsic intentionality. This differs from the broader psychological perspective of anthropomorphism, which also includes metaphorically ascribing human-like qualities to a system based on one’s interpretation of its actions. In this paper, anthropomorphism is a metaphor rather than an explanation of a system’s behaviour.
The stigma of anthropomorphism in the natural sciences is similarly based partly on rationalising animal or plant behaviour through models of human intentionality and behaviour. The stigma stems not from the appropriateness of describing behaviour in terms of anthropomorphic paradigms, but from cases where such paradigms are used as explanations of that behaviour. Such explanations are incorrect, but anthropomorphism is not restricted to only this.
3.1 Anthropomorphism and robots
Similarly, social robots should exploit people’s expectations of behaviours rather than necessarily trying to force people to believe that the robot has human reasoning capabilities. Anthropomorphism should not be seen as the “solution” to all human-machine interaction problems; rather, it needs further research to provide the “language” of interaction between man and machine. It facilitates rather than constrains the interaction because it incorporates the underlying principles and expectations people use in social settings in order to fine-tune the social robot’s interaction with humans.
The role of anthropomorphism in robotics is not to build an artificial human, but rather to take advantage of it as a mechanism through which social interaction can be facilitated. It constitutes the basic integration/employment of “humanness” in a system: from its behaviours, to its domains of expertise and competence, to its social environment, in addition to its form. Once domestic robots progress beyond the washing machine and start moving around our physical and social spaces, their role and our dealings with them will change significantly. It is embracing a balance between these anthropomorphic qualities for bootstrapping and robots’ inherent advantages as machines, rather than seeing the machine aspect as a disadvantage, that will lead to their success.
The anthropomorphic design of human-machine
interfaces has been inevitable.
As pointed out, to avoid the common HCI pitfall of missing the point of anthropomorphism, the important criterion is to seek a balance between people’s expectations and the machine’s capabilities. Understanding the mechanisms underlying our tendencies to anthropomorphise would lead to sets of solutions for realising social robots, not just the single engineering solution found in designing only synthetic humans.
Still the questions remain. Is there a notion of “optimal anthropomorphism”? What is the ideal set of human features that could supplement and augment a robot’s social functionality? When does anthropomorphism go too far? Using real-world robots poses many interesting problems. Currently a robot’s physical similarity to a person is only starting to embrace basic humanlike attributes, predominantly the physical aspect in humanoid research (see http://www.androidworld.com), but just as in HCI, rigorous research is needed to identify the physical attributes important in facilitating social interaction.
A robot that does not embrace the anthropomorphic paradigm in some form is likely to face a persistent bias against acceptance into the social domain, a bias which becomes apparent when people ascribe mental states to robots. As shown in several psychological and human-robot interaction experiments [12][23], and as pointed out by Watt [24], familiarity may also ease social acceptance and even tends to increase people’s tendency to anthropomorphise [15].
An important question is how one manages anthropomorphism. One aspect is the degree and nature of visual anthropomorphic features. Roboticists
have recently started to address what supplementary
modalities to physical construction could be employed
for the development of social relationships between a
physical robot and people. Important arenas include
expressive faces [1][2][25] often highlighting the
importance of making eye contact and incorporating
face and eye tracking systems. Examples demonstrate
two methodologies that employ either a visually iconic
[1][26] or a strongly realistic humanlike construction
(i.e. with synthetic skin and hair) [2] for facial gestures
in order to portray artificial emotion states. The iconic
head defines a degree of anthropomorphism which is
employed through the robot’s construction and
functional capabilities. This constrains and effectively
manages the degree of anthropomorphism employed.
Building mannequin-like robotic heads, where the
objective is to hide the “robotic” element as much as
possible and blur the issue as to whether one is talking
to a machine or a person, results in effectively
unconstrained anthropomorphism and a fragile
manipulation of robot-human social interaction, and is
reminiscent of Shneiderman’s discontent with
anthropomorphism.
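To ground the face- and eye-tracking component mentioned above, consider a minimal sketch (the choice of OpenCV and its standard Haar-cascade detector is an illustrative assumption; the paper prescribes no particular library): locating a face in a camera frame yields a point towards which a robot head could orient its gaze.

    import cv2

    # Minimal face-tracking sketch: locate a face in a camera frame so a
    # robot head could orient its gaze towards it. Illustrative only.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_centre(frame):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(grey, scaleFactor=1.1,
                                              minNeighbors=5)
        if len(faces) == 0:
            return None                    # no face found: nothing to look at
        x, y, w, h = faces[0]              # track the first detected face
        return (x + w // 2, y + h // 2)    # pixel point for gaze control

    cap = cv2.VideoCapture(0)              # default camera
    ok, frame = cap.read()
    if ok:
        print("face centre (px):", face_centre(frame))
    cap.release()

In a physical head, the offset of this point from the image centre would drive the pan/tilt actuators, producing the appearance of eye contact.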
Interestingly, it can be argued that the most successful implementations of expressive facial features are through more mechanistic and iconic heads such as Anthropos & Joe [26] and Kismet [1]. Strongly humanlike facial construction in robotics [2] has to contend with the minute subtleties of facial expression, a feat by no means trivial. Contrarily, it can be argued that successful, highly humanlike facial expression is an issue of resolution. Researchers tend to bombard the artificially-intelligent-robot enigma with everything available, with few looking for the minimum key features required to realise an “intelligent” robot. Consequently, in seeking to develop a social robot, the goal of many is the synthetic realistic human. From an engineering perspective it is more difficult to realise strong anthropomorphism as found in synthetic humans (e.g. [2]), but in order to “solve” AI through hard research, it is a simpler, more justifiable route to take. A more stringent research approach would not throw everything at the problem to force some answer, however constrained, but rather explore and understand the minimal engineering solution needed to
achieve a socially-capable artificial “being”. The hard
engineering approach is not mutually exclusive of the
solution to realising the grail of AI. After all,
engineering research plays a strong role in AI research
(especially robotics), but the rest of AI research is why
such engineering feats are needed. The path to the
solution should embrace a more holistic approach
combining both engineering solutions as well as
tackling core AI research questions, even knowing what
“cheats” to employ. A useful analogy is in not trying to
replicate a bird in order to fly but rather recognising
those qualities that lead to the invention of the plane.
From a social robotics perspective, the issue is not whether people believe that robots are “thinking”, but rather how to take advantage of the social expectations people still hold in social settings. If one bootstraps on these expectations, i.e. by exploiting anthropomorphism, the social robot can be made less frustrating to deal with and be perceived as more helpful.
Anthropomorphising robots is not without its
problems. Incorporating life-like attributes evokes
expectations about the robot’s behavioural and cognitive
complexity that may not be maintainable [27].
Similarly it can be argued that this constrains the
range of possible interpretations of the robot depending
on the person’s personality, preferences and the
social/cultural context. This in fact can be reversed
where constraining the robot to human-like
characteristics can facilitate interpretation.
4 Artificial Sociability
It is then natural to discuss sociality of robots.
Artificial sociability in social robotics is the
implementation of those techniques prevalent in human
social scenarios to artificial entities (such as
communication and emotion) in order to facilitate the
robot’s and people’s ability to communicate.
The degree of social interaction is achieved through a developmental and adaptive process. The minimum requirement for social interaction is the ability to adapt in some way to social situations and to communicate understanding of these situations, through, for example, a screen and keyboard or a synthetic speech system, in order to support the required “data-transfer”. This communicative functionality, in addition to emotive expression, extends to the development and maintenance of complex social models and the ability to competently engage in complex social scenarios.
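This minimum requirement can be made concrete with a small sketch; the interface and class names below are illustrative assumptions rather than an architecture proposed in the paper.

    from abc import ABC, abstractmethod

    class SocialInterface(ABC):
        # The minimal loop: sense the social situation, adapt, and
        # communicate understanding through some available channel.

        @abstractmethod
        def perceive_situation(self) -> dict:
            """Sense the current social context (e.g. who is present)."""

        @abstractmethod
        def adapt(self, situation: dict) -> None:
            """Adjust internal state or behaviour to the situation."""

        @abstractmethod
        def communicate(self, message: str) -> None:
            """Express understanding via screen, speech, gesture, ..."""

    class SpeechChannelRobot(SocialInterface):
        def perceive_situation(self) -> dict:
            return {"person_present": True}          # stub perception

        def adapt(self, situation: dict) -> None:
            self.greeting_due = situation.get("person_present", False)

        def communicate(self, message: str) -> None:
            print(f"[synthetic speech] {message}")   # stand-in for a TTS system

    robot = SpeechChannelRobot()
    robot.adapt(robot.perceive_situation())
    if robot.greeting_due:
        robot.communicate("Hello - I can see you are here.")

The communication channel (here a print stand-in for synthetic speech) is deliberately interchangeable: the requirement is the “data-transfer”, not any particular modality.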
4.1 Communication & Emotion
Communication occurs in multiple modalities,
principally the auditory and visual fields. For artificial sociability the social robot does not necessarily need to communicate information in as complex a manner as people, but just well enough to communicate with them. For example, natural
language is inherently rich and thus can be fuzzy, so
strong anthropomorphism can have its limitations.
However, the social robot needs to be able to
communicate enough to produce and perceive
expressiveness (as realised in speech, emotional
expressions, and other gestures). This anthropomorphic
ability can contribute to a person’s increased
perception of the social robot’s social capabilities (and
hence acceptance as a participant in the human social
circle). Social competence is important in contributing
to people’s perception of another’s intelligence [22].
This ability to understand the emotions of others is effectively the development of a degree of Emotional Intelligence (EI) in a robotic system. Daniel Goleman, a clinical psychologist, defines EI as “the ability to monitor one’s own and others’ emotions, to discriminate among them, and to use the information to guide one’s thinking and actions” [28].
Goleman discusses five basic emotional competencies in the context of emotional intelligence: self-awareness, managing emotions, motivation, empathy and social skills. This approach is a departure from the traditional attitude, still prevalent, that intelligence can be divided into verbal and non-verbal (performance) types, which are, in fact, the abilities that traditional IQ tests assess. While numerous, often contradictory theories of emotion exist, it is accepted that emotion involves a dynamic state consisting of cognitive, physical and social events.
As the case of Phineas Gage in 1848 demonstrated, an emotional capacity is fundamental to the exhibition of social functionality in humans (see [29]). Salovey and Mayer [30], in discussing emotional intelligence, have reinforced the conclusion that emotionality is an inherent property of social intelligence [29]. When coupled with Barnes and Thagard’s [31] hypothesis that “emotions function to reduce and limit our reasoning, and thereby make reasoning possible”, a foundation for implementing an emotional model in the development of a social robot becomes apparent. A social robot requires those mechanisms that facilitate its rationalisation of possibly overwhelmingly complex social and physical environments.
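As an illustrative sketch of how such an emotional model might limit reasoning in practice (the actions, safety scores and threshold formula below are assumptions for demonstration, not values from [31]), an emotional state can prune the candidate actions before deliberation, keeping the search space tractable:

    def prune_by_emotion(candidate_actions, fear_level):
        # Higher fear narrows deliberation to safer options only,
        # shrinking the search space before any reasoning takes place.
        threshold = 0.3 + 0.6 * fear_level
        return [action for action, safety in candidate_actions
                if safety >= threshold]

    actions = [("approach person", 0.4), ("wave", 0.8),
               ("retreat", 0.95), ("speak", 0.7)]
    print(prune_by_emotion(actions, fear_level=0.2))  # most options survive
    print(prune_by_emotion(actions, fear_level=0.9))  # only the safest remains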
From an observer perspective, one could ask whether the attribution of artificial emotions to a robot is analogous to the Clever Hans Error [32], where the meaning, and in fact the result, depends primarily on the observer and not the initiator. Possibly not. As emotions are fundamentally social attributes, they constitute communicative acts from one person to another. Emotions have a contextual basis for interpretation; otherwise the communicative act of expressing an emotion may not succeed and may be misinterpreted. The judicious use of the expressive social functionality of artificial emotions in social robotics is therefore very important. As demonstrated through numerous face-robot research projects (e.g. [2]), this treads a fine line between highly complex expressive nuances and anticipated behaviour.
4.2 Social Mechanisms
What are the motivations that people could have in
engaging in social interactions with robots? The need to
have the robot undertake a task, or satisfy a person’s
needs/requirements promotes the notion of realising a
mechanism for communication with the robot through
some form of social interaction.
It has often been said that the ultimate goal of the human-computer interface should be to make the interface “disappear”. Social robotics is an alternative approach to ubiquitous computing: the interaction should be rendered so transparent that people do not notice the presence of the interface. As the social robot could be viewed as the ultimate human-machine interface, it should similarly display a seamless, coherent degree of social capability appropriate both to the interaction and to its being a machine. What matters is the ease and efficiency of communication with the robot, as well as the assistant’s capabilities and persona.
This inherently necessitates that the robot be sufficiently empowered, through mechanisms of personality and identity traits, to facilitate a human’s acceptance of the robot’s own mechanisms for communication and social interaction. It is important that the robot not be perceived as having to perform in a manner which compromises or works against either its physical or social capabilities. This argues against the apparent necessity, perceived by those building strong humanoid robots, of developing a full motion, perception and aesthetic likeness to humans in a machine.
Contrary to the popular belief that the human form is the ideal general-purpose functional basis for a robot, robots have the opportunity to become something different. Through strong mechanistic and biological solutions to engineering and scientific problems, including such measurement devices as Geiger counters, infra-red cameras, sonar, radar and bio-sensors, a robot’s functionality in our physical and social space is clear. It can augment our social space rather than “take over the world”. A robot’s form should therefore not adhere to the constraining notion of strong humanoid functionality and aesthetics, but rather employ only those characteristics that facilitate social interaction with people when required.
In order to facilitate the development of complex social scenarios, the use of basic social cues can bootstrap a person’s ability to develop a social relationship with a robot. Stereotypical communication cues provide obvious mechanisms for communication between robots and people: nodding or shaking the head are clear indications of acceptance or rejection (given a defined cultural context), as sketched below. Similarly, work on facial gestures with Kismet [1] and motion behaviours [3][33] demonstrates how mechanistic-looking robots can be engaging and can greatly facilitate one’s willingness to develop a social relationship with robots.
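A minimal sketch of such stereotypical cues follows; the cue names and the single cultural context are illustrative assumptions, underscoring that any such mapping is only valid within a defined culture.

    # Stereotypical head-gesture cues mapped to communicative meanings.
    # The cue names and cultural contexts are illustrative assumptions.
    CUE_MEANINGS = {
        "western": {"nod": "accept", "shake": "reject"},
        # other cultural contexts may invert or extend these mappings
    }

    def interpret_head_gesture(gesture: str, culture: str = "western") -> str:
        return CUE_MEANINGS.get(culture, {}).get(gesture, "unknown")

    print(interpret_head_gesture("nod"))    # accept
    print(interpret_head_gesture("shake"))  # reject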
Another technique to create an illusion of “life” is to
implement some form of unpredictability in the motion
and behaviour of the robot to make it appear more
“natural”. Such techniques as Perlin Noise [34] have
demonstrated the power of these strategies in
animation, surface textures, and shapes.
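As a brief sketch of this strategy (only the noise construction follows Perlin [34]; the gradient table size, amplitude and the idle-pan application are illustrative assumptions), smooth 1D gradient noise can drive an idle head motion so that it drifts coherently rather than jittering:

    import math, random

    random.seed(7)
    GRADIENTS = [random.uniform(-1.0, 1.0) for _ in range(256)]

    def fade(t):
        # Perlin's interpolant 6t^5 - 15t^4 + 10t^3: zero slope at t=0,1
        return t * t * t * (t * (t * 6 - 15) + 10)

    def noise1d(x):
        i0 = math.floor(x) & 255          # lattice point below x
        i1 = (i0 + 1) & 255               # lattice point above x
        t = x - math.floor(x)
        n0 = GRADIENTS[i0] * t            # gradient contributions
        n1 = GRADIENTS[i1] * (t - 1.0)
        return n0 + fade(t) * (n1 - n0)   # smooth interpolation

    def idle_pan_angle(t_seconds, amplitude_deg=10.0, speed=0.3):
        # Slow, coherent drift of a head pan angle rather than jitter
        return amplitude_deg * noise1d(t_seconds * speed)

    for step in range(5):
        t = step * 0.5
        print(f"t={t:.1f}s  pan={idle_pan_angle(t):+6.2f} deg")

Because the noise is continuous and slope-limited, the resulting motion reads as “natural” wandering rather than the random twitching that uncorrelated noise would produce.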
Collectively, these mechanisms stem from a requirement to develop a degree of artificial sociability in robots and, consequently, artificial “life”. But if the robot becomes “alive”, has science succeeded? The following section briefly discusses how this notion of perceived robot “life” through artificial sociability should feed back into the arguments for and against employing anthropomorphism in social robotics.
5 Fear of Robots
“RADIUS: I don’t want any master. I know everything for myself”. Karel Capek’s original robots from his 1920 play “Rossum’s Universal Robots” were designed as artificial slaves intended to relieve humans of the drudgery of work. Unfortunately, the story develops into the robots becoming more intelligent, becoming soldiers and ultimately wiping out the human race. Oops.
Asimov’s laws from 1950 are inherently based on a
robot’s interaction with people. Interesting stories in his
books “I, Robot” and “The Rest of the Robots” present
scenarios where the often-bizarre behaviours of the
robot characters are explained according to their
adherence to these laws. An example is the telepathic
robot that lies in order not to emotionally hurt people
but inevitably fails to see the long-term damage of this
strategy.
When the robot itself is perceived as making its own decisions, people’s perspective of computers as simply tools will change. Consequently, issues of non-fear-inducing function and form are paramount to the success of people’s willingness to enter into a social interaction with a robot. The designer now has to strive towards facilitating this relationship, which may necessitate defining the scenario of the interaction. The physical construction of the robot plays an important role in such social contexts. The simple notion of a robot having two eyes in what could be characterised as a head would intuitively guide where a person focuses when speaking to the robot. As highlighted in the PINO research, “the aesthetic element [plays] a pivotal role in establishing harmonious co-existence between the consumer and the product” [35]. Such physical attributes as size, weight, proportions and motion capabilities are basic observable modalities that define this relationship (PINO is 75cm tall).
While anthropomorphism in robotics raises issues where the taxonomic legitimacy of the classification ‘human’ is under threat, a number of viewpoints adopted by researchers highlight different possible futures for robotics: (a) machines will never approach human abilities; (b) robots will inevitably take over the world; (c) people will become robots in the form of cyborgs.
A fourth possibility exists. Robots will become a
“race” unto themselves. In fact, they already have.
Robots currently construct cars, clean swimming pools, mow the lawn, play football and act as pets, and the list is growing very quickly. Technology is now providing robust solutions to the mechanistic problems that have constrained robot development until recently, thereby allowing robots to permeate all areas of society, from work to leisure. Robots are not to be feared, but rather employed.
6 Future Directions
Robots will expand from pure function, as found in production assembly operations, to more social environments, such as responding and adapting to a person’s mood. If a robot could perform a “techno handshake”, monitoring a person’s stress level through galvanic skin response and heart rate and using an infrared camera system on the person’s face, a traditional greeting would take on a new meaning. The ability of a robot to gather biological information and use sensor fusion to augment its diagnosis of the human’s emotional state through, for example, the “techno handshake”, illustrates how a machine can have an alternate technique for obtaining social information about the people in its social space. This strategy effectively increases people’s perception of the social robot’s “emotional intelligence”, thus facilitating its social integration.
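A hedged sketch of such a fusion step follows; the sensor ranges and weights below are placeholder assumptions for demonstration, not calibrated physiological values.

    from dataclasses import dataclass

    @dataclass
    class HandshakeReading:
        gsr_microsiemens: float    # skin conductance sensed through the palm
        heart_rate_bpm: float      # pulse sensed through the grip
        face_temp_celsius: float   # facial temperature from an infrared camera

    def normalise(value, low, high):
        # Clamp to [0, 1] within an assumed resting-to-stressed range
        return min(max((value - low) / (high - low), 0.0), 1.0)

    def estimate_arousal(r: HandshakeReading) -> float:
        gsr = normalise(r.gsr_microsiemens, 2.0, 20.0)
        hr = normalise(r.heart_rate_bpm, 60.0, 120.0)
        temp = normalise(r.face_temp_celsius, 33.0, 37.0)
        # Weighted fusion; the weights are placeholders, not empirical values
        return 0.5 * gsr + 0.3 * hr + 0.2 * temp

    reading = HandshakeReading(gsr_microsiemens=11.0,
                               heart_rate_bpm=95.0,
                               face_temp_celsius=35.5)
    print(f"estimated arousal: {estimate_arousal(reading):.2f}")

Even this crude weighted sum illustrates the design point: the robot’s estimate is a machine-specific reading of social information, not a replication of human empathy.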
The ability of a machine to know information about us using measurement techniques unavailable to us is an example of how the social robot, and robots in general, can have their own function and form without challenging ours. What would be the role of the social robot? If therapeutic, for, say, psychological analysis and treatment, would such a robot require full “humanness” to allow this intensity of social interaction to develop? The only apparent validation for building strong humanoid robots lies in seeking to directly replace a human interacting with another person. It is in seeking to embrace fundamental needs and expectations for internally motivated behaviour (for one’s own means) that could encourage strong humanoid robotics. Examples include situations requiring strong introspection, as in psychotherapy, or the satisfaction/gratification of personal needs or desires, such as found in the adult industry.
Does work on the development of socially capable robots herald a fundamental change in AI and robotics research, where work is based not on a desire to understand how human cognition works but rather on simply designing how a system may appear intelligent? The answer is obviously no. In order to design a system that we could classify as “intelligent” we need to know what intelligence is, and better still, to understand what it is.
7 Conclusion
It is argued that robots will be more acceptable to humans if they are built in our own image. Arguments against this viewpoint can be countered with issues of resolution, where the robot’s expressive and behavioural granularity may not be fine enough. But if the robot were perceived as being human, then effectively it should be human and not a robot. From a technological standpoint, can building a mechanistic, digital, synthetic version of man be anything less than a cheat when man is neither mechanistic, digital nor synthetic? Similar to the argument not to constrain virtual-reality worlds to our physical world, are we trying to make a robot so animalistic (including humanistic) that we miss how a robot can constructively contribute to our way of life?
The perspectives discussed here have tended towards
the pragmatic by viewing the robot as a machine
employing such humanlike qualities as personality,
gestures, expressions and even emotions to facilitate its
role in human society. The recent film AI (Warner
Brothers, 2001) highlights the extended role that robot
research is romantically perceived as aiming to achieve.
Even the robot PINO [35] is portrayed as addressing “its
genesis in purely human terms” and analogies with
Pinocchio are drawn, most notably in the name. But,
does the robot itself have a wish to become a human
boy? Or do we have this wish for the robot? We should
be asking why.
Experiments comparing a poor implementation of a human-like software character with a better implementation of a dog-like character [33] promote the idea that a restrained degree of anthropomorphic form and function is the optimal solution for an entity that is not a human.
It can be argued that the ability of a robot to successfully fool a person into thinking it is intelligent, effectively through social interaction, will remain an elusive goal for many years to come. But equally, machine intelligence could already exist in “infant” form. Robots are learning to walk and talk. The issue is rather: should they walk? Walking is a very inefficient motion strategy for environments that are more and more wheel-friendly. Should robots become the synthetic human? Is attacking the most sophisticated entity we know, ourselves, and trying to build an artificial human the ultimate challenge for roboticists? But researchers will not be happy building anything less than a fully functional synthetic human robot. It’s just not in their nature.
There is the age-old paradox where technologists
predict bleak futures for mankind because of their
research directions but nevertheless hurtle full steam
ahead in pursuing them. Strong humanoid research
could be just a means to play God and create life. But
fortunately, robots will not take over the world. They
will be so unique that both our identity and theirs will
be safe. As an analogy, nuclear energy has provided
both good and bad results. We similarly need to not let
one perspective cloud the other in social robotics and
robotics as a whole.
While anthropomorphism is clearly a very complex
notion, it intuitively provides us with very powerful
physical and social features that will no doubt be
implemented to a greater extent in social robotics
research in the near future. The social robot is the next
important stage in robot research and will fuel
controversy over existing approaches to realising
artificial intelligence and artificial consciousness.
8 References
[1] Breazeal, C. Sociable Machines: Expressive Social Exchange Between Humans and Robots. Sc.D. dissertation, Department of Electrical Engineering and Computer Science, MIT, 2000.
[2] Hara, F., Kobayashi, H. Use of Face Robot for Human-Computer Communication. Proceedings of the International Conference on Systems, Man and Cybernetics, p. 10, 1995.
[3] Duffy, B.R., Joue, G., Bourke, J. Issues in Assessing Performance of Social Robots. 2nd WSEAS Int. Conf. RODLICS, Greece, Sept. 25-28, 2002.
[4] Duffy, B. The Social Robot. Ph.D. Thesis, Department of Computer Science, University College Dublin, 2000.
[5] Foner, L. What’s agency anyway? A sociological case study. Proceedings of the First International Conference on Autonomous Agents, 1997.
[6] Kummer, H., Daston, L., Gigerenzer, G., Silk, J. The social intelligence hypothesis. In Weingart et al. (eds), Human by Nature: Between Biology and the Social Sciences. Hillsdale, NJ: Lawrence Erlbaum Assoc., pp. 157-179.
[7] Thomas, F., Johnston, O. The Illusion of Life: Disney Animation. Hyperion, revised edition, ISBN 0786860707, 1995.
[8] Turing, A.M. Computing machinery and intelligence. Mind, Vol. 59, pp. 433-460, 1950.
[9] Weizenbaum, J. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9, 1966.
[10] Alicke, M.D., Smith, R.H., Klotz, M.L. Judgments of physical attractiveness: The role of faces and bodies. Personality and Social Psychology Bulletin, 12(4), pp. 381-389, 1986.
[11] Borkenau, P. How accurate are judgments of intelligence by strangers? Annual Meeting of the American Psychological Association, Toronto, Ontario, Canada, August 1993.
[12] Kiesler, S., Goetz, J. Mental models and cooperation with robotic assistants. http://www2.cs.cmu.edu/~nursebot/web/papers/robot_chi_nonanon.pdf
[13] Dennett, D. Kinds of Minds. New York: Basic Books, 1996.
[14] Caporael, L.R. Anthropomorphism and mechanomorphism: Two faces of the human machine. Computers in Human Behavior, 2(3), pp. 215-234, 1986.
[15] Eddy, T.J., Gallup, G.G., Jr., Povinelli, D.J. Attribution of cognitive states to animals: Anthropomorphism in comparative perspective. Journal of Social Issues, 49, pp. 87-101, 1993.
[16] Tamir, P., Zohar, A. Anthropomorphism and Teleology in Reasoning about Biological Phenomena. Science Education, 75(1), pp. 57-67, 1991.
[17] Krementsov, N.L., Todes, D.P. On Metaphors, Animals, and Us. Journal of Social Issues, 47(3), pp. 67-81, 1991.
[18] Kennedy, J.S. The New Anthropomorphism. Cambridge University Press, 1992.
[19] Shneiderman, B. A nonanthropomorphic style guide: Overcoming the humpty-dumpty syndrome. The Computing Teacher, October, pp. 9-10, 1988.
[20] Nass, C., Moon, Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), pp. 81-103, 2000.
[21] Searle, J. Intentionality: An Essay in the Philosophy of Mind. New York: Cambridge University Press, 1983.
[22] Isbister, K. Perceived intelligence and the design of computer characters. Lifelike Characters Conference, Snowbird, Utah, 28 September 1995.
[23] Gill, M.J., Swann, W.B., Silvera, D.H. On the genesis of confidence. Journal of Personality and Social Psychology, Vol. 75, pp. 1101-1114, 1998.
[24] Watt, S. A Brief Naive Psychology Manifesto. Informatica, Vol. 19, pp. 495-500, 1995.
[25] Cozzi, A., Flickner, M., Mao, J., Vaithyanathan, S. A Comparison of Classifiers for Real-Time Eye Detection. ICANN, pp. 993-999, 2001.
[26] Duffy, B.R. The Anthropos Project: http://www.mle.ie/research/socialrobot/index.html
[27] Dryer, D.C. Getting personal with computers: how to design personalities for agents. Applied Artificial Intelligence Journal, Special Issue on “Socially Intelligent Agents”, 13(2), 1999.
[28] Goleman, D. Emotional Intelligence. Bantam Books, 1997.
[29] Damasio, A. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam’s Sons, 1994.
[30] Salovey, P., Mayer, J.D. Emotional intelligence. Imagination, Cognition, and Personality, 9, pp. 185-211, 1990.
[31] Barnes, A., Thagard, P. Emotional Decisions. Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society, Erlbaum, pp. 426-429.
[32] Pfungst, O. Clever Hans (The Horse of Mr. von Osten): A Contribution to Experimental Animal and Human Psychology. New York: Holt, Rinehart, and Winston, 1965.
[33] Kiesler, S., Sproull, L., Waters, K. A prisoner’s dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology, 70(1), pp. 47-65, 1996.
[34] Perlin, K. An image synthesizer. Computer Graphics (SIGGRAPH ’85 Proceedings), Vol. 19, pp. 287-296, July 1985.
[35] Yamasaki, F., Miyashita, T., Matsui, T., Kitano, H. PINO the Humanoid: A basic architecture. Proc. of the Fourth International Workshop on RoboCup, Melbourne, Australia, August 31, 2000.