University Politehnica of Bucharest
Faculty of Automatic Control and Computers
Department of Computer Science
Artificial Emotion Simulation
Techniques for Intelligent Virtual
Characters
Research Report #2
A Model of Affective Behaviour
Author:
Valentin Lungu, B.Sc.
Scientific adviser:
Adina Magda Florea, Ph.D.
Bucharest
January 2012
Abstract
The study of what is now called emotional intelligence has revealed yet
another aspect of human intelligence. Research in human emotion has provided insight into how emotions influence human cognitive structures and
processes, such as perception, memory management, planning, reasoning and
behavior. This information also provides new ideas to researchers in the fields
of affective computing and artificial life about how emotion simulation can
be used in order to improve artificial agent behavior.
Virtual characters that act intelligently based on their goals and desires, rather than standing still waiting for the user and interacting through pre-scripted dialog trees, bring a great deal of depth to the experience. Evolving virtual worlds are far more interesting than static ones. Applications in which activity takes place independently of the user, such as populating a virtual world with crowds, traffic, virtual humans and non-humans, and behaviour and intelligence modeling [6], become possible by integrating artificial intelligence and virtual reality tools and techniques.
Emotion theories attribute to emotions several effects on cognitive, decision-making and team-working capabilities, such as directing and focusing attention, motivation, perception, and behavior towards objects, events and other agents.
The main goal of our work is to provide virtual characters with the ability to express emotions. Our approach is based on the idea that emotions serve a fundamental function in human behavior, and that in order to provide artificial agents with rich, believable affective behavior, we need to develop an artificial system that serves the same purpose. We believe that emotions act as a subsystem that enhances human behavior by stepping up brain activity in arousing circumstances, directing attention and behavior, establishing the importance of events and acting as motivation.
This thesis analyzes important aspects of human emotions and models them in order to create more believable artificial characters and more effective agents and multi-agent systems. It deals with the effects that emotions have on perception, memory, learning and reasoning, and with their integration into an artificial agent architecture. We show that emotion simulation not only serves to improve human-computer interaction and behavior analysis, but can also be used to improve the performance and effectiveness of artificial agents and multi-agent systems.
We study various emotion theories and computational emotion simulation models and, drawing on their insights, we propose a model of artificial agents that attempts to simulate the effects of human-like emotions on the cognitive and reactive abilities of artificial agents.
This thesis summarizes the psychological theories that our computational emotion model is based on, defines the key concepts of the Newtonian emotion model that we developed, and describes how they work within an agent architecture. The Newtonian emotion model is a lightweight and scalable emotion representation and evaluation model to be used by virtual agents. We also designed a plug-and-play emotion subsystem that can be integrated into any agent architecture. We further describe the first theoretical prototype for our architecture, an emotion-driven artificial agent architecture that not only attempts to provide complex, believable behavior and representation for virtual characters, but also attempts to improve agent performance and effectiveness by mimicking human emotion mechanics such as motivation, attention narrowing and the effects of emotion on memory; we also explain its shortcomings.
We have chosen role-playing games, where virtual characters exist and interact in virtual environments, as our application domain for the unique challenges and constraints that arise for artificial intelligence techniques geared towards this type of application: being forced to run alongside CPU-intensive processes such as graphics and physics simulations, as well as the fact that role-playing games are unique in that they have social interactions built into the game. We believe that believable emotional expression and behavior will add a new dimension and progress virtual environments beyond visually compelling but essentially empty scenes, incorporating other aspects of virtual reality that require intelligent behavior.
Chapter 1
Introduction
Several beneficial effects on human cognition, decision-making and social interaction have been attributed to emotions, while several studies and projects in the field of artificial intelligence have shown the positive effects of artificial emotion simulation in human-computer interaction and simulation applications. This thesis presents various emotion theories and artificial emotion simulation models, and proposes a model of emotions that tries to reproduce their human-like effects on cognitive structures, processes, patterns and functions in artificial agents through emotion simulation and emotion-based memory management, thus leading to more believable agents while also improving agent performance.
Autonomous artificial entities are required to be able to exist in a complex environment, respond to relevant changes in the environment, reason about the selection and application of operators and actions, and pursue and accomplish goals. This type of agent has applications in various fields, including character behavior in virtual environments, as used in e-learning applications and serious games.
Emotion integration is necessary in such agents in order to generate complex believable behavior. It has been shown that emotions influence human
cognition, interaction, decision-making and memory management in a beneficial and meaningful way. Several studies in the field of artificial intelligence
have also been conducted ([16, 27, 59]) and have shown the necessity of emotion simulation in artificial intelligence, especially when human-computer
interaction is involved.
Emotions are a part of the human evolutionary legacy [22], serving adaptive ends and acting as a heuristic in time-critical decision processes (such as fight-or-flight situations); however, emotions have also become an important part of the multi-modal process that is human communication, to the point that their absence is noticed and bothersome.
Affective computing is the study and development of systems that can recognize, interpret, process and simulate human affect, in order to add this extra dimension to our interaction with machines. Our goal is to create believable intelligent artificial characters capable of displaying affective behavior. To achieve this, we attempt to provide artificial characters with an emotional layer similar to that of humans, serving adaptive ends. The emotion subsystem influences agent behavior by establishing the importance of events and by influencing knowledge processing; it also provides the agent with an emotional state that the agent is able to express and that further influences its behavior. We first describe our Newtonian emotion system, which defines the way we represent emotions and how external and internal forces act upon them, based on the work of psychologist R. Plutchik [67] and on Newton's laws of motion. Based on the psychological theory of R. Lazarus [46], we then devised an emotional subsystem that interacts with and influences an agent architecture. The system also takes into account the effects that emotions have on human perception and cognitive processes.
As an application domain, we chose virtual characters in computer role-playing games (CRPGs), as they are a type of game meant to simulate all manner of human interactions (be they gentle or violent in nature). This type of game provides us with the complex, dynamic environments in which modern agents are expected to operate. It also focuses on individual characters and the way they respond to relevant changes in their environment (behavior). Last, but not least, the limited action set available to characters and current-generation game engines significantly reduce the overhead of proof-of-concept applications.
Applications of the technology are not limited to CRPGs; it is useful in several other endeavours, such as behavior simulation, interactive storytelling, and, indeed, any application involving human-computer interaction (for example, online social sites could recommend events and friends based on mood).
Chapter 2
Overview of artificial emotion
simulation models
We begin this chapter by presenting several popular emotion models with different goals and approaches to emotion simulation, including the findings of the PETEEI study, which show the importance of learning and evolving emotions.
As shown in this chapter, there are several key aspects to take into consideration when designing a memory and emotion architecture, and we believe that all of them are covered by our model.
2.1
Ortony, Clore and Collins (OCC)
The OCC (Ortony, Clore & Collins) model has become the standard model
for the synthesis of emotion. The main idea of this model is that emotions
are valenced reactions to events, agents or objects (the particular nature
being determined by contextual information). This means that the intensity
of a particular emotion depends on objects, agents and events present in
the environment [59]. In this model, emotions are reactions to perceptions
and their interpretation is dependent on the triggering event, for example:
an agent may be pleased or displeased by the consequences of an event, it
may approve or disapprove of the actions of another agent or like or dislike
an object. Another differentiation taken into account within the model is
the target of an event or action and the cause of the event/action: events
can have consequences for others or for oneself and an acting agent can be
another or oneself. These consequences (whether for another or oneself) can
be divided into desirable or undesirable and relevant or irrelevant.
In the OCC model, emotion intensity is determined by three central intensity variables: desirability, the reaction towards events, evaluated with regard to goals; praiseworthiness, which relates to the actions of other agents (and of the self), evaluated with regard to standards; and appealingness, which depends on the reaction to objects and whose value is determined with regard to attitudes.
The authors also define a set of global intensity variables: sense of reality, proximity, unexpectedness and arousal influence all three emotion categories. There is also a threshold value for each emotion, below which the emotion is not subjectively felt. Once an emotion is felt, however, it influences the subject in meaningful ways, such as focusing attention, increasing the prominence of events in memory, and influencing judgments.
As shown, according to the OCC model, behavior is a response to an
emotional state in conjunction with a particular initiating event [59]. As in
the OCC model, in our model, emotions are valenced reactions to events,
agents or objects determined by contextual information; in fact we will be
using a simplified version of the OCC emotion evaluation process similar to
the one used in [27] in early validation of our model.
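To make the appraisal structure concrete, the sketch below is entirely our own illustration in Python; the reaction labels, the threshold value and the function itself are hypothetical, not part of the OCC specification:

    # Hypothetical sketch of an OCC-style valenced reaction. The stimulus
    # kind selects the reaction family; the signed value plays the role of
    # desirability, praiseworthiness or appealingness; emotions below the
    # threshold are not subjectively felt. All names are illustrative.
    def occ_appraise(kind, value, threshold=0.1):
        labels = {
            ("event", True): "pleased", ("event", False): "displeased",
            ("action", True): "approving", ("action", False): "disapproving",
            ("object", True): "liking", ("object", False): "disliking",
        }
        if abs(value) < threshold:
            return None  # below the felt threshold
        return labels[(kind, value >= 0)], abs(value)

    print(occ_appraise("event", -0.4))  # ('displeased', 0.4)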
2.2
Connectionist model (Soar)
In this model, emotions are regarded as subconscious signals and evaluations that inform, modify, and receive feedback from a variety of sources, including higher cognitive processes and the sensorimotor system. Because the project focuses on decision-making, the model emphasizes those aspects of emotion that influence higher cognition [36].
The model uses two factors, clarity and confusion (Fig. 2.2), in order to
provide an agent with an assessment of how well it can deal with the current
situation. Confusion is a sign that the agent’s current knowledge, rule set and
abilities are inadequate to handle the current situation, while clarity comes
when the agent’s internal model best represents events in the world. Since
confusion is a sign of danger, it is painful and to be avoided, while clarity, indicating the safety of good decisions, is pleasurable and sought after.
Pleasure/pain, arousal and clarity/confusion form the basis of the emotional system. Instead of creating separate systems for fear, anger, etc.,
the authors assume that humans attach emotional labels to various configurations of these factors. For example, fear comes from the anticipation of
pain. Anxiety is similar to fear, except the level of arousal is lower [36].
The advantage of a generalized system is that it does not require specialized
processing.
Arousal is the main way that the emotional system interacts with the
cognitive system. Memory and attention are most affected by changes in arousal. Highly aroused people tend to rely on well-learned knowledge and habits even if they have more relevant knowledge available.

Figure 2.1: Soar 9 system architecture

Figure 2.2: Soar emotion subsystem
In the Soar implementation of this system, the rules that comprise the model have special conditions such that different kinds of rules fire only at certain levels of arousal. For example, highly cognitive rules will not fire at high levels of arousal, while more purely emotional rules may only fire at such levels [36]. This allows for a very general approach to emotions.
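The gating idea can be sketched in a few lines of plain Python (not Jess/Soar syntax; the rule tuples and arousal bands are our own illustrative assumptions):

    # Illustrative sketch: rules carry an arousal band and fire only when
    # the current arousal falls inside it, mirroring the Soar mechanism [36].
    def eligible_rules(rules, arousal):
        """rules: list of (name, min_arousal, max_arousal) triples."""
        return [name for name, lo, hi in rules if lo <= arousal <= hi]

    rules = [("deliberate_plan", 0.0, 0.6),   # highly cognitive rule
             ("flee_reflex", 0.7, 1.0)]       # purely emotional rule
    print(eligible_rules(rules, 0.8))         # ['flee_reflex']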
The Soar model emphasizes the effects of emotions on cognition and
decision-making, effects considered important and also present in our own
model.
2.3
Evolving emotions (PETEEI)
Emotion is a complex process often linked with many other processes, the
most important of which is learning: memory and experience shape the dynamic nature of the emotional process. PETEEI is a general model for
simulating emotions in agents, with a particular emphasis on incorporating
various learning mechanisms so that it can produce emotions according to its
own experience [27]. The model was also developed with the capability to
recognize and react to the various moods and emotional states of the user.
In order for an agent to simulate a believable emotional experience, it
must adapt its emotional process based on its own experience. The model
uses Q-learning, with the reward treated as a probability distribution, to learn about events, and uses a heuristic approach to define a pattern and thus the probability that an action a1 occurs given that an action a0 has occurred [27], thereby learning the user's patterns of actions. External feedback was used to learn which actions are pleasing or displeasing to the user. The application also uses accumulators to associate objects with emotions (Pavlovian conditioning). The accumulators are incremented by the repetition and intensity of the object-emotion occurrence.
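As a rough illustration of how these two mechanisms could fit together (PETEEI's exact update rules are not reproduced here; the class and method names, and the choice to sample rewards from past observations, are our own assumptions):

    import random
    from collections import defaultdict

    class PeteeiLikeLearner:
        """Toy sketch: Q-learning with the reward treated as a distribution,
        plus Pavlovian object-emotion accumulators, loosely after [27]."""

        def __init__(self, alpha=0.1, gamma=0.9):
            self.alpha, self.gamma = alpha, gamma
            self.q = defaultdict(float)             # (state, action) -> value
            self.rewards = defaultdict(list)        # (state, action) -> samples
            self.accumulators = defaultdict(float)  # (object, emotion) -> strength

        def observe_reward(self, state, action, reward):
            self.rewards[(state, action)].append(reward)

        def update_q(self, state, action, next_state, actions):
            # Treat the reward as a distribution: sample from past observations.
            observed = self.rewards[(state, action)]
            r = random.choice(observed) if observed else 0.0
            best_next = max((self.q[(next_state, a)] for a in actions), default=0.0)
            target = r + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

        def condition(self, obj, emotion, intensity):
            # Accumulator grows with repetition and intensity of co-occurrence.
            self.accumulators[(obj, emotion)] += intensity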
An evaluation involving twenty-one subjects indicated that simulating the
dynamic emotional process through learning provides a significantly more
believable agent [27].
The PETEEI study focuses on the importance of learning agents and
evolving emotions. Our memory and emotion architecture is built primarily
with these two aspects in mind.
2.4
EBDI
The EBDI [64] model proposes an extension to the BDI architecture in which agent behavior is guided not only by beliefs, desires and intentions, but also by emotions in the processes of reasoning and decision-making. The architecture is used to establish some properties that agents under the effect of an emotion should exhibit. The EBDI model describes agents whose behavior is dictated by interactions between beliefs, desires and intentions (as in the classical BDI architecture), but influenced by an emotional component. The emotional component attempts to impose some of the positive roles that emotions play in human reasoning and decision-making onto the artificial agent paradigm. EBDI introduces a multi-modal logic for specifying emotional BDI agents.
The logic of EBDI The temporal relation is established over a set of
elements, called situations. Situations are pairs (world, state), where the
world refers to a scenario the agent considers as valid, and the state refers to a
particular point in that scenario. Actions are a labelling of the branching time
structure and can be either atomic (cannot be subdivided into a combination
of smaller actions) or regular (constructions of atomic actions).
The model introduces two emotional driving forces: fear and fundamental
desire. Fear motivates agent behavior, while the notion of fundamental desire
plays a role in properly establishing the notion of fear. The emotions can be triggered either by an action which will lead to some wanted or unwanted situation, or by the belief that such situations may, or will inevitably, be true in the future.
Bounded reasoning The model also introduces the notions of capabilities and resources. Both capabilities and resources are prerequisites for successful action execution.
• Resources are physical or virtual means which may be drawn upon to enable the agent to execute actions. If the resources for an action do not exist, the action most likely cannot be executed.
• Capabilities are abstract means by which the agent can change the environment in some way (they resemble abstract plans of action). EBDI considers the set of capabilities to be a dynamic set of plans which the agent has available to decide what to do in each execution state.
Fundamental desires are special, vital desires of the agent, desires which it cannot fail to achieve under any condition, since failing them may put the agent's own existence at stake. Fundamental desires must always hold, and the agent must always do its best to keep them valid.
Fear is one of the most studied emotions. The triggering of this emotion is usually associated with the fact that one or more fundamental desires are at stake. However, this triggering does not always occur in the same way in all agents. It is based on these differences that different classes of Emotional-BDI agents can be modelled.
Fearful agents The fearful agent is an agent that always acts in a very self-preserving way. Negative emotions, such as fear, occur when unwanted or uncontrollable facts or events are present in the environment. In its example, EBDI only considers threats and unpleasant facts. Threats are facts or events that directly affect one or more fundamental desires of the agent, putting its self-preservation at stake.
• Dangerous: a threat is dangerous when the agent believes that some condition leads inevitably to the falsity of a fundamental desire, and also believes that this condition will inevitably be true in the future.
• Serious: a threat is serious if the same conditions as for a dangerous one hold, except that the agent believes the condition may eventually be true in the future, rather than always.
• Possible: a threat is possible if the fundamental desire is still at stake, but the agent believes that the condition may hold only eventually in the future.
Unpleasant facts are facts or events that may put one or more desires in a situation of non-achievement. The agent will attempt to protect its desires.
• Highly Unpleasant: something becomes highly unpleasant if the agent believes that the source of the unpleasantness will always occur in the future and will always put some desire at stake.
• Strongly Unpleasant: something becomes strongly unpleasant if the agent believes that the source of the unpleasantness will eventually occur in the future and will always put some desire at stake.
• Possibly Unpleasant: something becomes possibly unpleasant if the agent believes that the source of the unpleasantness will eventually occur in the future and, should it occur, may put some desire at stake.
The model also defines the following actions, which represent specific behavior exhibited by the agent under certain conditions.
• Self-preservation: the self-preservation behaviour is activated when the agent fears the failure of some of its fundamental desires. We can see this as an atomic action which mainly reacts to threats in a self-protective way. In EBDI, this special action is represented by spreserv.
• Motivated Processing Strategy: this processing strategy is employed by the agent when some desire which directs its behaviour must be maintained but may be at risk. This strategy is computationally intensive, as it must produce complex data structures for preserving desires.
• Direct Access Strategy: this processing strategy relies on the use of fixed pre-existing structures and knowledge. It is the simplest strategy and corresponds to minimal computational effort and fast solutions.
A fearful agent is an agent which exhibits a very careful behaviour, considering any threat as a fear factor and considering all uncomfortable events at the same level. Being careful, the agent also acts towards resolving threats and unpleasant facts with the best of its means, and even if the means for the best action are not available, the agent always tries to put itself in a safe, self-preserving condition.
2.5
ERIC
ERIC is an affective embodied agent for real-time commentary in many domains [86]. The system is able to reason in real time about events in its environment and can produce natural language and non-verbal behavior based on artificial emotion simulation. Efforts have been made to keep the expert system domain-independent (as a demonstration, ERIC was used in two domains: a horse race simulator and a multiplayer tank battle game).
The ERIC framework (Fig. 2.3) allows for the development of an embodied character that comments on events happening in its environment. The framework is a collection of Java modules that each encapsulate a Jess rule engine. All cognitive tasks are handled in the rule-based expert system paradigm. The ERIC framework also uses template-based natural language generation in order to create a coherent discourse.
Knowledge module The main job of the knowledge module is to elaborate information perceived from the environment (through the domain interface) into a world model that is the basis for the agent's natural language and emotional state generation. As with all rule-based expert systems, logical inference (by applying axioms) is used on percepts in order to elaborate the world model. This world model is then sent to the other ERIC modules in order to generate affect and language. Timestamps are used within the model in order to manage beliefs and allow ERIC to detect patterns.
Affective appraisal Emotions are used in order to enhance the character's believability as well as to convey information (i.e., excitement means that a decisive action is being performed). ERIC uses ALMA to model affect [31]. There are three features in the ALMA model:
• emotions (short-term) - usually bound to an event, action or object; they decay through time; ERIC uses the OCC model of emotions in order to deduce emotions from stimuli.
• moods (medium-term) - generally more stable than emotions; modelled as an average of emotional states across time; described by a vector in PAD (pleasure, arousal, dominance) space; intensity is equal to the magnitude of the PAD vector; the character's emotions change its mood according to the pull and push mood change function.
• personality (long-term) - a long-term description of an agent's behavior; modelled along five factors: openness, conscientiousness, extraversion, agreeableness and neuroticism; these traits influence the intensity of the character's emotional interactions and the decay of its emotional intensities.
ERIC makes the appraisals that ALMA requires by comparing events, objects and actions to the agent's desires and beliefs. The framework also separates goals and desires from causal relations. The cause-effect relations allow the model to appraise events, actions and objects that are not specified as goals or desires (the following four relations are used: leadsto, hinders, supports and contradicts). The agent also maintains a set of beliefs about other agents' goals and desires, enabling it to judge the desirability of an event for other agents in the scenario.

Figure 2.3: ERIC emotion system
The agent's affective state can be expressed through hand and body gestures, facial expressions and the selection of words and phrases used to fill in a natural language template.
In the ERIC model, affective output is generated from the dynamic appraisal of events, actions and objects related to goals and desires, acting as
an indicator of the agent’s progress and its opinion on events.
2.6
Deep
The DEEP project specifically designed this model to be used in the video
game industry to improve the believability of Non-Player Characters. The
emotion dynamics model takes into account the personality and social relations of the character. The model focuses on the influence of personality
in triggering emotions and the influence of emotions in inter-character social
interactions. Because of the specific nature of video games, the goal is not
to reach an optimal behavior, but to produce behaviors that are believable
from the player’s standpoint [54].
The proposed model (Fig. 2.4) is an adaptation of the OCC model based on the attitudes of NPCs towards actions, objects and other characters in the environment. Each character is described through its personality, defined by a set of traits, and its social roles, which determine its initial social relations with other characters.
Knowledge representation The knowledge representation model used by the DEEP project as a formal representation of the virtual environment in which the game takes place has been kept purposefully simple, in order to be easily handled by designers and developers who do not have a background in artificial intelligence. Actions are defined as couples (name_a, effect_a). A character is defined by two feature vectors: one of attitudes, which expresses its attitude towards each element in the lexicon (each possible element in the world), and one of praises, a feature vector of attitudes towards possible actions in the environment. Events are simply represented by a quadruplet (agent, action, patient, certainty), where certainty denotes the degree of certainty of the event.
Figure 2.4: Deep emotion system
Emotion representation Emotion representation is based on a simplification of the OCC model, in which 10 emotions represent an agent's emotional experience. Emotions are considered according to their triggering conditions: joy (distress) - a desirable (undesirable) event; hope (fear) - the expectation of a desirable (undesirable) event; relief (disappointment) - the non-occurrence of an expected undesirable (desirable) event; pride (shame) - a praiseworthy (blameworthy) action done by the agent; admiration (anger) - a praiseworthy (blameworthy) action done by another agent.
An emotional state is represented as a feature vector of values in [0,1]. The emotional stimulus of an event is the emotion that an agent should feel after the event, provided that the agent has a neutral personality. The OCC model posits that the intensity of an emotion is directly proportional to the desirability/undesirability factor of the event (and to its probability of occurrence).
Personality The personality of a virtual character is represented as an n-dimensional feature vector, each value corresponding to a personality trait. The emotion triggered by an event is computed from the emotional stimulus and the personality trait values. When no emotional stimulus occurs, a character's emotion intensity naturally decays towards the neutral state, following a monotonic function that depends on personality. The model also defines intensity thresholds, the minimal value an emotion must reach in order to have an impact on a character's behavior.
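A minimal sketch of these two steps, assuming (our choice, not the paper's) an exponential decay as the monotonic function and a simple element-wise personality scaling:

    import numpy as np

    def triggered_emotion(stimulus, personality_gain):
        """Scale the emotional stimulus by personality; vectors over the
        10 OCC-derived emotions, clipped to [0, 1]."""
        return np.clip(np.asarray(stimulus) * np.asarray(personality_gain), 0.0, 1.0)

    def decay(emotion, rate, dt=1.0):
        """Decay intensities toward the neutral state; the rate (and thus
        the decay curve) may depend on personality."""
        return np.asarray(emotion) * np.exp(-rate * dt)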
Social The relationship between two virtual characters i and j is represented as a quadruplet (liking, dominance, familiarity, solidarity): respectively, the degree of liking i has for j at time t; the power that i thinks it can exert on j at time t; the degree of familiarity, i.e. the average degree of confidentiality of the information that i has transmitted to j; and the degree of solidarity, representing the degree of like-mindedness, or having similar behavioral dispositions, that i thinks it has with j. Relationships are not symmetrical.
A social role rx/ry is defined by a quadruplet in which each element represents the degree of liking, dominance, familiarity and solidarity that a character with role rx feels for a character with role ry. A combination of several roles is computed using an average of the complementary roles.
The model has been implemented in Java as a tool for game designers to simulate the impact of a scenario on non-player characters. The vocabulary of the game, the NPCs' personalities, social roles and scenario events need to be defined.
The DEEP project motivates the use of emotionally-enhanced intelligent agents as virtual characters in video games in order to improve the player's feeling of immersion and enjoyment. The model takes into account personality and social relations. The heart of the model is context-independent: it can be applied to any domain that makes use of intelligent agents with an affective level.
This work is part of the DEEP project supported by RIAM (Innovation and research network on audiovisual and multimedia).
2.7
iCat
iCat is a system developed in order to study how emotions and expressive behavior in interactive characters can be used to help users better understand the game state in educational games. The emotion model is mainly influenced by the current game state and is based on the emotivector anticipatory mechanism. The hypothesis is that if a virtual character's affective behavior reflects the game state, users will be able to better perceive the game state [61].
Architecture The architecture is composed of three modules:
• The game module is made up of the game played by the user and the
virtual character. The game module is the main input for the emotion
module.
• The emotion module receives evaluations from the game module, processes them and updates the agent's emotional state.
• The animation module animates the character by setting a character position or running the character through predetermined animation sequences. The predefined sequences are used to display emotional reactions (each of the 9 elements of the emotivector is mapped to a different animation). Mood is reflected as a linear interpolation between the two parametrizations corresponding to the two extremes of the valence space.
Emotion The affective state is represented by two components: emotional reactions and mood. Emotional reactions are the immediate emotional feedback received by the character. In order to endow the agent with emotional reactions, the iCat uses the emotivector anticipatory system. The system contains a predictive model of itself and its environment that generates an affective signal resulting from the mismatch between the expected and received values of a sensor. This means that the system is either rewarded or punished when events do not go according to predictions, and this shows up through the displayed animation.
Mood is represented as a valence variable that varies between a minimum and a maximum value. The mood magnitude represents the mood intensity. Positive values are associated with good scores, and negative ones with bad ones. Mood works as a background affective state when other emotions are not occurring. Mood decays towards zero over time in the absence of new stimuli. Upon receiving a new stimulus, the adjustment is gradual until the desired value is reached.
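A sketch of this mood dynamic, with illustrative rate constants of our own choosing:

    # Hypothetical sketch of iCat-style mood: a single valence value that
    # decays toward zero without stimuli and moves gradually toward a
    # target value when a stimulus arrives.
    def update_mood(mood, target=None, decay_rate=0.05, approach_rate=0.2):
        if target is None:
            return mood * (1.0 - decay_rate)            # decay toward neutral
        return mood + approach_rate * (target - mood)   # gradual adjustment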
Evaluation The architecture was implemented in a scenario composed of a social robot, the iCat, which acts as the opponent of a user in a chess game. A preliminary experiment was performed in which the iCat played chess against a human player. The results (compared across a group where the iCat showed the proper emotions, a group where it showed wrong ones at random, and a control group where it showed no emotions) indicate that users were able to correctly interpret the iCat's emotional behaviour and that the correlation between the users' own analysis of the game and the actual game state variables is higher in the emotional condition.
The results of this preliminary evaluation suggest that the emotional behaviour embedded in the character indeed helped users form a better perception of the game [61].
17
2.8
Comparison
In this section we present a comparison of the emotion simulation models
presented relating to features we consider important.
• Knowledge whether the model influences agent knowledge
• Perception whether the model influences agent perception
• Behavior whether the model influences agent behavior
• Motivation if emotions are used as agent motivation within the model
• Personality if the model is able to generate different types of characters, characters that would act differently, given the same virtual world
state
• Theory how complex the theory is
• Dev how high the design, development and implementation effort would
be for someone to use the system
• Adoption how easily a system would be adopted by the game industry
• Graphics does the model have a way for artificial characters to express
their emotions?
• Emotion what emotion representation model the system uses
• Synthesis what emotion synthesis model the system uses
• Expandability how expandable the model is
• Scalability how scalable the model is
• Flexibility how flexible the model is
• Evolving if the emotion process and behavior are adapted based on
agent’s experience
As can be seen in Table 2.1, most systems use the OCC model for emotion expression and synthesis, a model which is difficult to learn and implement; this can be deduced from the fact that authors have adapted it by reducing the number of emotions from 21 to 10, and from the effort expended to formalize it [82, 83, 85]. We can also see from the table that, for most models, either the emotion process or the behavior is not adapted according to the agent's experience, and none of them use emotion as a driving force for a character's actions.
Name     Knowledge   Perception   Behavior   Motivation   Personality
Soar     yes         maybe        yes        -            -
PETEEI   yes         -            yes        -            -
EBDI     -           -            yes        -            -
DEEP     -           -            yes        -            yes
iCat     -           -            yes        -            -

Name     Theory (cpx)   Dev (cpx)   Adoption   Graphics   Emotion
Soar     simple         simple      med.       -          Pleasure/Pain
PETEEI   high           med.        med.       yes        OCC
EBDI     high           -           -          -          OCC
DEEP     high           -           -          yes        OCC
iCat     med.           simple      med.       yes        iCat

Name     Synthesis   Expandability   Scalability   Flexibility   Evolving
Soar     Soar        yes             yes           med.          static
PETEEI   OCC         -               -             low           dynamic
EBDI     OCC         yes             yes           high          static
DEEP     OCC         yes             yes           high          static
iCat     iCat        -               -             med.          static

Table 2.1: Emotion simulation model comparison
We have taken it upon ourselves to design and develop a fast, lightweight, easily understood and adopted emotion representation model; the system necessary for its integration into an artificial agent architecture, using emotion simulation as motivation for the virtual character's actions; and the underlying framework for its integration into our chosen demo application type.
Chapter 3
Newtonian Emotion System
(NES)
This chapter presents our new and improved emotion simulation technique, still based on Plutchik's emotion taxonomy. However, the emotion representation scheme has been streamlined, the emotion feature being reduced to a four-dimensional vector, and emotion interactions follow Newtonian interaction laws that are easier to learn and understand. The architecture has also been reduced to the bare minimum necessary for emotion simulation and expression, while still influencing character behavior and memory in the same way. This is because, while we believe that the best way to produce human-like behavior is to attempt to reproduce thought patterns, game developers, for example, tend to favor finite-state-machine-based solutions that are very fast and reduce development complexity. The new architecture allows for any behavior generation technique (belief-desire-intention, expert systems, behavior trees, finite state machines or even backtracking) and any machine learning algorithm (SVM, linear regression, clustering), in essence allowing designers to develop a system that suits them best while incorporating emotion simulation.
3.1
Newtonian emotion space
In this section we introduce the Newtonian Emotion Space, where we define
concepts that allow emotional states to interact with each other and external factors, as well as the two laws that govern the interaction between an
emotional influence and an emotional state.
Concepts
Definition 1. Position specifies the coordinates of an emotional state along each of the four axes.
Definition 2. Distance (||p2 − p1||) measures the distance between two emotional positions.
Definition 3. Velocity (v = (p2 − p1)/t) represents the magnitude and direction of an emotion's change of position within the emotion space over a unit of time.
Definition 4. Acceleration (a = (v2 − v1)/t) represents the magnitude and direction of an emotion's change of velocity within the emotion space over a unit of time.
Definition 5. Mass represents an emotional state's tendency to maintain a constant velocity unless acted upon by an external force; a quantitative measure of an emotional object's resistance to the change of its velocity.
Definition 6. Force (F = m · a) is an external influence that causes an emotional state to undergo a change in direction and/or velocity.
Laws of Emotion Dynamics The following are two laws that form the
basis of emotion dynamics, to be used in order to explain and investigate
the variance of emotional states within the emotional space. They describe
the relationship between the forces acting on an emotional state and its
motion due to those forces. They are analogous to Newton’s first two laws
of motion. The emotional states upon which these laws are applied (as in
Newton’s theory) are idealized as particles (the size of the body is considered
small in relation to the distance involved and the deformation and rotation
of the body are of no importance in the analysis).
Theorem 1. The velocity of an emotional state remains constant unless it
is acted upon by an external force.
Theorem 2. The acceleration a of a body is parallel and directly proportional to the net force F and inversely proportional to the mass m: F = m · a.
Emotion center and gravity The emotion space has a center, the agent's neutral state: a point in space towards which all of the agent's emotional states tend to gravitate. This is usually (0,0,0,0); however, different characters might be predisposed to certain kinds of emotions, so we should be able to specify a different emotional centre, instilling a certain disposition, for each character. We also use different emotion state masses to express how easy or hard it is to change an agent's mood. In order to represent this tendency we use a gravitational force:

G = m · kg · (p − c) / ||p − c||,

where p is the current position, c is the center and kg is a gravitational constant. This force ensures that, unless acted upon by other external forces, the emotional state will decay towards the emotion space center.
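The following Python sketch puts the two laws and the gravitational decay together; the class layout, the time step, the clipping of the state to [-1, 1] and the sign applied to the gravitational pull are our own assumptions, not part of the model's formal definition:

    import numpy as np

    class EmotionState:
        """Minimal sketch of a Newtonian Emotion Space particle; position
        and velocity are 4-vectors over the (joy, trust, fear, surprise) axes."""

        def __init__(self, mass=1.0, center=None, k_g=0.1):
            self.m = mass                      # resistance to emotional change
            self.c = np.zeros(4) if center is None else np.asarray(center, float)
            self.k_g = k_g                     # gravitational constant
            self.p = self.c.copy()             # current emotional position
            self.v = np.zeros(4)               # current emotional velocity

        def gravity(self):
            # Pull toward the neutral center, so the state decays toward it.
            d = self.p - self.c
            n = np.linalg.norm(d)
            return -self.m * self.k_g * d / n if n > 1e-9 else np.zeros(4)

        def step(self, external_force, dt=1.0):
            # Second law: a = F / m; integrate velocity, then position.
            a = (np.asarray(external_force, float) + self.gravity()) / self.m
            self.v += a * dt
            self.p = np.clip(self.p + self.v * dt, -1.0, 1.0)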
3.2
Plug-and-play architecture
We developed an emotion subsystem architecture (Fig. 3.1) that interposes itself between the agent's behavior module and the environment. The subsystem models the process described by Lazarus.
Events perceived from the environment are first processed by the appraisal module, where an emotional force is associated with each. The resulting list of events is then sorted in descending order of magnitude and fed into the agent's perception module. This is done in accordance with the psychological theory of attention narrowing, so that in a limited attention span scenario, the agent remains aware of the more meaningful events.
We treat the agent behavior module as a black box that may house any behavior generation technique (rule-based expert system, behavior trees, etc.). The interface receives as input events perceived from the environment and produces a set of possible actions. The conflict set module takes this set of actions and evaluates them in order to attach an emotional state to each. Out of these, the action whose emotion vector most closely matches the agent's current state is selected to be carried out:
arg min_{i ∈ ConflictSet} arccos( (e_agent · e_i) / (||e_agent|| · ||e_i||) )
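A direct transcription of this selection rule (a sketch; the zero-vector guard and the data layout are our additions):

    import numpy as np

    def select_action(agent_emotion, candidates):
        """Pick the action whose emotion vector makes the smallest angle
        with the agent's current state. candidates: (action, vector) pairs."""
        agent_emotion = np.asarray(agent_emotion, float)

        def angle(e):
            e = np.asarray(e, float)
            denom = np.linalg.norm(agent_emotion) * np.linalg.norm(e)
            if denom < 1e-9:
                return np.pi  # undefined direction: rank last
            return np.arccos(np.clip(agent_emotion @ e / denom, -1.0, 1.0))

        return min(candidates, key=lambda c: angle(c[1]))[0]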
Feedback from the environment has an associated emotion force vector that actually affects the agent's emotional state. Feedback is distributed to the appraisal and conflict set modules, as well as to the agent's behavior module. Actions undertaken are temporarily stored in an action buffer for feedback purposes.
3.3
NES Learning
Both appraisal modules (percept and action appraisal) use machine learning techniques in order to learn to predict the outcome of events and actions, respectively. As there are many viable options for the learning technique (hidden Markov models, neural nets, Q-learning), we use a plug-and-play model in which the technique can be replaced.

Figure 3.1: Emotion system architecture
The appraisal module attempts to predict the emotional feedback received from the environment based on the characteristics of the event to classify and on previous feedback. The goal of the appraisal module is to better label (and thus establish the priority of) events for the perception module. The conflict set module works in a similar way, based on the characteristics of actions taken. The goal of the conflict set module is to settle conflicts between competing actions by selecting the most appropriate action to be performed. The action with the greatest Gain − Risk value will be chosen.
Both modules treat the event or rule configuration, respectively, as the observable state of the model, and attempt to predict the next hidden state (the emotional feedback force).
Newtonian emotion system interaction is a three-stage process. Each inference turn, the behavior module generates a list of possible actions within the given context for the virtual character. Each action is then appraised by the action appraisal module (a wrapper interface for a machine learning technique that attempts to learn the feedback function). At this point the value is an absolute value, which needs to be related to the agent's current state. This is done by evaluating the impact of an emotional force on the agent's current state. Based on this impact factor, the actions to be carried out are selected and executed. Once executed, these actions receive emotional feedback from the environment, and the appraisal module that evaluated them is updated with the new values.
Let us take the following example: character A's behavior module generates, as the next action, an attack on character B. A simplified feature vector for the action is built, encoding information about the current context (feelings about the action, feelings about the target, and action direction). From previous experience, the agent knows that the action would raise its aggression level (feelings about the attack action lie on the anger and anticipation axes: [0,0,-.9,-.3]); we also assume that the agent is somewhat afraid of the character it is attacking ([0,0,.7,0]), and that it is the one performing the action, so the action direction is 1. Based on this feature vector, the appraisal module predicts an emotional feedback dominant on the anger and anticipation axes, [0,0,-.2,-.2]. We assume this action has a satisfactory Gain/Risk ratio and gets selected and executed. We also assume that it succeeds. According to the environment feedback formula we used, the result would be [0,0,-.2,-.3], which means that the estimation was close, but could be better. The feedback result is added to the learning module's data set, and the predictions are nudged in the direction of [0,0,-.2,-.3] − [0,0,-.2,-.2] = [0,0,0,-.1], for a better future prediction.
The perception appraisal module works in a similar way to appraise perceived actions; however, the percepts are sorted according to a factor based on either their distance to the agent's current emotional state (minimum) or their distance to the average emotional state of the events perceived this turn (maximum). This is so that an agent will prioritize events that it resonates with or that stand out.
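For a linear appraisal model, the nudge in this example amounts to a standard gradient-style update. A toy sketch follows; the learning rate and the linear form are stand-ins of our own, since the learner is pluggable:

    import numpy as np

    def appraisal_update(weights, features, prediction, feedback, lr=0.1):
        """Nudge a linear appraisal model toward the observed feedback.
        weights: (4, n_features) array; features: (n_features,) array;
        prediction and feedback are vectors over the four emotion axes."""
        error = np.asarray(feedback, float) - np.asarray(prediction, float)
        # In the example above: [0,0,-.2,-.3] - [0,0,-.2,-.2] = [0,0,0,-.1]
        weights += lr * np.outer(error, features)
        return weights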
Validation In order to validate the learning module, we randomly generated feature vectors with valid values, used the feedback formula to label them, and separated them into learning and validation data sets. These data sets were used to validate a linear regression learning model. The results can be seen in the following learning curves, which show the error between training and validation relative to the number of training examples. A slight oscillation can be seen in the convergence, due to the fact that some information may be missing in certain contexts.
Figure 3.2: Joy learning curve

Figure 3.3: Trust learning curve

Figure 3.4: Fear learning curve

Figure 3.5: Surprise learning curve
3.4
Personality filter
The personality filter allows designers to skew an agent's perception of events. It defines a bias in interpreting emotional forces when applied to the feedback received from the environment. The personality filter consists of a four-dimensional vector (each element matching an axis of the Newtonian emotion representation scheme) with values between -1 and 1, which scales the agent's emotional feedback according to personal criteria. For example, an agent may feel fear more acutely than joy, and its personality filter should reflect this by having a lower scaling factor for joy than for fear. Another agent may be extremely slow to anger; this would be reflected in the filter as a very low scaling factor on the anger axis. It would even be possible to change the sign altogether, so that some agents may feel anger instead of fear, or joy instead of sadness (leaving room for the representation of psychologically damaged individuals). The feedback applied to the agent's state would then be:

Feedback′ = Filter · Feedback

(3.1)
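Applied per axis, the filter is a single element-wise product; a sketch with illustrative values:

    import numpy as np

    def apply_filter(personality_filter, feedback):
        """Per-axis scaling of feedback over (joy, trust, fear, surprise)."""
        return np.asarray(personality_filter, float) * np.asarray(feedback, float)

    # Illustrative: a character that feels fear more acutely than joy.
    filt = np.array([0.3, 0.9, 1.0, 0.7])
    print(apply_filter(filt, [0.2, 0.0, -0.1, 0.3]))  # [0.06 0. -0.1 0.21]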
Together with the emotional center and the emotion mass, these are all the parameters that a character designer needs to interact with in order to generate varied virtual characters capable of rich, unique, nuanced emotion-driven behavior. As can be seen, the system would be easy to learn and use in the industry, thanks to the small number of parameters that influence it and the intuitive nature of these parameters and their interactions within the model.
3.5
Emotion as motivation
As mentioned before, the emotional feedback that an agent receives from the environment influences its behavior. This is achieved by selecting an action from the conflict set provided by the behavior module according to the feedback that the agent predicts from the environment.
In order to evaluate an action's effect on the agent's current state, we put forth the notion of the tensor product between two states, because of the information it gives about the influence that one axis has on another. This product can be interpreted as the stress produced by an emotional force on the current emotional state, showing how significant the force is relative to the current state of the agent. This is called the emotional impact, and the tensor product is computed between the two forces acting on the agent: the gravity force, keeping it in its current state, and the learned action force (the estimated feedback).
Stress = [Feedback] · [Gravity] = (f_joy, f_trust, f_fear, f_surprise)^T · (g_joy, g_trust, g_fear, g_surprise)

(3.2)
This allows the assessment of the influence that each emotion axis of the action has on those of the current state. In line with Lazarus' appraisal theory, emotional evaluation implies reasoning about the significance of the event, as well as the determination of the ability to cope with the event. Therefore we define two metrics for assessing emotional experiences: Well-being (Joy and Trust) and Danger (Fear and Surprise). Well-being measures the effect the action has on the positive axes (Joy-Trust) of the current state, thus establishing the gain implied by the action, while Danger describes the risk, as it evaluates the action in terms of its influence on the Fear-Surprise axes.
When choosing an action, an agent estimates how much the current well-being is influenced by the action relative to how much danger the action implies: a gain-risk ratio. In terms of the tensor product, this involves a column-wise computation with respect to the Well-being axes of the agent's current state (for Gain) and a row-wise computation with respect to the Danger axes of the action (for Risk).
Gain = f_joy · (g_joy + g_trust) + f_trust · (g_joy + g_trust) + f_fear · (g_joy + g_trust) + f_surprise · (g_joy + g_trust)

(3.3)

Risk = f_fear · (g_joy + g_trust + g_fear + g_surprise) + f_surprise · (g_joy + g_trust + g_fear + g_surprise)

(3.4)
Thus the Impact factor can be used to assign priorities to all actions in
order to resolve the conflict set.
Impact = Gain − Risk
(3.5)
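The whole computation fits in a few lines. In the sketch below we read the gravity value printed as -0.83 in Table 3.1 as -0.8, since that is the value that reproduces the Gain, Risk and Impact figures in Tables 3.1 and 3.2:

    import numpy as np

    def impact(feedback, gravity):
        """Impact = Gain - Risk (eqs. 3.2-3.5); axis order is
        (joy, trust, fear, surprise)."""
        f, g = np.asarray(feedback, float), np.asarray(gravity, float)
        stress = np.outer(f, g)        # eq. 3.2: tensor product
        gain = stress[:, :2].sum()     # eq. 3.3: Well-being (joy, trust) columns
        risk = stress[2:, :].sum()     # eq. 3.4: Danger (fear, surprise) rows
        return gain - risk             # eq. 3.5

    # Fourth row of Table 3.1: gain 2.6, risk 1.96, impact 0.64.
    print(impact([0, 0, -1, -1], [-0.5, -0.8, 0.16, 0.16]))  # ~0.64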
Let us assume that a character's current state is [.3,.5,-.1,-.1] (gravity = [-.5,-.8,.16,.16]). We will further assume that the agent finds itself in a conflictual situation, surrounded by other agents. We will also assume that the behavior module will only generate options to attack the other characters, and that the appraisal module predicts the correct feedback. In the table, the possible feedback options are various degrees of:
• fear and surprise (alarm) - the attack fails, but was not expected to
• anger and surprise (outrage) - the attack succeeds, but was not expected to
• anger and anticipation (aggression) - the attack succeeds, and was expected to
According to our feedback formula, we obtain the impact factors listed in Table 3.1. There we can see that the agent will prefer the two options that increase its aggression, will be neutral towards attempting to attack an agent that would surprise it, and will be reluctant to attack when it might fail.
As a further example, let us assume that the agent is in the same state and that it is surrounded by objects that it can pick up; the only actions generated are actions to pick up objects, and the joy felt when picking up an object is equal to value(object) / (value(object) + wealth(agent)).
As we can see in the second table (Table 3.2), for varying item values we get different priorities. The agent would prefer to pick up the most valuable item first.
Gravity                   Feedback         Gain    Risk    Impact
[-0.5 -0.83 0.16 0.16]    [0 0 0.2 0.3]    0.4     0.3     0.10
[-0.5 -0.83 0.16 0.16]    [0 0 -0.6 -0.6]  1.56    1.176   0.38
[-0.5 -0.83 0.16 0.16]    [0 0 -0.4 0.3]   0.13    0.098   0.03
[-0.5 -0.83 0.16 0.16]    [0 0 -1 -1]      2.6     1.96    0.64
[-0.5 -0.83 0.16 0.16]    [0 0 1 0.6]      -2.08   -1.568  -0.51
[-0.5 -0.83 0.16 0.16]    [0 0 1 0.3]      -1.69   -1.274  -0.42

Table 3.1: Attack impact table
Gravity                  Feedback       Gain    Risk   Impact
[-0.5 -0.8 0.16 0.16]    [0.1 0 0 0]    -0.13   0      0.13
[-0.5 -0.8 0.16 0.16]    [0.2 0 0 0]    -0.26   0      0.26
[-0.5 -0.8 0.16 0.16]    [0.3 0 0 0]    -0.39   0      0.39
[-0.5 -0.8 0.16 0.16]    [0.4 0 0 0]    -0.52   0      0.52
[-0.5 -0.8 0.16 0.16]    [0.5 0 0 0]    -0.65   0      0.65
[-0.5 -0.8 0.16 0.16]    [0.6 0 0 0]    -0.78   0      0.78

Table 3.2: Take impact table
Chapter 4
Emotion integration in game
design
In this chapter we present how the two architectures (the Emotion-Driven Character Behavior Engine and the Newtonian Emotion System) were integrated to provide non-player character behavior in two games: ALICE, a fire evacuation serious game meant to teach preteen students proper fire evacuation procedures, and OrcWorld, a demo game that showcases the capabilities of the NES.
4.1
Artificial intelligence in virtual environments
Research into virtual environments on the one hand and artificial intelligence on the other has mainly been carried out by two different groups of people, but some convergence is now apparent between the two fields. One of the great selling points of virtual environments is their immersion factor; researchers are therefore seeking to progress beyond visually compelling but essentially empty environments and to incorporate other aspects of virtual reality that require intelligent behavior.
Virtual characters that act intelligently based on their goals and desires, rather than standing still waiting for the user and interacting through pre-scripted dialog trees, bring a great deal of depth to the experience. Evolving virtual worlds are far more interesting than static ones. Applications in which activity takes place independently of the user, such as populating a virtual world with crowds, traffic, virtual humans and non-humans, and behaviour and intelligence modeling, become possible by integrating artificial intelligence and virtual reality tools and techniques.
This combination of intelligent techniques and tools, embodied in autonomous creatures and agents, together with effective means for their graphical representation and interaction of various kinds, has given rise to a new
area at their meeting point, called intelligent virtual environments [6].
4.1.1
Application domains
Games A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a visual device. Video games have been using virtual environments from the start. The popular movie Tron, in which a computer programmer is transposed into the virtual world of the same name, inspired several video games. The most popular virtual environments to date belong to video games (World of Warcraft, Second Life, The Sims). The video game industry has been using artificial intelligence techniques as well, so in essence the field of intelligent virtual environments has existed for decades; however, video games usually limit themselves to brute-force or very time-efficient techniques, and it is our opinion that the video game industry would have much to gain from the finesse offered by more refined artificial intelligence techniques.
Serious Games Another field that has recently gained momentum is that of serious games. More and more companies and state agencies are using serious games to train and educate employees or the general public, respectively.
respectively. A serious game is a game designed for a primary purpose other
than entertainment, such as education or training. Although they can be
entertaining, serious games are designed in order to solve a problem and may
deliberately trade fun and entertainment in order to make a serious point.
There are many advantages to using serious games in education. First
of all, computer game developers are accustomed to developing games that
simulate functional entities quickly and cheaply by using existing infrastructure. Secondly, game developers are experienced at making games fun and
engaging, automatically injecting entertainment and playability into their
applications, features that serious games, while meant to train or educate
the user, could also benefit from.
One artificial intelligence paradigm has been used extensively in the field of training and education, namely rule-based expert systems. These have expert knowledge in their domain of expertise and are able to create plans, explain actions and even reason non-monotonically [71].
Affective gaming has received much attention lately, as the gaming community recognizes the importance of emotion in the development of engaging games. Affect plays a key role in the user experience, both in entertainment and in serious games developed for education, training, assessment, therapy or rehabilitation. A significant effort is being devoted to generating affective behaviours in game characters, both player- and AI-controlled avatars, in order to enhance their realism and believability [39].
Games Emotion has become increasingly established, as the following examples illustrate: in The Sims, characters' actions are influenced by an emotive state, which results from their personality and events occurring during the game [4]; in Fahrenheit, players are able to influence the emotional state of characters indirectly [2]. There is also an increased interest in affective storytelling, that is, creating stories interactively.
E-Learning Education has always seemed a natural application domain for artificial intelligence and virtual reality technologies, because visual representation and social interaction play a large part in learning and human memory. A few applications are worth mentioning: in the Cosmo system, an artificial agent used emotional means to achieve teaching goals; ITSPOKE [3] tried to register students' emotional states and use them to tailor spoken tutorial dialogues appropriately; FearNot used emotion-driven virtual characters in order to teach lessons about bullying [17].
The growing need for communication, visualization and organization technologies in the field of e-learning environments has led to the application of virtual reality and the use of collaborative virtual environments [70]. Virtual reality, together with emotion-capable artificial cognitive agents, can greatly broaden the types of interaction between students and virtual tutors. As in conventional learning tasks, the computer can monitor students, responding to questions and offering advice in a friendly manner, while keeping the lesson's pace comfortable for the student.
Artificial agents are able to fulfill several valuable roles in a (virtual) e-learning environment: act as a virtual tutor that explains lessons and gives help, advice and feedback to students; act as a student's learning peer and/or participate in simulations, which is especially useful in team training scenarios; an artificial agent may also be used (known or unknown to the student) in order to monitor progress and personalize the student's learning experience. An artificial agent present in a virtual environment should possess the following capabilities: demonstrate and explain tasks, monitor students and provide assistance when it is needed, provide feedback to the student, explain the reasoning behind its actions, adapt (a plan) to unexpected events and be able to fill the roles of extras [70].
A virtual tutor needs a way of storing and manipulating domain-specific knowledge in a useful way in order to be able to explain to, and correct, students, and to justify its actions. In practice, the best way to build such a system is by using a rule-based system together with a justification-based truth maintenance system.
Steve [71] is an animated pedagogical agent for procedural training in virtual environments that teaches students how to perform procedural tasks. Steve is able to demonstrate and explain tasks, and to monitor students performing tasks, providing assistance when it is needed. It does so by making use of expert knowledge of the domain, stored in the form of hierarchical plans that contain a set of steps (the steps required to complete an action, which Steve can demonstrate and explain), a set of ordering constraints (so that Steve is able to provide assistance or intervene at any point during a student's performance of a task), and a set of causal links (which allow Steve to explain a certain action).
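To make the structure concrete, the following sketch shows how such a plan could be represented (a minimal illustration under our own naming; it does not reproduce Steve's actual data structures):

    # Sketch of a hierarchical task plan with ordering constraints and
    # causal links, loosely following the description of Steve above.
    # All class and field names are illustrative, not taken from Steve.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Step:
        name: str                                             # e.g. "open valve"
        substeps: List["Step"] = field(default_factory=list)  # hierarchical decomposition

    @dataclass
    class Plan:
        steps: List[Step]                          # what to demonstrate and explain
        ordering: List[Tuple[str, str]]            # (before, after) constraints
        causal_links: List[Tuple[str, str, str]]   # (producer, condition, consumer)

        def explain(self, step_name: str) -> str:
            # A causal link (p, c, s) justifies step p: it establishes
            # condition c needed by a later step s.
            for producer, cond, consumer in self.causal_links:
                if producer == step_name:
                    return f"{step_name} establishes '{cond}' needed by {consumer}"
            return f"{step_name} has no recorded justification"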
4.1.2 Techniques
In this section we present and discuss applications of well-known artificial
intelligence paradigms and techniques in virtual environment applications,
with a focus on games, serious games and e-learning.
Neural networks An artificial neural network, usually called simply a neural network, is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation.
In other words, a neural network is an interconnected assembly of simple
processing elements, units or nodes whose functionality is loosely based on
the biological neuron. The processing ability of the network is stored in the
inter-unit connection strengths or weights obtained by a process of adaptation
to or learning from a set of training patterns [35].
In most cases an ANN is an adaptive system that changes its structure
based on external or internal information that flows through the network
during the learning phase. Modern neural networks are non-linear statistical
data modeling tools. They are usually used to model complex relationships
between inputs and outputs or to find patterns in data.
Artificial neuron design is based on the initial model developed by McCulloch and Pitts, where each neuron has several inputs (synapses, x) and one output (axon, y). Each input has an associated weight (w); the weights store what the neuron has learned and allow it to adapt to the data that it is being trained on. The neuron computes an
internal activation potential:

    n_i = \sum_{j=0}^{n} w_{ij} \cdot x_j + b_i        (4.1)

and the activation of the neuron:

    y_i = \psi(n_i)        (4.2)
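Equations (4.1) and (4.2) translate directly into code; the following minimal sketch assumes a simple step activation function ψ, one possible choice among many:

    def neuron_output(weights, inputs, bias,
                      psi=lambda n: 1.0 if n >= 0 else 0.0):
        # Equation (4.1): the internal activation potential n_i
        n = sum(w * x for w, x in zip(weights, inputs)) + bias
        # Equation (4.2): the activation y_i = psi(n_i)
        return psi(n)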
Neural networks are capable of both supervised and unsupervised learning, and several error propagation and weight update algorithms exist. Neural networks process information in a fundamentally different way from traditional computers and are better suited to problems these currently find difficult, such as extracting the meaning of shapes in visual images, distinguishing between different kinds of similar objects (classification and association), and learning certain functions (including unknown functions).
In an MMORPG context, neural networks could act as fast, efficient classifiers (the costly process of training neural networks is usually done offline, while classification is usually fast) that help artificial agents classify and reason about objects in their environment, including users. For example, based on characteristics exhibited by user or artificial characters, such as faction, class, gender, build and background, an artificial agent could classify another agent as a threat or non-threat, or as a friend or a foe.
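For instance, a trained neuron of this kind could act as a simple threat classifier (a hypothetical illustration; the features and weights below are invented for the example and would normally result from offline training):

    # Hypothetical feature vector: [hostile_faction, relative_level, is_armed]
    weights = [0.9, 0.6, 0.4]            # made-up weights from "offline training"
    bias = -0.8
    features = [1.0, 0.5, 1.0]           # a hostile, armed character near our level
    threat = neuron_output(weights, features, bias)
    print("threat" if threat else "non-threat")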
Genetic algorithms Genetic algorithms are a search heuristic that mimics
the process of natural evolution, routinely used to generate useful solutions
to optimization and search problems. In a genetic algorithm, a population of individuals, each of which encodes a candidate solution to an optimization problem, evolves toward better solutions by using techniques inspired by natural evolution such as inheritance, mutation, selection and crossover.
Problems which appear to be particularly appropriate for solution by
genetic algorithms include timetabling and scheduling problems. Genetic
algorithms are also a good approach to solve global optimization problems,
especially domains that have a complex fitness landscape as the combination
of mutation and crossover operators is designed to move the population away
from local optima.
Within the intelligent virtual environment context, genetic algorithms may be used to generate walking motions for computer figures or bounding boxes, as well as to tackle any optimization problem artificial agents may face in which an exact solution is not strictly required and strict time constraints are imposed, such as planning.
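A minimal sketch of such a loop over bit-string individuals (a generic illustration under our own assumptions, with single-point crossover and bit-flip mutation) could look as follows:

    import random

    def genetic_algorithm(fitness, length=16, pop_size=30, generations=100,
                          mutation_rate=0.05):
        # Individuals are bit strings; fitness maps a bit string to a number.
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half of the population
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)      # single-point crossover
                child = a[:cut] + b[cut:]
                for i in range(length):                # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    # Example: maximize the number of ones in the bit string
    best = genetic_algorithm(fitness=sum)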
Self organizing systems Self-organization is a process in which structure and functionality at the global level of a system emerge solely from numerous interactions among the lower-level components of the system, without any external or centralized control. The system's components interact in a local context, either by means of direct communication or through observations of the environment, without reference to the global pattern.
Swarm intelligence is the collective behaviour of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. Such systems, also known as self-organizing systems, are typically made up of a population of simple agents interacting locally with one another and with their environment. The agents follow very simple rules, and
although there is no centralized control structure dictating how individual
agents should behave, local interactions between such agents lead to the
emergence of intelligent global behavior, unknown to the individual agents.
Natural examples of swarm intelligence include ant colonies, bird flocking,
animal herding, bacterial growth, and fish schooling [42].
Swarm intelligence as a field is based on algorithms developed to simulate
this type of complex behavior that emerges in the natural world, and thus,
the obvious application of these techniques in virtual environments would be
to simulate the behavior of similar creatures that inhabit the environment;
however, other uses are possible as well, such as using ant colony optimization in order to compute optimal paths the agent may take in the virtual
environment or using particle swarm optimization in order to create more
complex and versatile particle systems [53]. Other applications of swarm
intelligence are crowd simulation and ant-based routing.
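As an indicative sketch of how global behavior can emerge from such local rules, the following boids-style update (with arbitrarily chosen weights, and the neighborhood simplified to the whole flock) steers each agent using only cohesion, separation and alignment:

    def flock_step(positions, velocities, dt=0.1,
                   w_cohesion=0.01, w_separation=0.1, w_alignment=0.05):
        # Each agent reacts only to the other agents (no central controller).
        n = len(positions)
        new_vel = []
        for i in range(n):
            px, py = positions[i]
            # Cohesion: steer toward the center of the flock
            cx = sum(p[0] for p in positions) / n - px
            cy = sum(p[1] for p in positions) / n - py
            # Separation: steer away from agents that are too close
            sx = sum(px - p[0] for p in positions
                     if abs(px - p[0]) + abs(py - p[1]) < 1.0)
            sy = sum(py - p[1] for p in positions
                     if abs(px - p[0]) + abs(py - p[1]) < 1.0)
            # Alignment: match the average heading of the flock
            ax = sum(v[0] for v in velocities) / n - velocities[i][0]
            ay = sum(v[1] for v in velocities) / n - velocities[i][1]
            vx = velocities[i][0] + w_cohesion * cx + w_separation * sx + w_alignment * ax
            vy = velocities[i][1] + w_cohesion * cy + w_separation * sy + w_alignment * ay
            new_vel.append((vx, vy))
        positions[:] = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
                        for p, v in zip(positions, new_vel)]
        velocities[:] = new_vel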
Reactive agents Reactive agents are simple processing units that perceive and react to changes in their environment. The architecture consists of a perception component, a decision component and an action component. The behavior is quite simple: in a loop, the agent perceives the state of its environment (which may be complex and may include the states of other agents), selects an action to perform and then performs it, changing its environment's state. Reactive agents are able to reach their goal only by reacting reflexively to external stimuli. While limited, reactive agents operate in a timely fashion and are able to cope with highly dynamic and unpredictable environments, making them ideal for an artificial environment's lower, less intelligent artificial life forms, such as animals and unevolved humanoids.
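The perceive-decide-act loop described above amounts to the following schematic sketch (the environment object, with its sense and apply methods, is an assumed interface, not part of any cited system):

    class ReactiveAgent:
        def __init__(self, rules):
            # rules: list of (condition, action) pairs; a purely reflexive mapping
            self.rules = rules

        def step(self, environment):
            percept = environment.sense(self)        # perception component
            for condition, action in self.rules:     # decision component
                if condition(percept):
                    environment.apply(self, action)  # action component
                    return action
            return None                              # no rule fired this cycle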
Cognitive agents Artificial agents inhabiting a virtual environment, in which multiple aspects of the world are changing at any time, must be able to infer new knowledge about the world, make plans, accomplish goals and take decisions under uncertainty in real time, in order to challenge human users. This is what cognitive agents are able to do. Several approaches based on the belief-desire-intention model of agency have been designed and implemented successfully, while other approaches are being tested. The hierarchical goal decomposition used in backward chaining engines is a powerful tool, allowing agents to represent and solve complex problems [49], while forward chaining techniques allow exploration of the knowledge domain. Truth maintenance systems and fuzzy logic are two of the techniques that artificial agents may employ in order to reason under uncertainty, either by maintaining a coherent knowledge base (handling conflicting knowledge and maintaining a consistent set of beliefs) or by adding a certainty weight to knowledge items.
The cognitive agent model is very similar to the reactive agent model; however, the decision component now handles planning as well (the next action), while the interaction component adds a social dimension to agent interaction.
We have proposed an architecture that uses forward and backward chaining,
truth maintenance and emotion simulation to achieve fast decision-making
cognitive agents.
Expert systems Another subfield of artificial intelligence, expert systems use a knowledge base of human expertise on a specific problem domain. An expert system uses some knowledge representation structure to capture knowledge, typically gathered from human experts on the subject matter and codified according to that structure. Last but not least, the expert system is placed in a problem-solving situation where it can use its expertise. Successful e-learning applications have been developed in the past [71], where the expert system acts as a virtual tutor.
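For illustration, a bare-bones forward-chaining step over if-then rules (a sketch of the general mechanism only, with invented example rules; real expert systems add conflict resolution, explanation and non-monotonic reasoning):

    def forward_chain(facts, rules):
        # facts: a set of strings; rules: list of (premises, conclusion) pairs.
        # Repeatedly fire rules whose premises all hold, until a fixpoint.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [({"valve_open", "pump_on"}, "fuel_flows"),
             ({"fuel_flows"}, "engine_ready")]
    print(forward_chain({"valve_open", "pump_on"}, rules))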
Emotion simulation Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. Emotion integration is necessary in generating complex believable behavior, making the agent decision-making process less predictable and more realistic, as well as generating actions in time comparable to human reaction time. Emotion simulation allows virtual environments to host more realistic simulations of artificial characters, while virtual environments give emotion-capable artificial agents more expressive freedom because of the additional channels available.
There are several motivations for this research, relative to its intended
application. One approach is the simulation of emotions in conversational
agents in order to enrich and facilitate interactivity between human and machine. In e-learning applications, affective computing can be used to adjust
the presentation style of a virtual tutor when a learner is bored, interested,
frustrated or pleased. Psychological health services, an application domain
for virtual reality as well, can also benefit from affective computing: patients
would be able to act in a virtual world populated by interacting artificial
agents and be observed and diagnosed by a professional or learn how to
correct their own behaviour.
Our understanding of human emotion is best within the cognitive modality and most existing models of emotion generation implement cognitive appraisal, which is considered to be best suited for affective modeling in learning
and gaming applications.
The study of emotional intelligence has revealed yet another aspect of human intelligence. Emotions have been shown to have great influence on many human activities, including decision-making, planning, communication, and behavior. Emotion theories attribute to emotions several effects on cognitive, decision-making and team-working capabilities (such as directing/focusing attention, motivation, perception, and behavior towards objects, events and other agents).
One of our research papers proposes the use of emotion simulation as
an efficiency measure in cognitive agents, focusing attention/perception and
managing agent memory, as well as giving the agent a new expressive dimension [50].
Ontologies The term ontology denotes a branch of philosophy that deals with the nature and organisation of reality. Virtual reality could benefit from an ontology: each object that exists in virtual space could have attached to it a definition using domain-specific ontological terms. This could improve the virtual environment for both human users and artificial ones. The downside is that an ontology would have to be defined, and each object in the scene graph would be doubled in an ontology graph, taking up more memory.
In computer science, an ontology is a formal representation of knowledge as a set of concepts within a domain, and the relationships between those concepts. An ontology is necessary in order for agents to have a shared understanding of a domain of interest, a formal and machine-manipulable model of said domain. It is used to reason about the entities within that domain, and greatly enhances symbolic reasoning. Ontologies aid in inter-agent communication, definitions being exchanged between agents until a consensus is reached. Ontologies are usually defined in such a way as to be easily interpreted by humans as well, and may be used to bridge the gap created by the deficiencies in the field of natural language processing, making interaction between human and artificial agents easier. One may also limit the options human agents have in expressing themselves, so that they become indistinguishable from artificial agents, increasing immersion. However, even without limiting user expressivity, which is not a desired feature in virtual environments, an ontology would still increase the quality of human-computer interaction.
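As a toy illustration of such a machine-manipulable model (a sketch with invented domain terms), consider a fragment of concept triples and a transitive is_a query over it:

    # Concepts and relations represented as subject-predicate-object triples.
    ontology = {
        ("Orc", "is_a", "Humanoid"),
        ("Humanoid", "is_a", "Creature"),
        ("TreasureChest", "is_a", "Container"),
    }

    def is_a(concept, ancestor, triples):
        # Follow is_a links transitively, so all agents share one interpretation.
        parents = {o for s, p, o in triples if s == concept and p == "is_a"}
        return ancestor in parents or any(is_a(q, ancestor, triples)
                                          for q in parents)

    print(is_a("Orc", "Creature", ontology))   # True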
4.2 Orc World
During an internship at the Laboratoire d'Informatique de Paris 6, Université Pierre et Marie Curie, Paris, France, we devised and implemented a game on top of the virtual character framework to test the Newtonian Emotion System. In the following sections we define the scenario and game rules, present the actions available to agents in the game and their emotional feedback, and detail the agent behavior generation and learning algorithms we used.
4.2.1 Scenario and rules
The premise of the game is simple. We created a virtual game world inhabited by orcs. Each orc has a treasure stash that it must increase (in essence, hoarding items and valuables). The orcs may do this either by foraging in the wilderness, by stealing from other orcs or by pillaging other orcs' stashes; however, to do the latter they must leave their own treasure unprotected. This scenario provides the same goals for all virtual characters involved (survive and hoard) and allows us to use the same behavior generation module for all characters, showing the difference in behavior based on personality characteristics and character experience. The scenario also provides us with a balanced mix of social, physical and environment interactions.
In order to determine the success or failure of actions, we used a very simplified version of the Dungeons & Dragons 3.5 edition rules [11]. The main reason behind this choice was that the Dungeons & Dragons ruleset contains rules that govern social interactions as well as physical ones. Alg. 1 describes the turn sequence.
Figure 4.1: Round sequence diagram

Algorithm 1 Turn sequence
for all character ∈ characters do {determine turn order}
    character_initiative = roll(d20)
end for
for all character ∈ characters do
    Update(character_percepts)
    Generate(character_actions)
    Appraise(character_actions)
    Run(character_actions)
end for
for all character ∈ characters do
    Generate(character_feedback)
    Process(character_feedback)
end for

4.2.2 Actions and feedback

Possible actions within the game world and their associated effects are:

• Attack The character attacks another character with the goal of disabling it.
– success: the attack hits, inflicting damage (decreasing the health)
of the opposing character
– failure: the attack misses, however the target is aware of the
attack
– feedback: this action influences the anger and anticipation axes
(aggression)
• Social The character attempts, through social interaction, to impose its will and have another character perform an action.
– success: the interaction succeeds and the target performs the desired action next turn
– failure: the interaction fails and the target is aware of the attempt
– feedback:
∗ Bluff the target is tricked into performing the action; feedback
is on the trust and joy axes (disgust)
∗ Diplomacy the target is led to believe that it is in its best
interest to pursue the action; feedback is on the positive side
of the joy and trust axes (love)
∗ Intimidate the target is bullied into carrying out the action;
feedback is on the trust and anger axes (dominance)
• Move The character moves to a new destination.
– success: always succeeds
– feedback: this action receives feedback either according to the
area the character is moving to (which is an average of the feedback
of all the actions that a character has experienced in the area), or
on the joy and surprise axes if it is a new area
• Take The character attempts to take an item from a container.
– success: The character succeeds in taking the item
– failure: The action fails if the container is locked, the character
is caught stealing or if the container does not contain the item
– feedback: emotional feedback is on the joy axis and is directly
proportional to the acquired object’s value scaled by the character’s
current wealth
• Lock The character attempts to lock or unlock a container (fails or
succeeds based on the character’s skills; no explicit feedback ).
• Equip The character equips an item (fails if the character is unable to
equip the item, or not in possession of the item; no explicit feedback ).
• Sneak The character enters or exits sneak mode (in sneak mode a character is less likely to be detected and is able to steal items using the take action; however, its speed is halved; no explicit feedback).
The Sneak, Equip and Lock actions do not receive feedback for being executed; however, all feedback is distributed to previous actions based on a geometric progression, so that if, for example, a character were to unlock a container and then receive a valuable item, the joy felt from the item would also influence the previous Lock action, making it pleasurable as well, thus acknowledging its contribution to the action sequence that led to acquiring the item. For example, let's assume that an agent executes the action sequence {Sneak, Move, Unlock, Take(item)} and ends up picking up an item that increases its joy by [.32,0,0,0]. The first three actions (Sneak, Move and Unlock) do not receive any feedback for being performed; however, they will receive a fraction of the feedback for the Take action, based on the assumption that the sequence led to it: Unlock will receive [.16,0,0,0], Move [.08,0,0,0] and Sneak [.04,0,0,0], reflecting the synergy existing between these actions.
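This backward distribution can be sketched as follows (assuming, as in the example above, a ratio of one half per step back in the sequence):

    def distribute_feedback(history, feedback, ratio=0.5):
        # history: earlier actions, most recent last; feedback: the 4-axis force
        # [joy, trust, fear, surprise] received by the final action.
        shares = {}
        share = [f * ratio for f in feedback]
        for action in reversed(history):
            shares[action] = share
            share = [f * ratio for f in share]
        return shares

    # {Sneak, Move, Unlock, Take(item)} with Take receiving [.32, 0, 0, 0]:
    print(distribute_feedback(["Sneak", "Move", "Unlock"], [0.32, 0, 0, 0]))
    # -> Unlock gets [0.16, ...], Move [0.08, ...], Sneak [0.04, ...]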
Emotional feedback for the Attack, Social and Move actions is generated
according to the following formula, where success and direction ∈ [-1,1]:
    feedback = success \cdot (direction \cdot action_{emotion} + \sum_{params} param_{emotion})        (4.3)
The success parameter is ignored when an agent is convinced the action is in the best interest of both parties involved (for example, in the case of diplomacy), the assumption being that the proposing agent meant well regardless of the result (the result may still be skewed unfavourably by the backward feedback distribution).
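Equation (4.3) translates directly into vector arithmetic on the four emotion axes; the sketch below is our reading of it, with the diplomacy exception exposed as a flag:

    def action_feedback(success, direction, action_emotion, param_emotions,
                        ignore_success=False):
        # success and direction are in [-1, 1]; all emotion forces are
        # [joy, trust, fear, surprise] vectors, as in Table 4.1.
        s = 1 if ignore_success else success   # e.g. a well-meant diplomacy attempt
        total = [direction * a for a in action_emotion]
        for p in param_emotions:               # sum over the action's parameters
            total = [t + x for t, x in zip(total, p)]
        return [s * t for t in total]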
Table 4.1 presents the base emotion force feedback used for the actions available to agents in OrcWorld (column S denotes whether the action was successful and column D its directionality, either source or destination), while Table 4.2 presents some examples of action instances, their computed feedback and its meaning (where the S column denotes the action's success (1 for success, -1 for failure), and the D column its directionality (1 for outgoing, -1 for incoming)).
For example, if an agent uses the Intimidate action (which has feedback on the Trust and Fear axes) on an agent that makes it feel proud, in order to scare it away and steal its treasure, and succeeds, the resulting emotion felt, according to the feedback equation, would be a combination of pride and dominance. The emotion felt by the virtual character on the other end of the action, depending on its feelings towards the acting agent, would be despair, shame or remorse.
Perception module A character in the virtual environment can perceive other characters, actions and environment objects through the perception module. A virtual character can have several senses that it uses to make sense of its surroundings. Fig. 4.2 shows a character that has two senses (virtual sensors), sight and hearing, the area each of them covers and how they overlap. The character gathers new knowledge through a process called multi-source information fusion.

Figure 4.2: Character perception sensors

Fig. 4.3 presents the information fusion process, showing how multiple knowledge items (each from a different sensor) representing the same virtual environment object are integrated into a consistent representation. In essence, when new information about an object is perceived, the knowledge item pertaining to it is updated.

Figure 4.3: Information fusion
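A schematic of this update could look as follows (a sketch only; the merge policy, in which newer attribute values overwrite older ones, is an assumption of ours):

    def fuse(knowledge_base, percepts):
        # percepts: (object_id, sensor, attributes) tuples from several senses.
        # Items about the same object are merged into one knowledge item.
        for object_id, sensor, attributes in percepts:
            item = knowledge_base.setdefault(object_id, {})
            item.update(attributes)                    # newer values overwrite older
            item.setdefault("seen_by", set()).add(sensor)
        return knowledge_base

    kb = {}
    fuse(kb, [("orc_7", "sight", {"position": (3, 4)}),
              ("orc_7", "hearing", {"noise": "footsteps"})])
    # kb["orc_7"] now holds one consistent representation built from both sensors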
4.2.3 Learning module
Feedback from the environment has an associated emotion force vector that affects the agent's state. The agent attempts to maximize its Gain (the joy and trust axes) and to minimize its Risk (the fear and surprise axes).
Table 4.1: Action feedback table. For each action (Attack, Bluff, Diplomacy, Intimidate, Move and Take) the table lists its direction (src or dst), its outcome (success or failure) and the resulting base emotion force on the Joy, Trust, Fear and Surprise axes: the social actions carry ±0.1 forces on their characteristic axes, Attack feedback scales with the damage/health ratio, Take feedback scales with the value/wealth ratio, and Move feedback matches the area the agent is moving into.
Table 4.2: Action feedback generation example table. The table gives example instances of the Attack (0,0,-1,-1), Diplomacy (1,1,0,0) and Intimidate (0,1,-1,0) actions, combining success S (1 or -1), direction D (1 or -1) and a parameter emotion such as pride (1,0,-1,0), guilt (1,0,1,0) or delight (1,0,0,1) into a computed feedback force and its named emotion; for instance, a successful outgoing Intimidate against a pride-inspiring target yields (1,1,-2,0), pride and dominance, while the target's resulting force (-1,-1,2,0) reads as despair and shame.
The agent will thus choose actions that increase the former while decreasing the latter; however, the agent does not know in advance what the feedback will be. It is the learning module's task to attempt to predict it, based on the current context. The better the prediction, the better informed a decision the agent can make.
We chose to use linear regression to predict feedback in our application.
Linear regression is a way of modelling the relationship between a dependent
variable and one or more explanatory variables. The data is modelled using a
linear combination of the set of coefficients and explanatory variables. Linear
regression is a supervised machine learning technique that estimates unknown
model parameters from the data.
In our case, the algorithm attempts to learn the feedback formula explained above. The dependent variable is the emotional force received by the agent, while the explanatory variable is a feature vector constructed from the action's context (Fig. 4.4). The structure of this vector is made up of three segments:
1. Base Contains the basic information about the action.
(a) Action The emotion force associated with the action
(b) Action direction Denotes whether the action is incoming or outgoing
(c) Area The emotion force associated with the area in which the action takes place
2. Agents Contains information about the agents involved in the action.
(a) Multiple The number of agents taking part in the action
(b) Agents The average emotion force associated with the agents taking part in the action
(c) Coherence The coherence of the emotion forces associated with the agents taking part in the action (coherence = 1/variance)
(d) Equipment The average emotion force associated with the equipment carried by the agents
3. Objects Contains information about the objects (other than equipment, i.e. containers, traps) involved in the action.
(a) Multiple The number of objects involved in the action
(b) Objects The average emotion force associated with the objects involved in the action
(c) Locked The number of locked containers involved in the action
(d) Coherence The coherence of the emotion forces associated with the objects involved in the action (coherence = 1/variance)
(e) Blocked The average emotion force for all agents blocking containers involved in the action

Figure 4.4: Context feature vector
The algorithm then receives the actual feedback from the environment and modifies its weights according to the difference between the estimated feedback force and the actual feedback force.
Following Lazarus’ synthesis, emotions are integrated in the game mechanism as a 3-phase process, by which the agent interacts with the current
49
Figure 4.4: Context feature vector
game context in terms of emotions. Every action is first estimated, by means
of Machine Learning, to produce the value of emotional force it implies. At
this point, no reference is made to his current state, so it represents a quantitative exploration of his action possibilities, and can be viewed as the Agent’s
emotional knowledge. A qualitative evaluation follows, where the Agent determines the effect each action will have on his current state and chooses one
accordingly. He sets priorities to the actions he is able to perform at the
given time, therefore the outcomes of this phase can be viewed as emotional
meaning of the actions in respect to his current state. Upon choosing an
action he is confronted with the real force which determines his next state.
This last phase has a normative role on the Agent’s performance, as the new
state is also influenced by the quality of the estimation.
4.2.4 Behavior module
For the behavior generation module, we have chosen to use Behavior Trees.
Behavior Trees are a framework for game AI that model character behaviors
extremely well, being simple and highly customizable at the same time. Behavior trees are one abstraction layer higher than Finite State Machines and
combine the best features of linear scripts and reactive state machines while
at the same time allowing purposeful behavior similar to planning techniques
[12–14].
A behavior tree is a form of hierarchical logic, made up of behavior nodes. The overall behavior is a task that breaks down recursively into subtasks, which in turn break down into even lower-level tasks until the leaves of the tree are reached. The leaves provide the interface between the artificial intelligence logic and the game engine. As leaf node behavior we only use the Decorator, which checks a condition and, if this condition is true, attempts an action that makes changes in the environment. Complexity is built up in the branches of the tree. The branches are responsible for managing the child nodes. This is done using Composite tasks, which are tasks made up of multiple child tasks. We use four types of Composite tasks: Sequence, Selector, Loop and Concurrent. Fig. 4.6 shows the behavior tree management system hierarchy.
Figure 4.5: Action evaluation
Figure 4.6: Behavior tree
Sequences are responsible for running their child tasks in sequence one
by one. If one child fails, the whole sequence fails. Selector nodes run all
child nodes until one either succeeds or returns a running state. Concurrent
nodes visit all of their children during each traversal. Loop nodes behave as
sequences, however they loop around when reaching their last child.
To update a behavior tree, it is always traversed beginning with its root node, in depth-first order.
Algorithm 2 Behavior
if status == INVALID then
Initialize()
end if
status = Update()
if status != RUNNING then
Terminate(status)
end if
return status
Example Let’s assume we have the following behavior tree for OrcWorld:
1. Root (Selector )
(a) Guard Treasure (Decorator )
i. Is enemy near treasure? (Condition)
52
Algorithm 3 Decorator
if Condition() then
return Action()
else
return FAILURE
end if
ii. Intimidate enemy! (Action)
(b) Pillage Treasure (Sequence)
i.
ii.
iii.
iv.
Know any treasure? (Condition)
Go to treasure (Action)
Intimidate guard (Action)
Steal treasure (Action)
(c) Idle (Decorator )
All nodes start out as ready. At the first tree update, the ready Root node is run. The Selector node checks all children from first to last until a child node returns a success or running state; it fails if no such child can be found. The Guard Treasure node is run, which is a Decorator made up of a condition and an action. The Is enemy near treasure? condition fails because there is no enemy visible, and thus the whole Guard Treasure node fails. The next node to be run is the Pillage Treasure node. The sequence node starts by running its Know any treasure? node. A target treasure is chosen successfully and the node returns success. The next node in the sequence is the Go to treasure node. As the treasure is far away, it cannot be reached in this step, so the node returns running. Its parent, Pillage Treasure, also returns running.
Algorithm 4 Selector
while true do
    status = Current.Tick()
    if status != FAILURE then
        return status
    end if
    if Current.MoveNext() == false then
        return FAILURE
    end if
end while
For the next iteration, all nodes that are not currently running are set to ready. The Guard Treasure node is visited and fails again; however, when the Pillage Treasure node is run again, it picks up at the Go to treasure node, where it left off in the sequence.
In OrcWorld, we use a behavior tree for each character in order to generate all possible actions, which are then added to an action module, where they are appraised and pruned according to their Gain/Risk ratio and added to an execution queue. Since the nature of the game is turn-based, all actions are run until completion.
In order to select an appropriate action, the agent computes the state resulting from applying each Action Force to its Initial State and compares the Impact of all actions. It is then given feedback for the action it selected, representing the real force of the action. Confronting this force with the force estimated via machine learning results in a Regret Force, which is then applied to the real force in order to compute the Current State of the agent (Fig. 4.7).
Algorithm 5 Sequence
while true do
status = Current.Tick()
if status != SUCCESS then
return status
end if
if Current.MoveNext() == false then
return SUCCESS
end if
end while
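Algorithms 2-5 map onto, for example, the following skeleton (a Python sketch of the same logic; the actual framework was written in C# for Unity, and resetting non-running nodes to ready between traversals is left to the surrounding framework):

    from enum import Enum

    Status = Enum("Status", "INVALID RUNNING SUCCESS FAILURE")

    class Behavior:
        def __init__(self):
            self.status = Status.INVALID

        def initialize(self): pass
        def update(self): return Status.SUCCESS
        def terminate(self, status): pass

        def reset(self):
            self.status = Status.INVALID             # "set to ready" between turns

        def tick(self):                              # Algorithm 2
            if self.status == Status.INVALID:
                self.initialize()
            self.status = self.update()
            if self.status != Status.RUNNING:
                self.terminate(self.status)
            return self.status

    class Decorator(Behavior):                       # Algorithm 3
        def __init__(self, condition, action):
            super().__init__()
            self.condition, self.action = condition, action

        def update(self):
            return self.action.tick() if self.condition() else Status.FAILURE

    class Composite(Behavior):
        def __init__(self, children):
            super().__init__()
            self.children = children
            self.current = 0

        def initialize(self):
            self.current = 0                         # restart when re-entered as ready

    class Selector(Composite):                       # Algorithm 4
        def update(self):
            while True:
                status = self.children[self.current].tick()
                if status != Status.FAILURE:
                    return status                    # SUCCESS or RUNNING stops the walk
                self.current += 1                    # MoveNext()
                if self.current >= len(self.children):
                    return Status.FAILURE

    class Sequence(Composite):                       # Algorithm 5
        def update(self):
            while True:
                status = self.children[self.current].tick()
                if status != Status.SUCCESS:
                    return status                    # FAILURE or RUNNING stops the walk
                self.current += 1                    # MoveNext()
                if self.current >= len(self.children):
                    return Status.SUCCESS

Because a RUNNING composite is not re-initialized on the next tick, it resumes at its stored child index, matching the Pillage Treasure walkthrough above.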
Figure 4.7: Decision flow
Chapter 5
Conclusion and future work
This section presents our conclusions, a summary of the contributions and
interesting future research directions.
5.1 Conclusion
The main goal of this work has been to develop an emotion simulation model
and agent architecture that provides artificial characters in virtual environments with believable behavior in order to enhance virtual environment experiences for human users; this means that emotions should act as motivation
for agent behavior, influence and improve perception, reasoning and decisionmaking.
The main objective of this project has been the proposal of an original architectural solution: an emotion representation and virtual character framework that possesses the following characteristics:
• easily expandable; independent of the AI techniques used to endow the virtual characters with behavior
• simple, with few algorithm parameters; easy to learn, use and implement
• accomplishes all effects of emotion on behavior according to theory, using emotions as motivation
• easily integrated with a game engine
We first put forth the Emotion-Driven Character Reasoning Engine, a Belief-Desire-Intention-inspired architecture that replicates our understanding of the functioning of the human mind at an abstract level. We use Plutchik's emotion taxonomy and Lazarus' appraisal model to endow virtual characters with an emotional layer. The architecture gives an in-depth specification of how emotions interact with and influence other cognitive processes in the BDI architecture, such as perception, memory management, fact recall and rule application. The main contribution of this model is that emotion is not merely tied to the agent's learning and reasoning process, but is a key component of it, providing agents with the underlayer for simulated emotion and making the reasoning process more akin to a human's, achieving complex, believable behavior for virtual agents. Another important contribution of this model is the improvement of agent performance and effectiveness through emotion simulation, by speeding up relevant rule and fact recall, as it is currently believed emotions do in humans.
Further on, we refined the emotion representation scheme used in the Emotion-Driven Character Reasoning Engine into the Newtonian Emotion System, a representation model vectorial in nature, easily integrated into computational environments and firmly grounded in Plutchik's theory of emotion. We have also developed laws that dictate how emotions interact with each other and with external emotional forces. This representation scheme is easily understood by technically minded people and should pose no problems during implementation or authoring, helping with its adoption by developers and designers. The NES is fast and lightweight, and we have also shown that it is very flexible, being used to represent emotional states, moods and personality. The model is also expandable, allowing, for example, the creation of a more complex personality description through several personality filters, each applied in certain cases. The NES achieves this speed and low overhead without sacrificing any expressivity.
The artificial intelligence technique used needs to match the agent's, the game's and the environment's capabilities. This is why we took a step back from the BDI paradigm, as not all applications require the same complex and in-depth reasoning process, and developed the Plug-and-Play architecture as a generic means of providing an emotion layer to any virtual character architecture, acting as an interface between the character's behavior layer and the environment. The system still functions according to psychological theory, influencing the way a character perceives the environment. The system does not influence reasoning; however, it does influence the agent's actions through the motivation mechanism. A generic interface architecture frees developers and designers to make the rest of the architecture as simple or as complex as necessary. The behavior generation module may be as simple as a finite state machine or as complicated as the belief-desire-intention architecture, with everything from decision trees to behavior trees in between. The appraisal module is again left to the developer, the only requirement being that it attempt to predict an action's feedback or evaluate a percept correctly based on experience. We have considered several machine learning techniques that can be used, depending on the constraints imposed, such as linear regression, support vector machines and clustering. By using the Plug-and-Play interface proposed in this thesis, the game logic can be kept as simple as possible, as there is no need to adapt the game engine architecture to accommodate computationally expensive artificial intelligence techniques.
To demonstrate the system, we developed a computer game artificial intelligence framework using the Unity 3D game engine. The framework provides the underlying mechanisms for a character's existence and interaction with a game world, including a behavior module. To ease behavior management and generation, we implemented a behavior tree library in C# to be used with Unity. We provide an example of how to integrate the Newtonian Emotion System and the Plug-and-Play subsystem with the mentioned framework in a computer role-playing game. We chose computer role-playing games as an application domain due to personal preference; however, the system can be used in any type of application that requires emotion simulation. Moreover, other applications may not have the processing power restrictions that running alongside CPU-intensive special effects and physics simulations imposes, and would leave more computing power for the artificial intelligence module.
We have provided virtual characters with the means of artificial expression and methods for its integration into intelligent artificial agent architectures. Both systems are firmly grounded in theory, respecting Plutchik's taxonomy and Lazarus' appraisal model, replicating the way emotions influence human perception and motivating a character's actions in the environment. We have developed a game artificial intelligence framework and demonstrated how easily the Plug-and-Play architecture is integrated with the AI framework as well as with the game engine used. We have also succeeded in keeping the system complexity down in order to make the system more accessible and easy to use and adopt. Last, but not least, the Plug-and-Play system is scalable (as the system only intervenes within the agent's context, the overhead is linear) and both the Newtonian Emotion System and the Plug-and-Play interface are easily expandable.
5.2 Contribution
The main contributions of this thesis are as follows:
• We have conceived a model of emotional BDI (Belief-Desire-Intention)
agents to be used as the basis for the construction of believable virtual
characters.
• We have conceived and developed the emotion-driven virtual character BDI reasoning engine that replicates the way we understand and believe human cognition works and is influenced by emotions.
• We have proposed the Newtonian Emotion model and the associated
Newtonian Emotion System, an emotion representation scheme easily
integrated and used in a computational environment, firmly grounded
in Plutchik’s theory of emotion.
• We have conceived and implemented an artificial emotion appraisal process grounded in Lazarus' leading theory in the field, which was used in tailoring the behavior of believable virtual characters.
• We have designed and implemented a Plug-and-play architecture for
the integration of affective computing into intelligent artificial agent
systems.
• We have proposed a framework for the integration of intelligent believable behaviour into computer games (specifically computer role-playing
games).
• We have conceived and implemented a behavior tree framework that
manages the execution of behavior generation nodes for virtual characters.
• We have developed an action module as part of a demo, that resolves
a virtual character’s actions and their consequences and feedback.
• We have conceived and validated a model in which emotions act as
motivation for virtual character actions and in which emotions are used
as heuristics in order to speed up event evaluation and decision-making.
• We have developed an emotion simulation module where a character’s
personality and past experience (through machine learning) influence
its reasoning and decision-making.
• We have designed and developed a generic, domain-independent artificial emotion simulation system, which is simple, easy to learn, adopt
and integrate.
• We have proposed a set of methods for the graphical representation of
artificially generated emotions.
• We have investigated several general theories of emotions and computational theories of affective reasoning, and we have compared and discussed their merits and drawbacks.
Publications
• V. Lungu and A. Baltoiu. Using Emotion as Motivation in the Newtonian Emotion System for Intelligent Virtual Characters. In Buletin
Universitatea Politehnica Bucuresti, Bucuresti, submitted 2012.
• V. Lungu. Newtonian Emotion System. In Proceedings of the 6th
International Symposium on Intelligent Distributed Computing - IDC
2012, Springer Series on Intelligent Distributed Computing, vol. VI,
pages 307-315, Calabria, Italy, 2012.
• V. Lungu. Artificial Emotion Simulation Model and Agent Architecture. In Advances in Intelligent Control Systems and Computer Science, Springer Series in Advances in Intelligent Systems and Computing, vol. 187, pages 207-221, 2013.
• V. Lungu and A. Sofron. Using Particle Swarm Optimization for Particle Systems. In The 18th International Conference on Control Systems
and Computer Science, vol. 2, pages 750-754, Bucharest, Romania,
2011.
• V. Lungu. Artificial Emotion Simulation Model. In Agents for Complex Systems Workshop (ACSYS 2010), held in conjunction with SYNASC 2010, Timisoara, Romania, 2010.
• V. Lungu. Rule-based System for Emotional Decision-Making Agents.
In Distributed Systems Conference, Suceava, Romania, 2009.
5.3 Future work
There are several improvements that can be made to the model in order
to make it more flexible, and better able to suit the motivation needs for
virtual characters, as well as interesting new research directions that lead to
new application domains for the model.
The main issue pointed out about the model is its inability to scale incoming emotions according to action class. For example, greed is considered a driving force for humans; however, greed is not an emotion within Plutchik's taxonomy. Greed-like behavior can be approximated by rewarding a character acquiring wealth or power through the Joy emotion (which, within our model, is a driving force that the character attempts to maximize). However, in its current state, the model is unable to create a strongly greed-driven character. One solution would be to add an amplifier filter (similar to the personality filter) that only applies in certain cases (for a certain class of actions; in our example, those that result in increases in wealth and/or power, which in our scenario would be the Take action).
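Such an amplifier filter could be as simple as the following sketch (purely illustrative of the proposed extension; the action classes and gain value are invented):

    def amplifier_filter(action, feedback, gains=None):
        # Scale incoming emotion forces for certain action classes only, e.g.
        # boosting Joy from wealth-increasing actions to model a greedy character.
        gains = gains or {"Take": 2.5}               # hypothetical gain per action class
        g = gains.get(action, 1.0)
        return [g * f for f in feedback]

    print(amplifier_filter("Take", [0.2, 0.0, 0.0, 0.0]))   # -> [0.5, 0.0, 0.0, 0.0]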
We would also like to mention an implementation issue here, as we feel it is important: the method we used in our scenario to provide emotional feedback gives inconclusive results in some border cases, namely when the feedback is distributed evenly across the four axes. Fortunately, these cases are rare, but we feel a more refined feedback generation method could solve this issue.
Another interesting issue that has been pointed out in discussion, and that is not currently handled by the model, is the cold start problem: how the system behaves when there is no data available upon initialization. Currently, the system can generate data sets randomly and label them using the feedback formula. Agents that do not do this are considered blank-slate newborns, and will only learn from experience (we believe that having one's world view skewed by personal experience is a boon for human behavior simulation).
As mentioned before, there are several application domains that the model
can easily expand to. The first among these would be as a trust and reputation model. Emotions play such a role in humans. The process is called
reciprocal altruism, an emergent norm in human systems that states that one
person will help another, seemingly without any possibility of reciprocation,
in the hopes that the initial actor will be helped themselves at a later date - in
artificial intelligence terms, this means that an agent would act in a manner
that temporarily reduces its fitness while increasing another agent’s fitness,
with the expectation that the other agent will act in a similar manner at a
later time. Emotions serve as an evaluator to help spot cheaters that try to
abuse the system. We could use the Trust-Disgust axis in order to quantify
how a given agent performs when a contract has been agreed upon, and use
the Surprise-Anticipation axis in order to evaluate how an agent conforms to
emergent but non-contractual norms. In order for the system to be useful in
spotting cheaters, we would also need to achieve a level of information fusion
(integrating emotion information about an agent in relation to norms from
multiple sources - i.e. all other agents that have come into contact); this can
be achieved in a centralized or distributed manner, depending on goals.
Another possible extension of the model would be in the field of multi-agent systems, endowing them with collective emotions, where, based on the law of universal attraction, agents would be able to influence each other's
states through proximity. This would work to give an overall state of the
system. For example, if one or a few agents are damaged or not functioning
properly, this would go unnoticed; however, if there is something wrong with the network, the overall emotion displayed would be poor. This could be
used in self-organizing systems (such as ambient intelligence applications) in
order to give a non-invasive diagnosis of the system (both for the system
overall, as well as for individual devices); different emotions could be used
to make users aware of different classes of problems. The same system could
be used by artificial agents part of the same network to signal states among
themselves. It could also serve as an indicator of how devices interact and
influence each other (if emotion influence happens during direct interaction,
and not through the dissemination proposed earlier).
Yet another application is the integration of the model, together with a natural language processing unit capable of extracting emotion from text, into social networks. This would serve to analyze how the mood of a message influences a person's emotional state and how emotional states diffuse through human networks, granting new insight into how emotions work.
The model could also be used in the psychological domain, for example
allow medical professionals to play out different scenarios for patients in a
virtual reality environment in order to study their reaction and prescribe
treatment, as well as enable them to create scenarios that teach people how
to express their emotions by means of example.
As emotions have been shown to establish the importance of events and
details around us, and we have shown that the same can be applied to artificial systems, the Newtonian Emotion System could be used in time critical
information fusion applications, in order to establish information integration
priority (or belief maintenance in artificial agents).
It has been shown that mood is contagious in humans, so a large number
of applications will probably want to take advantage of this, and attempt to
influence users in order to get them into a more agreeable mood or ensure
that they are happy or enjoying themselves using a certain product or service. The technology could further be integrated with ambient intelligence
applications to influence people’s mood in order to make them more productive/happier/more accepting. Coupled with an emotion reading system, the
model could serve as an indicator for the state of an area (i.e. mental health
or quality of life), as well as give an idea on how neighboring areas interact
and influence each other.
Appendix A
Plutchik emotion model details
This chapter presents two figures that give some details about Plutchik's emotion model (the model on which the Newtonian Emotion System presented in this thesis is based).
Dyads
In our emotion representation model, emotions are created as a mix of the eight basic emotions. Each pair of two (dominant) emotions gives a new emotion. These pairs are called dyads. Figure A.1 presents the names given to dyads in Plutchik's emotion model: first-, second- and third-degree dyads are shown.
Image source: http://danspira.files.wordpress.com/2010/12/plutchik-emotional-dyads.jpg
Figure A.1: Plutchik extended dyads
Appraisal
Figure A.2 presents some examples of event appraisals within Plutchik's emotion theory. The chart shows what the triggering event is (stimulus event), what the significance of the event is to the subject (cognitive appraisal), the emotion felt (subjective reaction), what the subject's ensuing behavior should be (behavioral reaction) and the function it serves (function).
Figure A.2: Plutchik appraisal examples
Image source: http://thumbnails.visually.netdna-cdn.com/robert-plutchiks-psychoevolutionary-theory-of-basic-emotions_50290c02eeb7e_w587.jpg
Bibliography
[1] Soar: Along the frontiers. Online, available at www.soartech.com, 2002.
Soar technology.
[2] Fahrenheit video game. Online, available at http://en.wikipedia.org/wiki/Fahrenheit_(video_game), October 2009.
[3] ITSPOKE: An intelligent tutoring spoken dialogue system. Online, available at http://www.cs.pitt.edu/~litman/itspoke.html, October 2009.
[4] The sims video game. Online, available at http://thesims.ea.com, October 2009.
[5] M. Ben Ari. Principles of concurrent and distributed programming. Prentice Hall, N.Y., 1990.
[6] Ruth Aylett and Michael Luck. Applying artificial intelligence to virtual
reality: Intelligent virtual environments. In Applied Artificial Intelligence 4, pages 3–32, 2000.
[7] Christoph Bartneck. Integrating the OCC model of emotions in embodied characters. Design, pages 1–5, 2002.
[8] J. Bates. The role of emotion in believable agents. Communications of the ACM, 37(7):122–125, 1994.
[9] Ronald Brachman and Hector Levesque. Knowledge Representation and
Reasoning. Morgan Kaufman, 2004.
[10] A. Burke, F. Heuer, and D. Reisberg. Remembering emotional events.
Memory and Cognition, 20:277–290, 1992.
[11] Jans W. Carton. The hypertext d20 system reference document. Online,
2012. available at http://d20srd.org.
[12] Alex J. Champandard. Behavior trees for next-gen game AI (video, part 1). Online, December 2007. http://aigamedev.com/open/article/behavior-trees-part1/.
[13] Alex J. Champandard. Understanding behavior trees. Online, September 2007. http://aigamedev.com/open/article/bt-overview/.
[14] Alex J. Champandard. Behavior trees for next-gen game ai. Online,
December 2008. http://aigamedev.com/insider/presentations/behaviortrees/.
[15] Alex J. Champandard. Trends and highlights in game AI for 2011. Online, February 2012. http://aigamedev.com/insider/discussion/2011-trends-highlights.
[16] Eric Chown, Randolph M. Jones, and Amy E. Henninger. An architecture for emotional decision-making agents. In Proceedings of the first
international joint conference on Autonomous agents and multiagent
systems, pages 352–353, Bologna, Italy, July 2002.
[17] R. Cowie. Emotion-oriented computing: State of the art and key challenges. Online, available at http://emotion-research.net/projects/humaine/aboutHUMAINE/HUMAINE%20white%20paper.pdf, October 2010. HUMAINE Network of Excellence.
[18] A.D. Craig. Interoception: The sense of the physiological condition of
the body. Current Opinion in Neurobiology, 13:500–505, 2003.
[19] Michael David and Chen Sande. Serious games: Games that educate,
train, and inform, 2006.
[20] Sara de Freitas and Steve Jarvis. A framework for developing serious
games to meet learner needs. In Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), 2006.
[21] Sara de Freitas and Steve Jarvis. Serious games engaging training solutions: A research and development project for supporting training needs.
British Journal of Educational Technology, 38(3):523–525, May 2007.
[22] Ronald de Sousa. Emotion. Online, available at http://plato.stanford.edu/entries/emotion/, September 2010. Stanford Encyclopedia of Philosophy.
[23] A. S. Drigas, L. G. Koukianakis, and J. G. Glentzes. Knowledge engineering and a virtual lab for hellenic cultural heritage. In Proceedings
of the 5th WSEAS Int. Conf. on Artificial Intelligence, 2006.
[24] Oliver Duvel and Stefan Zerbst. 3D Game Engine Programming. Course
Technology PTR, 2004.
[25] David Eberly. The 3D Game Engine Architecture: Engineering Realtime Applications with Wild Magic. Morgan Kaufmann, 2005.
[26] David Eberly. 3D Game Engine Design: A Practical Approach to RealTime Computer Graphics. Morgan Kaufmann, 2006.
[27] Magy Seif El-Nasr, Thomas R. Ioerger, and John Yen. Peteei: a pet
with evolving emotional intelligence. In Proceedings of the third annual
conference on Autonomous Agents, pages 9–15, Seattle, Washington,
United States, April 1999.
[28] Wolfgang F. Engel. Shader X2: Introductions and Tutorials with DirectX 9. Wordware Publishing Inc., 2004.
[29] Randima Fernando. GPU Gems: Programming Techniques, Tips and
Tricks for Real-Time Graphics. Pearson Education, 2004.
[30] Adina Magda Florea and Adrian George Boangiu. Bazele logice ale
inteligentei artificiale. Universitatea Politehnica Bucuresti, 1994.
[31] Patrick Gebhard. Alma - a layered model of affect. In 4th International
Joint Conference of Autonomous Agents and Multi-Agent Systems (AAMAS 05), 2005.
[32] M.P. Georgeff, B. Pell, M.E. Pollack, M. Tambe, and M. Wooldridge.
The belief-desire-intention model of agency. In Proceedings of the 5th
international Workshop on intelligent Agents V, Agent theories, Architectures, and Languages, 1999.
[33] Aron Granberg. A* pathfinding project. Online, 2012. available at
http://www.arongranberg.com/unity/a-pathfinding.
[34] Jason Gregory. Game Engine Architecture. A.K. Peters, 2009.
[35] Kevin Gurney. An Introduction to Neural Networks. CRC Press, 1997.
[36] A.E. Henninger, R.M. Jones, and E. Chown. Behaviors that emerge from emotion and cognition: implementation and evaluation of a symbolic-connectionist architecture. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 321–328, Melbourne, Australia, 2003. ACM, New York, NY.
[37] Amy E. Henninger, Randolph M. Jones, and Eric Chown. Behaviors that emerge from emotion and cognition: A first evaluation. In Interservice/Industry Training Systems and Education Conference (I/ITSEC), 2002.
[38] Obe Hostetter. Video games - the necessity of incorporating video games
as part of constructivist learning. James Madison University, 2002. Department of Educational Technology.
[39] E. Hudlicka. Affective computing for game design. In Proceedings of the
4th Intl. North American Conference on Intelligent Games and Simulation (GAMEON-NA), pages 5–12, 2008.
[40] HUMAINE Network of Excellence. Humaine emotion annotation and representation language (EARL). Online, 2009. available at http://emotion-research.net/projects/humaine/earl/index_html/?searchterm=emotion%20markup%20language.
[41] R. M. Jones. An introduction to cognitive architectures for modeling
and simulation. In Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), 2004.
[42] James Kennedy and Russell C. Eberhart. Swarm Intelligence. Morgan
Kaufmann, 2001.
[43] E.A. Kensinger. Remembering emotional experiences: The contribution
of valence and arousal. Reviews in the Neurosciences, 15:241–251, 2004.
[44] J.E. Laird, A. Newell, and P.S. Rosenbloom. Soar: An architecture for
general intelligence. Artificial Intelligence, 33:1–64, 1987.
[45] Pat Langley, John E. Laird, and Seth Rogers. Cognitive architectures:
Research issues and challenges. Technical report, University of Michigan,
2002.
[46] R.S. Lazarus. Emotion and Adaptation. Oxford University Press, New
York, NY, 1991.
[47] Jill Fain Lehman, John Laird, and Paul Rosenbloom. A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg and D. Scarborough (Eds.), Invitation to Cognitive Science. MIT Press, 1996.
[48] Iolanda Leite, Carlos Martinho, André Pereira, and Ana Paiva. iCat: an affective game buddy based on anticipatory mechanisms. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 3, AAMAS ’08, pages 1229–1232, Richland, SC, 2008. International Foundation for Autonomous Agents and Multiagent Systems.
[49] Valentin Lungu. Rule-based system for emotional decision-making agents. In Sisteme distribuite (Distributed Systems), ISSN 2067-5259, Suceava, 2009. Online.
[50] Valentin Lungu. Artificial emotion simulation model. In Agents for
Complex Systems Workshop, Timisoara, 2010. SYNASC 2010.
[51] Valentin Lungu. Newtonian emotion system. In Proceedings of the 6th
International Symposium on Intelligent Distributed Computing - IDC
2012, volume VI, pages 307–315. Springer Series on Intelligent Distributed Computing, September 2012.
[52] Valentin Lungu. Artificial emotion simulation model and agent architecture. Advances in Intelligent Control Systems and Computer Science,
187:207–221, 2013.
[53] Valentin Lungu and Angela Şofron. Using particle swarm optimization
to create particle systems. Proceedings of CSCS-18 18th International
Conference on Control Systems and Computer Science, 2:750–754, May
2011.
[54] Magalie Ochs, Nicolas Sabouret, and Vincent Corruble. Simulation of the dynamics of non-player characters’ emotions and social relations in games. IEEE Transactions on Computational Intelligence and AI in Games, 1(4):281–297, December 2009.
[55] Xiao Mao and Zheng Li. Implementing emotion-based user-aware e-learning. In CHI 2009, Spotlight on Works in Progress, 2009.
[56] D. Moura and E. Oliveira. Fighting fire with agents: an agent coordination model for simulated firefighting. In Proceedings of the 2007 Spring
Simulation Multiconference, pages 71–78, San Diego, CA, 2007. Society
for Computer Simulation International. Volume 2.
[57] D. Nardi and R. Brachman. An introduction to description logics. Online, 2010. available at http://www.inf.unibz.it/~franconi/dl/course/dlhb/dlhb-01.pdf.
[58] E. Oliveira and L. Sarmento. Emotional advantage for adaptability and autonomy. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 305–312, Melbourne, Australia, 2003. ACM, New York, NY.
[59] A. Ortony, G. Clore, and A. Collins. The cognitive structure of emotions.
Cambridge University Press, Cambridge, 1988.
[60] H. Van Dyke Parunak, R. Bisson, S. Brueckner, R. Matthews, and
J. Sauter. A model of emotions for situated agents. In Proceedings
of the Fifth international Joint Conference on Autonomous Agents and
Multiagent Systems, pages 993–995, New York, NY, 2006. ACM.
[61] André Pereira, Carlos Martinho, Iolanda Leite, and Ana Paiva. iCat, the chess player: the influence of embodiment in the enjoyment of a game. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 3, AAMAS ’08, pages 1253–1256, Richland, SC, 2008. International Foundation for Autonomous Agents and Multiagent Systems.
[62] D. Pereira, E. Oliveira, and N. Moreira. Formal modelling of emotions in BDI agents. In Computational Logic in Multi-Agent Systems: 8th International Workshop, CLIMA VIII, 2008.
[63] David Pereira, Eugénio Oliveira, and Nelma Moreira. Modelling emotional BDI agents. In Workshop on Formal Approaches to Multi-Agent Systems (FAMAS 2006), 2006.
[64] David Pereira, Eugénio Oliveira, Nelma Moreira, and Luís Sarmento. Towards an architecture for emotional BDI agents. In EPIA 05: Proceedings of the 12th Portuguese Conference on Artificial Intelligence, 2005.
[65] David Pereira, Eugénio Oliveira, Nelma Moreira, and Luís Sarmento. Towards an architecture for emotional BDI agents. In EPIA 05: Proceedings of the 12th Portuguese Conference on Artificial Intelligence, pages 40–47. Springer, 2005.
[66] Matt Pharr. GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation. Pearson Education, 2005.
[67] R. Plutchik. Emotion: Theory, research, and experience: Vol. 1. Theories of emotion. Academic Press, New York, NY, 1990.
[68] David Poole, Alain Mackworth, and Randy Goebel. Computational Intelligence: a Logical Approach. Oxford University Press, 1998.
[69] Craig Reynolds. Steering behaviors for autonomous characters. Online,
1999. available at http://www.red3d.com/cwr/steer/.
[70] J. Rickel and W.L. Johnson. Animated agents for procedural training in virtual reality: Perception, cognition, and motor control. Applied Artificial Intelligence, 13:343–382, 1999.
[71] Jeff Rickel and W. Lewis Johnson. Steve: An animated pedagogical
agent for procedural training in virtual environments (extended abstract). SIGART Bulletin, 8:16–21, 1997.
[72] Jeff Rickel and W. Lewis Johnson. Steve: An animated pedagogical
agent for procedural training in virtual environments (extended abstract). SIGART Bulletin, 8:16–21, 1997.
[73] Stuart J. Russell and Peter Norvig. Artificial Intelligence – A Modern Approach. Prentice Hall, Englewood Cliffs, NJ, 1995.
[74] S. Schachter and J. Singer. Cognitive, social, and physiological determinants of emotional state. Psychological Review, 69:379–399, 1962.
[75] K. Scherer. What are emotions and how can they be measured? Social
Science Information, 44(4):695–729, 2005.
[76] Silvia Schiaffino, Analía Amandi, Isabela Gasparini, and Marcelo S. Pimenta. Personalization in e-learning: the adaptive system vs. the intelligent agent approaches. In Proceedings of the VIII Brazilian Symposium on Human Factors in Computing Systems, IHC ’08, pages 186–195, Porto Alegre, Brazil, 2008. Sociedade Brasileira de Computação.
[77] M. Schröder, H. Pirker, and M. Lamolle. First suggestions for an emotion annotation and representation language. In Proceedings of LREC’06 Workshop on Corpora for Research on Emotion and Affect, pages 88–92, Genoa, Italy, 2006.
[78] M. Schröder, L. Devillers, K. Karpouzis, J. Martin, C. Pelachaud, C. Peter, H. Pirker, J. Schuller, J. Tao, and I. Wilson. What should a generic emotion markup language be able to represent? In Second International Conference on Affective Computing and Intelligent Interaction: Lecture Notes in Computer Science. Springer, 2007.
[79] T. Sharot and E.A. Phelps. How arousal modulates memory: Disentangling the effects of attention and retention. Cognitive, Affective and
Behavioral Neuroscience, 4:294–306, 2004.
[80] M.P. Singh. Agent communication languages: Rethinking the principles. Computer, 31(12):40–47, 1998.
[81] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. Towards
a quantitative model of emotions for intelligent agents.
[82] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. A logic
of emotions for intelligent agents. In Proceedings of the 22nd national
conference on Artificial intelligence - Volume 1, pages 142–147. AAAI
Press, 2007.
[83] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. The OCC model revisited. In Proceedings of the 4th Workshop on Emotion and Computing – Current Research and Future Impact, 2009.
[84] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. The OCC model revisited. In Proceedings of the 4th Workshop on Emotion and Computing, 2009.
[85] B.R. Steunebrink, M.M. Dastani, and J-J.Ch. Meyer. Emotions as
heuristics in multi-agent systems. In Proceedings of the 1st Workshop on
Emotion and Computing - Current Research and Future Impact, pages
15–18, 2006.
[86] Martin Strauss and Michael Kipp. ERIC: a generic rule-based framework for an affective embodied commentary agent. pages 97–104, 2008.
[87] Arges Systems. Unitysteer - steering components for unity. Online, 2000.
[88] Unity Technologies. Unity 3d game engine reference manual. Online, 2012. available at http://docs.unity3d.com/Documentation/Components/index.html.
[89] Gerard Tel. Introduction to Distributed Algorithms. Cambridge University Press, 1994.
[90] W3C Incubator Group. Elements of an EmotionML 1.0. Online, 2008. available at http://www.w3.org/2005/Incubator/emotion/XGR-emotionml/.
[91] Michael Wooldridge and Nicholas R. Jennings. Pitfalls of agent-oriented development. In Proceedings of the Second International Conference on Autonomous Agents, pages 385–391. ACM Press, 1998.
[92] Nick Yee. The labor of fun: How video games blur the boundaries of
work and play. Games and Culture, 1:68–71, 2006.