EUROPEAN SOCIAL FUND
Invest in people!
Sectoral Operational Programme for Human Resources Development 2007 – 2013
Project POSDRU/88/1.5/S/61178 – Competitiveness and performance in research through quality doctoral programmes (ProDOC)
UNIVERSITY POLITEHNICA OF BUCHAREST
Faculty of Automatic Control and Computers
Computer Science Department
Senate Decision No. 219 of 28.09.2012
DOCTORAL THESIS
Artificial emotion simulation techniques for intelligent virtual characters
Author: Eng. Valentin Lungu
DOCTORAL COMMITTEE
President: Prof. dr. ing. Florica Moldoveanu, University Politehnica of Bucharest
Ph.D. supervisor: Prof. dr. ing. Adina Florea, University Politehnica of Bucharest
Reviewer: Prof. dr. ing. Viorel Negru, West University of Timisoara
Reviewer: Prof. dr. ing. Costin Badica, University of Craiova
Reviewer: Prof. dr. ing. Florin Radulescu, University Politehnica of Bucharest
Bucharest, 2012
University POLITEHNICA of Bucharest
Doctoral Thesis
Artificial Emotion Simulation
Techniques for Intelligent Virtual
Characters
Author:
M.Sc. Valentin Lungu
Supervisor:
Prof. Adina Magda Florea
A thesis submitted in fulfilment of the requirements
for the degree of Ph.D.
in the
Artificial Intelligence and Multi-Agent Systems Laboratory
Computer Science
November 2012
UNIVERSITY POLITEHNICA OF BUCHAREST
Abstract
Faculty of Computer Science and Automatic Control
Computer Science
Ph.D.
Artificial Emotion Simulation Techniques for Intelligent Virtual
Characters
by M.Sc. Valentin Lungu
The study of what is now called emotional intelligence has revealed yet another
aspect of human intelligence. Research in human emotion has provided insight
into how emotions influence human cognitive structures and processes, such as
perception, memory management, planning, reasoning and behavior. This information also provides new ideas to researchers in the fields of affective computing
and artificial life about how emotion simulation can be used in order to improve
artificial agent behavior.
Virtual characters that act intelligently based on their goals and desires, as opposed to standing still, waiting for the user and interacting through pre-scripted dialogue trees, bring a great deal of depth to the experience. Evolving virtual worlds are far more interesting than static ones. Integrating artificial intelligence and virtual reality tools and techniques makes possible applications in which activity takes place independently of the user, such as populating a virtual world with crowds, traffic, virtual humans and non-humans, and behavior and intelligence modeling [1].
Emotion theories attribute to emotions several effects on cognitive, decision-making and team-working capabilities (such as directing and focusing attention, motivation, perception, and behavior towards objects, events and other agents).
The main goal of our work is to provide virtual characters with the ability of
emotion expression. Our approach is based on the idea that emotions serve a
fundamental function in human behavior, and that in order to provide artificial
agents with rich, believable affective behavior, we need to develop an artificial system that serves the same purpose. We believe that emotions act as a subsystem that enhances human behavior by stepping up brain activity in arousing circumstances, directing attention and behavior, establishing the importance of events and acting as motivation.
This thesis analyzes important aspects of human emotions and models them in
order to create more believable artificial characters and more effective agents and
multi-agent systems. This thesis deals with the effects that emotions have on
perception, memory, learning and reasoning, and their integration into an artificial agent architecture. We show that emotion simulation not only serves to improve human-computer interaction and behavior analysis, but can also be used to improve artificial agent and multi-agent system performance and effectiveness.
We study various emotion theories and computational emotion simulation models and, drawing on their insights, we propose a model of artificial agents that attempts to simulate the effects of human-like emotions on the cognitive and reactive abilities of artificial agents.
This thesis summarizes the psychological theories that our computational emotion
model is based on, defines the key concepts of the Newtonian emotion model that
we developed and describes how they work within an agent architecture. The
Newtonian emotion model is a light-weight and scalable emotion representation
and evaluation model to be used by virtual agents. We also designed a plug-and-play emotion subsystem that can be integrated into any agent architecture. We
also describe the first theoretical prototype for our architecture, an artificial agent
emotion-driven architecture that not only attempts to provide complex believable
behavior and representation for virtual characters, but that attempts to improve
agent performance and effectiveness by mimicking human emotion mechanics such
as motivation, attention narrowing and the effects of emotion on memory.
We have chosen role-playing games, in which virtual characters exist and interact in virtual environments, as our application domain because of the unique challenges and constraints that arise for artificial intelligence techniques geared towards this type of application: the AI must run alongside CPU-intensive processes such as graphics and physics simulations, and role-playing games are unique in that they have social interactions built into the game. We believe that believable emotional expression and behavior will add a new dimension, advancing virtual environments beyond visually compelling but essentially empty spaces to incorporate other aspects of virtual reality that require intelligent behavior.
Acknowledgements
The work has been funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Romanian Ministry of Labour, Family and
Social Protection through the Financial Agreement POSDRU/88/1.5/S/61178.
Contents

Abstract
Acknowledgements
List of Figures
List of Tables

1 Introduction
  1.1 Motivation and objectives
  1.2 Thesis outline

2 Overview of emotion theory
  2.1 Emotion representation
  2.2 Emotion synthesis
    2.2.1 Perceptual
    2.2.2 Cannon-Bard
    2.2.3 Two-factor
    2.2.4 Lazarus
  2.3 Emotion and cognitive processes
    2.3.1 Selectivity of attention
    2.3.2 Emotion and memory storage

3 Overview of artificial emotion simulation models
  3.1 Ortony, Clore and Collins (OCC)
  3.2 Connectionist model (Soar)
  3.3 Evolving emotions (PETEEI)
  3.4 EBDI
  3.5 ERIC
  3.6 Deep
  3.7 iCat
  3.8 Comparison

4 Artificial emotion simulation techniques for virtual characters
  4.1 Emotion-driven behavior engine
    4.1.1 Architecture
      4.1.1.1 Inference engine
      4.1.1.2 Truth maintenance engine
      4.1.1.3 Algorithms
    4.1.2 Emotion simulation
    4.1.3 Knowledge representation
    4.1.4 Effects on cognitive processes
      4.1.4.1 Perception management
      4.1.4.2 Memory management
  4.2 Newtonian Emotion System (NES)
    4.2.1 Newtonian emotion space
    4.2.2 Plug-and-play architecture
    4.2.3 NES Learning
    4.2.4 Personality filter
    4.2.5 Emotion as motivation

5 Emotion integration in game design
  5.1 Artificial intelligence in virtual environments
    5.1.1 Application domains
    5.1.2 Techniques
  5.2 ALICE (SGI)
    5.2.1 Implementation
  5.3 Orc World
    5.3.1 Scenario and rules
    5.3.2 Actions and feedback
    5.3.3 Learning module
    5.3.4 Behavior module
  5.4 Graphical representation

6 Conclusion and future work
  6.1 Conclusion
  6.2 Contribution
  6.3 Future work

A Plutchik emotion model details

Bibliography
List of Figures

2.1 Plutchik's wheel of emotions
2.2 Lazarus emotion synthesis model
3.1 Soar 9 system architecture
3.2 Soar emotion subsystem
3.3 ERIC emotion system
3.4 Deep emotion system
4.1 Architecture
4.2 Inference engine
4.3 Forward chaining
4.4 RETE Network
4.5 Backward chaining
4.6 Truth maintenance system
4.7 Agent's world model
4.8 Knowledge indexing
4.9 Emotion system architecture
4.10 Joy learning curve
4.11 Trust learning curve
4.12 Fear learning curve
4.13 Surprise learning curve
5.1 Game AI Framework
5.2 Finite state machine
5.3 Module diagram
5.4 Readers module overview
5.5 Linking module overview
5.6 Round sequence diagram
5.7 Character perception sensors
5.8 Information fusion
5.9 Context feature vector
5.10 Action evaluation
5.11 Behavior tree
5.12 Decision flow
5.13 Emotion system architecture
5.14 A virtual character displaying the basic emotions
5.15 Bidimensional emotion representation graph
A.1 Plutchik extended dyads
A.2 Plutchik appraisal examples
List of Tables

3.1 Emotion simulation model comparison
4.1 Knowledge example
4.2 Forward chaining process
4.3 Backward chaining process
4.4 Attack impact table
4.5 Take impact table
5.1 Action feedback table
5.2 Action feedback generation example table
List of Algorithms

1 Agent.Run(Plan p)
2 Forward.Update()
3 Forward.Run()
4 RETE(IIndividual m)
5 RETE(TMSNode node)
6 Trigger(δ)
7 Update(Goal G, Plan p)
8 Run(Goal G, Plan p)
9 Emotion influence on rule instances
10 Event influence
11 SetLabel(TMSNode node, NodeLabel value)
12 TMSEngine.Evaluate(Expression expr)
13 Turn sequence
14 Behavior
15 Decorator
16 Selector
17 Sequence
Chapter 1
Introduction
Several beneficial effects on human cognition, decision-making and social interaction have been attributed to emotions, while several studies and projects in
the field of artificial intelligence have shown the positive effects of artificial emotion simulation in human-computer interaction and simulation applications. This
thesis presents various emotion theories and artificial emotion simulation models,
and proposes a model of emotions that tries to reproduce the human-like effects on
cognitive structures, processes, patterns and functions in artificial agents through
emotion simulation and emotion-based memory management, thus leading to more believable agents while also improving agent performance.
Autonomous artificial entities must be able to exist in a complex environment, respond to relevant changes in it, reason about the selection and application of operators and actions, and pursue and accomplish goals. This type of agent has applications in various fields, including
character behavior in virtual environments, as used in e-learning applications and
serious games.
Emotion integration is necessary in such agents in order to generate complex believable behavior. It has been shown that emotions influence human cognition,
interaction, decision-making and memory management in a beneficial and meaningful way. Several studies in the field of artificial intelligence have also been
conducted ([2–4]) and have shown the necessity of emotion simulation in artificial
intelligence, especially when human-computer interaction is involved.
Emotions are a part of the human evolutionary legacy [5], serving adaptive ends,
acting as a heuristic in time-critical decision-processes (such as fight-or-flight situations); however, emotions have also become an important part of the multi-modal
process that is human communication, to the point that their absence is noticeable and bothersome.
Affective computing is the study and development of systems that can recognize,
interpret, process and simulate human affects, in order to add this extra dimension
into our interaction with machines. Our goal is to create believable intelligent
artificial characters capable of displaying affective behavior. To achieve this, we
attempt to provide artificial characters with an emotional layer similar to that
of humans, serving adaptive ends. The emotion subsystem will influence agent
behavior by establishing the importance of events and by influencing knowledge
processing, as well as provide the agent with an emotional state that it will be
able to express and that will further influence its behavior. We first describe our
Newtonian emotion system, which explains the way we represent emotions and how
external and internal forces act upon them, based on the work of psychologist R.E.
Plutchik [6] and Newton’s laws of motion. Based on the psychological theory of
Dr. R. Lazarus [7], we devised an emotional subsystem that interacts with and
influences an agent architecture. The system also takes into account the effects
that emotions have on human perception and cognitive processes.
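The Newtonian analogy can be illustrated with a short sketch: the emotional state is a point in a Plutchik-style four-dimensional space, and appraised events act as forces upon it. The concrete update rule, mass value and time step below are illustrative assumptions of ours, not the model as defined later in the thesis:

```python
import numpy as np

DT = 0.1     # simulation time step (illustrative value)
MASS = 2.0   # "emotional inertia": a larger mass means slower emotional change

def step(state, velocity, force):
    """One Newtonian update of the 4D emotion vector: F = m*a,
    then integrate acceleration into velocity and position."""
    accel = np.asarray(force, dtype=float) / MASS
    velocity = velocity + accel * DT
    state = state + velocity * DT
    return state, velocity

# A joyful event pushes the state along the joy axis.
state, velocity = step(np.zeros(4), np.zeros(4), [1.0, 0.0, 0.0, 0.0])
```

The appeal of this formulation is that stronger events (larger forces) move the state further, while "heavier" (more stoic) characters change state more slowly.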
As an application domain, we chose virtual characters in computer role playing
games (CRPGs) as they are a type of game meant to simulate all manner of human
interactions (be they gentle or violent in nature). This type of game provides
us with complex dynamic environments in which modern agents are expected to
operate. It also focuses on individual characters and the way they respond to
relevant changes in their environment (behavior). Last but not least, the limited action set available to characters and current-generation game engines significantly reduce the overhead of proof-of-concept applications.
Applications of the technology are not limited to CRPGs; it is also useful for behavior simulation, interactive storytelling and, indeed, any application involving human-computer interaction (for example, online social sites could recommend events and friends based on mood).
1.1 Motivation and objectives
There are several key issues that current artificial emotion simulation models need
to address:
• Expandability Most models are not expandable; in fact, they impose a certain knowledge representation scheme, agent architecture and reasoning method. While this is not inherently bad, game AI algorithms and techniques are continually evolving, as shown by AI Game Dev's own Alex Champandard [8], so a model tied to one technique quickly becomes dated.
• Complexity Most emotion simulation models are very complex not only
from an implementation point of view, but from the theoretical and design
ones as well. Many algorithm parameters and a complex theory and system mean a steep learning curve for both designers and developers, which in turn leads to a low adoption rate of the model in both industry and academia.
• Scalability Artificial emotion simulation models need to be at least as scalable as current game AI techniques; however, due to the high complexity of the simulation, existing models cannot handle a large number of agents at the same time.
• Theory It is important for artificial emotions to have similar effects on
virtual character cognitive processes as human emotions do on human cognitive processes, in order for emotions to influence behavior in a believable
way. It is also important to use emotions as motivation for virtual character
behavior.
• Game engine integration Most artificial emotion simulation models are of a cognitive nature; however, most game engines do not provide support for knowledge metadata for objects and agents in virtual environments.
Keeping in mind these restrictions, the main objectives of this thesis are as follows:
• Analysis of psychological effects of emotions on human cognitive processes
• Analysis of current artificial emotion simulation models
• Proposal of an original architectural solution for emotion representation and a virtual character framework that possesses the following characteristics:
– easily expandable; independent of the AI techniques used to endow the
virtual characters with behavior
– simple, few algorithm parameters, easy to learn, use and implement
– reproduces all the effects of emotion on behavior according to theory, and uses emotions as motivation
– easily integrated with a game engine
• Implementation and testing of a prototype of the proposed solution
1.2 Thesis outline
The thesis is structured in six chapters. In this section we present the detailed
content of each chapter.
In Chapter 1: Introduction we establish the context and motivation of the research, define the problem domain, establish the thesis objectives and explain how the thesis responds to these objectives.
In Chapter 2: Overview of emotion theory we present the most influential
psychological theories on emotion and explain their approaches. We begin by
presenting Plutchik's emotion taxonomy and the theory behind it, an emotion classification that lends itself well to vectorial representation and use in artificial systems. We then go on to present theories on how emotion synthesis is achieved in humans, stressing that cognitive processing is necessary for emotions to occur and that the quality and intensity of emotions are controlled through cognitive processes. Last but not least, in this chapter we present the effects that emotions have on human cognitive processes such as perception and memory, and what their utility might be in an artificial system.
In Chapter 3: Overview of artificial emotion simulation models we analyze
some influential emotion simulation models. We present two popular emotion
models with different goals and approaches in emotion simulation, and end by
presenting the findings of the PETEEI study that shows the importance of learning
and evolving emotions. As shown in this chapter, there are several key aspects to
take into consideration when designing a memory and emotion architecture, and
we believe that all are covered by our model. We also point out that the models
presented are cumbersome (the models have lots of parameters to adjust, or a
steep learning curve), or are not adaptable and expandable (they impose a certain
type of behavior generation and reasoning algorithm).
In Chapter 4: Artificial emotion simulation techniques for virtual characters we present our two approaches to providing virtual characters with complex,
rich, believable emotion-influenced behavior. In the first half we present the emotion-driven behavior engine, a full belief-desire-intention architecture that accomplishes the task, but which we discovered to be inadequate for the task at hand. We then go on to explain how we have
improved upon the solution and solved the issues with our second model, the
Newtonian Emotion System and the Plug-and-Play architecture, a versatile, lightweight, expandable and easily adaptable artificial emotion simulation model and
agent architecture. A key advantage of the Newtonian Emotion System is that it is designed to work with any behavior generation and machine learning algorithm, allowing designers and developers to adapt it and make it as simple or as complex as the task demands.
In Chapter 5: Emotion integration in game design we present how the two
architectures (the Emotion-Driven Character Behavior Engine, and the Newtonian Emotion System) were integrated to provide non-player character behavior
in two games: a fire evacuation scenario serious game named ALICE, meant to
teach preteen students proper fire evacuation procedures, and a demo game that
showcases the capabilities of the NES, called OrcWorld. We present the underlying virtual character framework developed in both cases, as well as game-specific
modules, such as action, behavior and learning. We also present some ideas on
emotion representation in games as well as diagnostic/research tools.
In Chapter 6: Conclusion and future work we present the conclusions, contributions and new interesting research directions to be undertaken. In the first
part we present a discussion pertaining to conclusions of the thesis. The second
section presents a synthesis of original research contributions and published work,
while the last part proposes some future developments and directions for research.
The results obtained in this thesis are both theoretical and experimental.
Chapter 2
Overview of emotion theory
In order to endow virtual characters with an emotion system that provides agents
with believable emotional reactions and influences their cognitive processes (perception, decision making) in the same way that emotions influence behavior in
humans, we need a grasp of human cognitive emotion theory. In this chapter we
summarize the psychological theories upon which we built our emotion simulation
model, present their main ideas and effects on physical and cognitive processes [5]
and make references to their application regarding artificial agent design.
Increased interest in emotional intelligence has revealed yet another aspect of
human intelligence. It has been shown that emotions have a major impact on the
performance of many of our everyday tasks, including decision-making, planning,
communication, and behavior [9].
Emotion is linked with mood, temperament, personality, disposition, and motivation. Emotions are thought to act as catalysts for brain activity: they increase or decrease the brain's activity level, direct our attention and behavior, and inform us of the importance of events around us [10].
An important distinction is between an emotion and the results of the emotion (behaviors, emotional expressions). Often, a certain behavior is triggered as a
direct result of an emotional state (crying, fight or flight). If it is possible to
experience a certain emotion without the associated behavior, then the behavior
is not essential to the emotion.
Another means of distinguishing emotions concerns their occurrence in time. Some
emotions occur over a short period of time (surprise), while others are more long
lasting (love). Longer lasting emotions could be viewed as a predisposition to
experience a certain emotion regarding a certain object or person (this may be
implemented as a slow rate of emotion decay associated with a certain object
or agent). A distinction is made between emotion episodes and emotional dispositions. An emotional disposition means the subject is generally inclined to
experience certain emotions (e.g. an irritable person is disposed to feel irritation
more easily than others).
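The implementation idea suggested above (a slow rate of emotion decay associated with a certain object or agent, and a disposition as a resting level) can be sketched as exponential decay toward a baseline; the exponential form and the rate values are illustrative assumptions of ours:

```python
import math

def decay_toward(intensity, baseline, dt, rate):
    """Exponentially decay an emotion's intensity toward a resting baseline.
    A nonzero baseline models an emotional disposition (e.g. an irritable
    agent rests at a slightly elevated anger level)."""
    return baseline + (intensity - baseline) * math.exp(-rate * dt)

# Long-lasting emotions get a small per-object decay rate, short-lived
# ones a large rate (the rates below are made up for illustration).
attachment = decay_toward(1.0, 0.0, dt=60.0, rate=0.001)  # still close to 1
surprise   = decay_toward(1.0, 0.0, dt=60.0, rate=0.5)    # effectively gone
```

Storing a separate rate per object or agent reproduces the "predisposition toward a certain object" effect described above without any extra machinery.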
Some theories place emotions within a more general category of affective states
where affective states can also include emotion-related phenomena such as pleasure
and pain, motivational states, moods, dispositions and traits.
2.1 Emotion representation
One of the most influential classification approaches for general emotional responses was developed by Robert Plutchik in the 1980s. Plutchik developed a
theory showing eight primary human emotions: joy, trust, fear, surprise, sadness,
disgust, anger, and interest (Fig. 2.1), and argued that all human emotions can
be derived from these [6]. Plutchik also stated that emotions serve an adaptive
role in helping organisms deal with key survival issues posed by the environment,
and that the eight basic emotions evolved in order to increase the reproductive
fitness of the individual, each triggering a behavior or type of behavior with a high
survival value. These eight basic emotions were grouped into four pairs of polar
opposites with varying degrees of intensity (displayed as the wheel of emotions),
making it easy to represent emotional state in a four-dimensional vector space.
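The four polar pairs suggest a direct vector encoding, which can be sketched as follows. The axis ordering, the [-1, 1] intensity bounds and the helper names are illustrative assumptions of ours, not part of Plutchik's theory:

```python
import numpy as np

# Each axis holds one polar pair; positive values lean toward the first
# emotion of the pair, negative values toward its opposite.
AXES = [("joy", "sadness"), ("trust", "disgust"),
        ("fear", "anger"), ("surprise", "interest")]

class EmotionVector:
    def __init__(self, values=(0.0, 0.0, 0.0, 0.0)):
        # Clip intensities to [-1, 1]: full opposite pole to full primary pole.
        self.v = np.clip(np.asarray(values, dtype=float), -1.0, 1.0)

    def dominant(self):
        """Name of the strongest pole currently active."""
        i = int(np.argmax(np.abs(self.v)))
        pos, neg = AXES[i]
        return pos if self.v[i] >= 0 else neg

    def arousal(self):
        """Overall emotional intensity as the Euclidean norm of the state."""
        return float(np.linalg.norm(self.v))

state = EmotionVector((0.8, 0.1, -0.3, 0.0))  # mostly joyful, slightly angry
```

Placing opposites on a shared axis also captures Plutchik's claim that polar emotions cancel each other rather than coexist at full strength.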
2.2 Emotion synthesis
Some theories argue that cognitive activity is necessary for emotions to occur.
This implies that emotions are about something (an object), that is, that they possess intentionality. The cognitive activity in question (judgment, evaluation) may be conscious or not. Lazarus' theory [7] states that an emotion is a three-stage disturbance whose stages occur in the following order:

1. cognitive appraisal - the subject assesses the event cognitively;
2. physiological change - biological changes occur as a reaction to the cognitive appraisal (increased heart rate, adrenal response);
3. action - the individual feels the emotion and chooses how to react.

(Figure 2.1: Plutchik's wheel of emotions. Image source: http://wakkanew.files.wordpress.com/2010/03/plutchiks wheel of emotions1.png)

Lazarus accentuates that the type and intensity of emotion are controlled through cognitive processes.
These processes influence the forming of the emotional reaction by altering the
relationship between the person and the environment. Our model uses a sequence similar to that described by Lazarus in order to cognitively appraise events and choose an action:

1. a triggering event occurs;
2. the event and the associated emotional state are assessed cognitively;
3. the emotional state and physiological response (if any) are modified;
4. an action or series of actions is inserted into an action queue (be it an agenda or a plan, as used by forward and backward chaining inference engines respectively) according to the exhibited emotion and its intensity.
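The four-step appraisal sequence can be sketched as an event-handling loop. The handler names, the knowledge lookup and the intensity-ordered queue policy below are hypothetical stand-ins for the components described later in the thesis:

```python
def appraise(event, knowledge):
    """Cognitive assessment (step 2): look the event up in the agent's
    knowledge; this dictionary lookup is a hypothetical stand-in for a
    real inference engine, defaulting to mild surprise."""
    return knowledge.get(event, ("surprise", 0.2))

def handle_event(event, knowledge, emotional_state, action_queue):
    # 1. a triggering event arrives from the environment
    emotion, delta = appraise(event, knowledge)                           # 2. assessment
    emotional_state[emotion] = emotional_state.get(emotion, 0.0) + delta  # 3. update state
    # 4. enqueue a reaction, ordered by emotion intensity so that more
    #    arousing events are served first (an illustrative policy)
    action_queue.append((emotional_state[emotion], emotion, f"react-to-{event}"))
    action_queue.sort(reverse=True)
    return action_queue

queue = handle_event("fire", {"fire": ("fear", 0.9)}, {}, [])
```

Sorting by intensity gives a simple agenda in which fear-inducing events preempt routine ones, mirroring the attention-directing role of emotions.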
2.2.1 Perceptual
This theory argues that conceptually based cognition is unnecessary for emotions
to be about something. It states that the perception of bodily changes itself carries the meaningful content of the emotion. According to perceptual theory, emotions
become similar to other senses, such as vision or touch, and provide information
about the subject and the environment.
2.2.2 Cannon-Bard
The Cannon-Bard theory is a psychological theory which suggests that people feel
emotions first and then act upon them. These actions include changes in muscular
tension, perspiration, etc. The theory is based on the premise that one reacts to
an event and experiences the associated emotion concurrently. The subject first
perceives the stimulus, then experiences the associated emotion. Subsequently,
the perception of this associated emotion influences the subject’s reaction to the
stimulus. The theory states that a subject is able to react to a stimulus only after
perceiving the experience itself and the associated emotion.
2.2.3 Two-factor
In the 1960s, Singer-Schachter conducted a study showing that subjects can have
different emotional reactions despite being in the same physiological state. During
the study, experiments were performed where the subjects were placed in the same
physiological state with an injection of adrenaline and observed to express either
anger or amusement depending on whether a planted peer exhibited the same emotion. The study validated their hypothesis (known as the Singer-Schachter theory)
that cognitive processes, not just physiological reactions, play an important role
in determining emotions [11].
2.2.4 Lazarus
A pioneer in the study of emotions and their relationship to cognition, Richard
Lazarus, developed an influential theory in the field that explains the steps of
emotion synthesis and how it affects cognition. Like Singer-Schachter, Lazarus
argued that cognitive activity (judgements, evaluations) is necessary for emotions to occur.

Figure 2.2: Lazarus emotion synthesis model

He stressed that the quality (i.e. the combination of basic emotions
according to Plutchik’s theory) and intensity of emotions are controlled through
cognitive processes; thus, the core of Lazarus' theory is the cognitive process of appraisal (Fig. 2.2): before emotion occurs, people make an automatic, often unconscious assessment of the situation and its significance. Lazarus described the process as a three-stage disturbance [7].
1. appraisal - the subject evaluates the event cognitively (consciously or not,
symbolic or not)
2. (a) emotion - cognitive appraisal triggers emotional changes (e.g. fear)
(b) arousal - the cognitive reaction coupled with the emotional response
trigger physiological changes (e.g. increased heart rate and adrenal
response)
3. action - the subject feels the emotion (which provides additional information) and chooses how to react
2.3 Emotion and cognitive processes
2.3.1 Selectivity of attention
Emotions emerged in the process of evolution as the means by which living creatures determine the biological significance of the various states of the organism
as well as of external influences; as a critical tool for adaptation and survival,
emotions influence how we process and store information [12]. Through the attentional blink paradigm, it has been shown that events and objects that hold a high
level of emotional arousal for the subject are more likely to be processed in conditions of limited attention [13], suggesting prioritized processing of such stimuli.
What this means is that subjects will focus on the arousing details of the stimuli
which will be processed, while peripheral details may not be (if attention is limited
this leads to attention narrowing, where the subject’s range of stimuli to which it
is sensitive is decreased) [14]. To sum up, an increase in arousal levels leads to
attention narrowing, a decrease in the range of stimuli from the environment to
which the subject is sensitive. This theory states that attention will be focused on
arousing details, so that information regarding the source of the emotional arousal
is memorized while peripheral details may not be [12].
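The attention-narrowing mechanism above can be sketched as a simple priority filter. This is a hedged illustration, not the thesis mechanism: the capacity formula, stimulus values and names are assumptions made for the example.

```python
# Sketch of arousal-driven attention narrowing: under a limited attention
# budget only the most arousing stimuli are processed, and a higher overall
# arousal level shrinks the budget further.
def attend(stimuli, arousal_level, base_capacity=4):
    """stimuli: list of (name, arousal) pairs; returns the names processed."""
    # higher arousal narrows attention: fewer slots available
    capacity = max(1, base_capacity - int(arousal_level * (base_capacity - 1)))
    ranked = sorted(stimuli, key=lambda s: s[1], reverse=True)
    return [name for name, _ in ranked[:capacity]]

stimuli = [("snake", 0.9), ("tree", 0.1), ("noise", 0.5), ("cloud", 0.05)]
print(attend(stimuli, arousal_level=0.0))  # calm: all four stimuli processed
print(attend(stimuli, arousal_level=1.0))  # prints "['snake']"
```

At low arousal the agent attends broadly; at high arousal only the source of the arousal survives the filter, mirroring the narrowing effect described in [12, 14].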
2.3.2 Emotion and memory storage
Emotions have a big impact on memory. Many studies have shown that the most
vivid memories tend to be of emotionally charged events, which are recalled more
often and with greater detail than neutral events. This can be linked to human
evolution, when survival depended on behavioral patterns developed through a
process of trial and error that was reinforced through life and death decisions.
Some theories state that this process of learning became embedded in what is
known as instinct.
A similar system may be used to decide the importance and relevance of events
and knowledge and manage memory elements in artificial agents.
Emotions have positive effects on cognitive processes, such as increasing the likelihood of memory consolidation (the process of creating a permanent record). Also, over time, the quality of memories decreases; however, memories involving arousing stimuli remain the same or improve [12]. A similar system may be used to organize and manage an artificial agent's memory elements, improving memory storage efficiency, knowledge recall and rule firing speed.
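One simple way to transfer this effect to an artificial agent is arousal-weighted forgetting. The sketch below is illustrative only; the decay law and constants are assumptions, not the thesis model.

```python
# Sketch: arousal-weighted memory decay. Memories of arousing events decay
# more slowly, mirroring the consolidation effect described above.
def decay(strength, arousal, rate=0.1):
    """One time-step of decay; arousal in [0, 1] attenuates forgetting."""
    return strength * (1.0 - rate * (1.0 - arousal))

neutral, arousing = 1.0, 1.0
for _ in range(10):                       # ten time steps
    neutral = decay(neutral, arousal=0.0)
    arousing = decay(arousing, arousal=0.9)
print(round(neutral, 3), round(arousing, 3))   # prints "0.349 0.904"
```

After ten steps the neutral memory has lost most of its strength while the emotionally charged one is nearly intact; a memory manager can then evict the weakest elements first.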
Chapter 3
Overview of artificial emotion simulation models
We begin this chapter by presenting several popular emotion models with different goals and approaches to emotion simulation, including the findings of the PETEEI study, which show the importance of learning and evolving emotions.
As shown in this section, there are several key aspects to take into consideration
when designing a memory and emotion architecture, and we believe that all are
covered by our model.
3.1 Ortony, Clore and Collins (OCC)
The OCC (Ortony Clore & Collins) model has become the standard model for the
synthesis of emotion. The main idea of this model is that emotions are valenced
reactions to events, agents or objects (the particular nature being determined by
contextual information). This means that the intensity of a particular emotion
depends on objects, agents and events present in the environment [2]. In this
model, emotions are reactions to perceptions and their interpretation is dependent
on the triggering event, for example: an agent may be pleased or displeased by the
consequences of an event, it may approve or disapprove of the actions of another
agent or like or dislike an object. Another differentiation taken into account within
the model is the target of an event or action and the cause of the event/action:
events can have consequences for others or for oneself and an acting agent can be
another or oneself. These consequences (whether for another or oneself) can be
divided into desirable or undesirable and relevant or irrelevant.
In the OCC model, emotion intensity is determined by three intensity variables:
• desirability - the reaction towards events, evaluated with regard to goals
• praiseworthiness - relates to the actions of other agents (and self), evaluated with regard to standards
• appealingness - depends on the reaction to objects, and its value is determined with regard to attitudes
The authors also define a set of global variables (sense of reality, proximity, unexpectedness and arousal) that influence all three emotion
categories. There is also a threshold value for each emotion, below which an
emotion is not subjectively felt. Once an emotion is felt however, it influences the
subject in meaningful ways, such as focusing attention, increasing the prominence
of events in memory, and influencing judgments.
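The threshold idea can be sketched as follows. The values, names and the toy appraisal function are invented for illustration; they are not the OCC formulas themselves.

```python
# Sketch of the OCC threshold idea: an emotion's potential is computed from
# its intensity variables, but it is only subjectively "felt" once it exceeds
# a per-emotion threshold.
THRESHOLDS = {"joy": 0.2, "fear": 0.4}   # illustrative per-emotion thresholds

def felt_intensity(emotion, potential):
    """Return 0 below the threshold, the surplus intensity above it."""
    return max(0.0, potential - THRESHOLDS[emotion])

def appraise_event(desirability, likelihood):
    """Toy desirability-based potential for joy, a valenced reaction to an event."""
    return max(0.0, desirability) * likelihood

potential = appraise_event(desirability=0.8, likelihood=0.5)
print(round(felt_intensity("joy", potential), 2))   # prints "0.2"
print(felt_intensity("fear", 0.3))                  # prints "0.0" (below threshold)
```

Only the felt (above-threshold) intensity then goes on to influence attention, memory prominence and judgment, as described above.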
As shown, according to the OCC model, behavior is a response to an emotional
state in conjunction with a particular initiating event [2]. As in the OCC model, in
our model, emotions are valenced reactions to events, agents or objects determined
by contextual information; in fact we will be using a simplified version of the OCC
emotion evaluation process similar to the one used in [4] in early validation of our
model.
3.2 Connectionist model (Soar)
In this model, emotions are regarded as subconscious signals and evaluations that
inform, modify, and receive feedback from a variety of sources including higher
cognitive processes and the sensorimotor system. Because the project focuses on
decision-making, the model emphasizes those aspects of emotion that influence
higher cognition [15].
The model uses two factors, clarity and confusion (Fig. 3.2), in order to provide
an agent with an assessment of how well it can deal with the current situation.
Confusion is a sign that the agent’s current knowledge, rule set and abilities are
inadequate to handle the current situation, while clarity comes when the agent’s
internal model best represents events in the world. Since confusion is a sign of
danger it is painful and to be avoided, while clarity, indicating the safety of good
decisions, is pleasurable and sought-after.
Figure 3.1: Soar 9 system architecture
Pleasure/pain, arousal and clarity/confusion form the basis of the emotional system. Instead of creating separate systems for fear, anger, etc., the authors assume
that humans attach emotional labels to various configurations of these factors.
For example, fear comes from the anticipation of pain. Anxiety is similar to fear,
except the level of arousal is lower [15]. The advantage of a generalized system is
that it does not require specialized processing.
Arousal is the main way that the emotional system interacts with the cognitive
system. Memory and attention are most affected by changes in arousal.

Figure 3.2: Soar emotion subsystem

Highly aroused people tend to rely on well-learned knowledge and habits even if they have
more relevant knowledge available.
In the Soar implementation of this system, the rules that comprise it have special conditions such that different kinds of rules only fire at differing levels of arousal. For example, highly cognitive rules will not fire at high levels of arousal,
while more purely emotional rules may only fire at such levels [15]. This allows
for a very general approach to emotions.
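The gating mechanism can be sketched as follows. The rule set and the arousal windows below are invented for illustration; the actual Soar implementation attaches such conditions to Soar production rules [15].

```python
# Sketch of arousal-gated rule firing: each rule carries an arousal window
# and is eligible only when the current arousal level falls inside it.
RULES = [
    {"name": "plan_route", "min_arousal": 0.0, "max_arousal": 0.5},  # cognitive
    {"name": "take_cover", "min_arousal": 0.6, "max_arousal": 1.0},  # emotional
    {"name": "scan_area",  "min_arousal": 0.0, "max_arousal": 1.0},  # always on
]

def eligible_rules(arousal):
    """Only rules whose arousal window contains the current level may fire."""
    return [r["name"] for r in RULES
            if r["min_arousal"] <= arousal <= r["max_arousal"]]

print(eligible_rules(0.2))   # prints "['plan_route', 'scan_area']"
print(eligible_rules(0.9))   # prints "['take_cover', 'scan_area']"
```

At low arousal the deliberative rule is available; at high arousal only the reactive, well-learned behavior fires, reproducing the effect described above without any emotion-specific machinery.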
The Soar model emphasizes the effects of emotions on cognition and decision-making, effects considered important and also present in our own model.
3.3 Evolving emotions (PETEEI)
Emotion is a complex process often linked with many other processes, the most
important of which is learning: memory and experience shape the dynamic nature
of the emotional process. PETEEI is a general model for simulating emotions in
agents, with a particular emphasis on incorporating various learning mechanisms
so that it can produce emotions according to its own experience [4]. The model
was also developed with the capability to recognize and react to the various moods
and emotional states of the user.
In order for an agent to simulate a believable emotional experience, it must adapt
its emotional process based on its own experience. The model uses Q-learning with
the reward treated as a probability distribution to learn about events and uses a
heuristic approach to define a pattern and to further define the probability of an
action, a1, to occur given that an action, a0, has occurred [4], thus learning about
the user’s patterns of actions. External feedback was used to learn what actions
are pleasing or displeasing to the user. The application also uses accumulators
to associate objects to emotions (Pavlovian conditioning). The accumulators are
incremented by the repetition and intensity of the object-emotion occurrence.
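The accumulator mechanism can be sketched as follows. This is a hedged illustration in the spirit of PETEEI's conditioning, with an invented update rule and example data, not the system's actual code.

```python
# Sketch of object-emotion conditioning via accumulators: each (object,
# emotion) pair has an accumulator grown by repetition and intensity, and
# the strongest accumulator gives the conditioned association.
from collections import defaultdict

class Conditioning:
    def __init__(self):
        self.acc = defaultdict(float)   # accumulator per (object, emotion) pair

    def observe(self, obj, emotion, intensity):
        self.acc[(obj, emotion)] += intensity

    def associated_emotion(self, obj):
        """Strongest conditioned emotion for an object, if any."""
        pairs = {e: v for (o, e), v in self.acc.items() if o == obj}
        return max(pairs, key=pairs.get) if pairs else None

c = Conditioning()
for _ in range(3):
    c.observe("vacuum_cleaner", "fear", 0.4)   # repeated fearful encounters
c.observe("vacuum_cleaner", "joy", 0.2)
print(c.associated_emotion("vacuum_cleaner"))  # prints "fear"
```

Repetition dominates a single contrary observation, which is exactly the Pavlovian flavor described above.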
An evaluation involving twenty-one subjects indicated that simulating the dynamic
emotional process through learning provides a significantly more believable agent
[4].
The PETEEI study focuses on the importance of learning agents and evolving
emotions. Our memory and emotion architecture is built primarily with these two
aspects in mind.
3.4 EBDI
The EBDI [16] model proposes an extension to the BDI architecture in which
agent behavior is guided not only by beliefs, desires and intentions, but also by
emotions in the processes of reasoning and decision-making. The architecture
is used to establish some properties that agents under the effect of an emotion
should exhibit. The EBDI model describes agents whose behavior is dictated by
interactions between beliefs, desires and intentions (as in the classical BDI architecture), but influenced by an emotional component. The emotional component attempts to impose some of the positive aspects that emotions play in reasoning and decision-making in humans onto the artificial agent paradigm. EBDI introduces
a multi-modal logic for specifying emotional BDI agents.
The logic of EBDI The temporal relation is established over a set of elements,
called situations. Situations are pairs (world, state), where the world refers to a
scenario the agent considers as valid, and the state refers to a particular point
in that scenario. Actions are a labelling of the branching time structure and can
be either atomic (cannot be subdivided into a combination of smaller actions) or
regular (constructions of atomic actions).
The model introduces two emotional driving forces: fear and fundamental desire.
Fear motivates agent behavior, while the notion of fundamental desire plays a role
in properly establishing the notion of fear. The emotions can be triggered either
by an action which will lead to some wanted/unwanted situation or triggered by
believing that such situations may or will inevitably be true in the future.
Artificial Emotion Simulation Techniques for Intelligent Virtual Characters
17
Bounded reasoning The model also introduces the notions of capabilities and resources. Both capabilities and resources are prerequisites for successful action execution.
• Resources - physical or virtual means which may be drawn upon in order to enable the agent to execute actions. If the resources for an action do not exist, the action most likely cannot be executed.
• Capabilities - abstract means that the agent has to change the environment in some way (they resemble abstract plans of action). EBDI considers the set of capabilities as a dynamic set of plans which the agent has available to decide what to do in each execution state.
Fundamental desires are special desires vital to the agent, desires which it cannot fail to achieve under any condition, since failing them may endanger the agent's own existence. Fundamental desires must always hold, and the agent must always do its best to maintain them valid.
Fear is one of the most studied emotions. The triggering of this emotion is usually associated with the fact that one or more fundamental desires are at stake. However, this triggering does not always occur in a similar way in all agents. It is based on these differences that different classes of Emotional-BDI agents can be modelled.
Fearful agents The fearful agent is an agent that always acts in a very self-preserving way. Negative emotions, such as fear, occur when unwanted or uncontrollable facts or events are present in the environment. In its example, EBDI only considers threats and unpleasant facts. Threats are facts or events that directly affect one or more fundamental desires of the agent, putting its self-preservation at stake.
• Dangerous - a threat is dangerous when the agent believes that some condition leads inevitably to the falsity of a fundamental desire, and also believes that it will inevitably be true in the future
• Serious - a threat is serious if the same conditions as for a dangerous one hold, except that the agent believes that it may eventually be true in the future, and not always
• Possible - a threat is possible if the fundamental desire remains at stake, but the agent believes that it may hold only in the future
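The grading above can be sketched as a small classifier. This is a strong simplification: EBDI expresses these distinctions in a multi-modal temporal logic, while the sketch collapses the modalities into two boolean beliefs, and the function name is invented.

```python
# Sketch of EBDI's threat grading, with the temporal modalities collapsed
# into two boolean beliefs: "inevitable" (believed always true in the
# future) and "eventually" (believed possibly true in the future).
def classify_threat(endangers_fundamental_desire, inevitable, eventually):
    if not endangers_fundamental_desire:
        return "no threat"
    if inevitable:
        return "dangerous"
    if eventually:
        return "serious"
    return "possible"

print(classify_threat(True, inevitable=True, eventually=True))    # prints "dangerous"
print(classify_threat(True, inevitable=False, eventually=True))   # prints "serious"
print(classify_threat(True, inevitable=False, eventually=False))  # prints "possible"
```

The grading matters because a fearful agent reacts to each grade with a different processing strategy, as described below.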
Unpleasant facts are facts or events that may put one or more desires in a situation of non-achievement. The agent will attempt to protect its desires.
• Highly Unpleasant - something becomes highly unpleasant if the agent believes that the source of the unpleasantness will always occur in the future and will always jeopardize some desire
• Strongly Unpleasant - something becomes strongly unpleasant if the agent believes that the source of the unpleasantness will eventually occur in the future and will always jeopardize some desire
• Possibly Unpleasant - something becomes possibly unpleasant if the agent believes that the source of the unpleasantness will eventually occur in the future and, should it occur, may jeopardize some desire
The model also defines the following actions, which represent specific behavior
exhibited by the agent under certain conditions.
• Self-preservation - the self-preservation behaviour is activated when the agent fears the failure of some of its fundamental desires. We can see this as an atomic action which mainly reacts to threats in a self-protective way. In EBDI, this special action is represented by spreserv
• Motivated Processing Strategy - this processing strategy is employed by the agent when some desire which directs its behaviour must be maintained but may be at risk. This strategy is computationally intensive, as it should produce complex data structures for preserving desires.
• Direct Access Strategy - this processing strategy relies on the use of fixed preexisting structures/knowledge. It is the simplest strategy and corresponds to a minimization of computational effort and to fast solutions.
A fearful agent is an agent which exhibits a very careful behaviour, considering any threat as a fear factor and considering all uncomfortable events at the same level. Being careful, the agent also acts towards resolving threats and unpleasant facts with the best of its means, and even if the means for the best action are not available, the agent always tries to put itself in a safe, or self-preservation, condition.
3.5 ERIC
ERIC is an affective embodied agent for real-time commentary in many domains [17]. The system is able to reason in real time about events in its environment and can produce natural language and non-verbal behavior based on artificial emotion simulation. Efforts have been made to keep the expert system domain-independent (as a demonstration of this, ERIC was used in two domains: a horse race simulator and a multiplayer tank battle game).
The ERIC framework (Fig. 3.3) allows for the development of an embodied character that commentates on events happening in its environment. The framework is a collection of Java modules that each encapsulate a Jess rule engine. All cognitive tasks are handled within the rule-based expert system paradigm. The ERIC framework also uses template-based natural language generation in order to create a coherent discourse.
Knowledge module The main job of the knowledge module is to elaborate
information perceived from the environment (through the domain interface) into a
world model that is the basis for the agent’s natural language and emotional state
generation. As with all rule-based expert systems, logical inference (by applying axioms) is used on perceived information in order to elaborate the world model. This world model is then sent to the other ERIC modules in order to generate affect and
language. Timestamps are used within the model in order to manage beliefs and
allow ERIC to detect patterns.
Affective appraisal Emotions are used in order to enhance the character's believability as well as to convey information (i.e. excitement means that a decisive action is being performed). ERIC uses ALMA to model affect [18].
There are three features in the ALMA model:
• emotions (short-term) - usually bound to an event, action or object; they decay through time; ERIC uses the OCC model of emotions in order to deduce emotions from stimuli.
• moods (medium-term) - generally more stable than emotions; modelled as an average of emotional states across time; described by a vector in PAD (pleasure, arousal, dominance) space; intensity is equal to the magnitude of
the PAD vector; the character's emotions change its mood according to the pull and push mood change function.
• personality (long-term) - a long-term description of an agent's behavior; modelled along five factors: openness, conscientiousness, extraversion, agreeableness and neuroticism; these traits influence the intensity of the character's emotional interactions and the decay of its emotional intensities.

Figure 3.3: ERIC emotion system
ERIC makes the appraisals that ALMA requires by comparing events, objects and actions to the agent's desires and beliefs. The framework also separates goals
and desires from causal relations. The cause-effect relations allow the model to
appraise events, actions and objects that are not specified as goals or desires (the
following four relations are used: leadsto, hinders, supports and contradicts). The agent also maintains a set of beliefs about other agents' goals and desires to enable it to judge the desirability of an event for other agents in the scenario.
The agent’s affective state can be expressed through hand and body gestures, facial
expression and selection of words and phrasese used to fill in a natural language
template.
In the ERIC model, affective output is generated from the dynamic appraisal of
events, actions and objects related to goals and desires, acting as an indicator of
the agent’s progress and its opinion on events.
3.6 Deep
The DEEP project specifically designed this model to be used in the video game
industry to improve the believability of Non-Player Characters. The emotion
dynamics model takes into account the personality and social relations of the
character. The model focuses on the influence of personality in triggering emotions
and the influence of emotions in inter-character social interactions. Because of the
specific nature of video games, the goal is not to reach an optimal behavior, but
to produce behaviors that are believable from the player’s standpoint [19].
The proposed model (Fig. 3.4) is an adaptation of the OCC model based on the attitudes of NPCs towards actions, objects and other characters in the environment. Each character is defined through its personality, described by a set of traits, and its social roles, which determine its initial social relations with other characters.
Knowledge representation The knowledge representation model used by the DEEP Project as a formal representation of the virtual environment in which the game takes place has been kept purposefully simple in order to be easily handled by designers and developers who do not have a background in artificial intelligence.
Actions are defined as couples (name_a, effect_a). A character is defined by two feature vectors: one of attitudes, which expresses its attitude towards elements in the lexicon (each possible element in the world), and one of praises, a feature vector of attitudes towards possible actions in the environment. Events are simply represented by a quadruplet (agent, action, patient, certainty), where certainty denotes the degree of certainty of the event.
Figure 3.4: Deep emotion system
Emotion representation Emotion representation is based on a simplification of the OCC model, in which 10 emotions represent an agent's emotional experience. Emotions are considered according to their triggering conditions: joy (distress) - desirable (undesirable) event; hope (fear) - expectation of a desirable (undesirable) event; relief (disappointment) - non-occurrence of an expected undesirable (desirable) event; pride (shame) - praiseworthy (blameworthy) action done by the agent; admiration (anger) - praiseworthy (blameworthy) action done by another agent.
An emotional state is represented as a feature vector of values varying in [0,1]. The emotional stimulus of an event is the emotion that an agent should feel after the event, provided that the agent has a neutral personality. The OCC model posits that the intensity of emotions is directly proportional to the desirability/undesirability factor of the event (and its probability of occurrence).
Personality The personality of a virtual character is represented as an n-dimensional feature vector, each value corresponding to a personality trait. The emotion triggered by an event is computed from the emotional stimulus and the personality trait values. When no emotional stimulus occurs, a character's emotion intensity naturally decays towards the neutral state, by a monotonic function that depends on personality. The model also mentions intensity thresholds that define the minimal value for an emotion to have an impact on a character's behavior.
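The intensity computation, decay and threshold described above can be sketched as follows. All constants, names and the particular combination rule are invented for the example; DEEP's actual functions are not specified here.

```python
# Illustrative sketch of a DEEP-style intensity computation: the triggered
# emotion combines the event's emotional stimulus with a personality trait
# weight, is proportional to desirability and certainty, decays toward the
# neutral state over time, and only affects behavior above a threshold.
def triggered_intensity(desirability, certainty, trait_weight):
    return min(1.0, abs(desirability) * certainty * trait_weight)

def decay_step(intensity, personality_decay=0.2):
    """Monotonic decay toward the neutral state; rate depends on personality."""
    return max(0.0, intensity - personality_decay)

THRESHOLD = 0.3   # minimal intensity for the emotion to affect behavior

joy = triggered_intensity(desirability=0.9, certainty=0.8, trait_weight=1.2)
print(round(joy, 3), joy > THRESHOLD)    # prints "0.864 True"
joy = decay_step(decay_step(decay_step(joy)))
print(round(joy, 3), joy > THRESHOLD)    # prints "0.264 False"
```

The same event thus produces different behavioral consequences for characters with different trait weights and decay rates, which is exactly how DEEP differentiates NPCs.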
Social The relationship between two virtual characters i and j is represented as a quadruplet (liking, dominance, familiarity, solidarity): respectively, the degree of liking i has for j at time t; the power that i thinks it can exert on j at time t; the degree of familiarity, i.e. the average degree of confidentiality for i of the information that i has transmitted to j; and the degree of solidarity, representing the degree of like-mindedness, or of having similar behavioral dispositions, that i thinks to have with j. Relationships are not symmetrical.
A social role r_x/r_y is defined by a quadruplet in which each element represents the degree of liking, dominance, familiarity and solidarity that a character with role r_x feels for a character with role r_y. A combination of several roles is computed using an average of the complementary roles.
The model has been implemented in Java as a tool for game designers to simulate the impact of a scenario on non-player characters. The vocabulary of the game, the NPCs' personalities, social roles and scenario events need to be defined.
The DEEP Project motivates the use of emotionally-enhanced intelligent agents as virtual characters in video games, in order to improve the player's feeling of immersion and enjoyment. The model takes into account personality and social relations. The heart of the model is context-independent: it can be applied to any domain that makes use of intelligent agents with an affective level.
This work is part of the DEEP project supported by RIAM (Innovation and research network on audiovisual and multimedia).
3.7 iCat
iCat is a system developed in order to study how emotions and expressive behavior in interactive characters can be used to help users better understand the game state in educational games. The emotion model is mainly influenced by the current game state and is based on the emotivector anticipatory mechanism. The hypothesis is that if a virtual character's affective behavior reflects the game state, users will be able to better perceive the game state [20].
Architecture The architecture is composed of three modules:
• The game module is made up of the game played by the user and the virtual
character. The game module is the main input for the emotion module.
• The emotion module receives evaluations from the game module, processes them and updates the agent's emotional state.
• The animation module animates the character by setting up a character position or running the character through predetermined animation sequences. The predefined sequences are used to display emotional reactions (each one of the 9 elements in the emotivector is mapped to a different animation). Mood is reflected as a linear interpolation between the two parametrizations corresponding to the two extremes of the valence space.
Emotion The affective state is represented by two components: emotional reactions and mood. Emotional reactions are the immediate emotional feedback received by the character. In order to endow the agent with emotional reactions, the iCat uses the emotivector anticipatory system. The system contains a predictive model of itself and its environment that generates an affective signal resulting from the mismatch between the expected and received values of the sensor. This means that the system is either rewarded or punished if events do not go according to predictions, and this will show up through the displayed animation.
Mood is represented as a valence variable that varies between a minimum and a maximum value. The mood magnitude represents the mood intensity. Positive values are associated with good scores, and negative ones with bad scores. Mood works like a background affective state, when other emotions are not occurring. Mood decays towards zero over time in the absence of new stimuli. Upon receiving a new stimulus, the adjustment is gradual until the desired value is reached.
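The mismatch signal and the gradual mood adjustment can be sketched as follows. The tolerance, step and decay constants are assumptions made for the example; the actual emotivector distinguishes 9 sensations rather than the three labels used here.

```python
# Sketch of the emotivector idea: an affective signal is derived from the
# mismatch between the predicted and the actually sensed value, and mood
# gradually tracks the valence of recent outcomes, decaying toward zero
# in the absence of stimuli.
def emotivector_signal(expected, received):
    """Positive surprise rewards, negative surprise punishes."""
    mismatch = received - expected
    if abs(mismatch) < 0.05:
        return "as expected"
    return "reward" if mismatch > 0 else "punishment"

def update_mood(mood, valence, step=0.1, decay=0.05):
    """Mood drifts toward the new valence, and toward zero absent stimuli."""
    if valence is not None:
        mood += step * (valence - mood)
    elif mood > 0:
        mood = max(0.0, mood - decay)
    else:
        mood = min(0.0, mood + decay)
    return mood

print(emotivector_signal(expected=0.7, received=0.4))   # prints "punishment"
mood = update_mood(0.0, valence=1.0)                    # good game state
print(round(mood, 2))                                   # prints "0.1"
```

Each signal is then mapped to one of the predefined animations, so the displayed reaction directly reflects how the game state compares to the robot's expectations.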
Evaluation The architecture was implemented in a scenario composed of a social robot, the iCat, which acts as the opponent of a user in a chess game. A preliminary experiment was performed, where the iCat played chess against a human player. The results (correlated across a group where the iCat showed the proper emotions, a group where it showed wrong ones randomly, and a control group where it showed no emotions) indicate that users were able to correctly interpret the iCat's emotional behaviour and that the correlation between the user's own analysis of the game and the actual game state variables is also higher in the emotional condition.
The results of a preliminary evaluation suggested that the emotional behaviour
embedded in the character indeed helped the users to have a better perception of
the game [20].
3.8 Comparison
In this section we present a comparison of the emotion simulation models presented above, with respect to features we consider important.
• Knowledge whether the model influences agent knowledge
• Perception whether the model influences agent perception
• Behavior whether the model influences agent behavior
• Motivation if emotions are used as agent motivation within the model
• Personality if the model is able to generate different types of characters,
characters that would act differently, given the same virtual world state
• Theory how complex the theory is
• Dev how high the design, development and implementation effort would be
for someone to use the system
• Adoption how easily a system would be adopted by the game industry
• Graphics does the model have a way for artificial characters to express their
emotions?
• Emotion what emotion representation model the system uses
• Synthesis what emotion synthesis model the system uses
• Expandability how expandable the model is
• Scalability how scalable the model is
• Flexibility how flexible the model is
• Evolving if the emotion process and behavior are adapted based on the agent's experience
As can be seen in Table 3.1, most systems use the OCC model for emotion expression and synthesis, which is difficult to learn and implement; this can be deduced from the fact that its authors adapted it by reducing the number of emotions from 21 to 10, and from the effort expended to formalize it [21–23]. We can also see from the table that, for most models, either the emotion process or the behavior is not adapted according to the agent's experience, and that none of them use emotion as a driving force for a character's actions.

[Table 3.1: Emotion simulation model comparison. The table compares Soar, PETEEI, EBDI, DEEP, iCat and ERIC on the criteria above: knowledge, perception, behavior, motivation, personality, theory and development complexity, adoption, graphics, emotion representation and synthesis, expandability, scalability, flexibility and evolving behavior. Most systems use OCC for emotion representation and synthesis; Soar uses a pleasure/pain representation and the iCat its own model.]
We have taken it upon ourselves to design and develop a fast, light-weight, easily understood and easily adopted emotion representation model; the system necessary for its integration into an artificial agent architecture that uses emotion simulation as motivation for the virtual character's actions; and the underlying framework for its integration into our chosen demo application type.
Chapter 4

Artificial emotion simulation techniques for virtual characters
In this chapter we present our two approaches to providing virtual characters with complex, rich, believable emotion-influenced behavior. We start with the emotion-driven behavior engine, a full belief-desire-intention architecture that accomplishes the task but which we discovered to be inadequate for the task at hand. We then explain how we improved upon this solution and solved its issues with our second approach, the Newtonian Emotion System.
4.1 Emotion-driven behavior engine
In this section we first present our emotion representation scheme and its behavior, and then elaborate on how emotions are used within, and influence/manage, knowledge recall, the knowledge base hierarchy, attention span and the decision process. The model is based on the Cannon-Bard theory of emotion: cognitive appraisal is necessary for emotion synthesis, and emotions are first felt and then acted upon. The model also implements the presented effects of emotion on memory, and may be expanded according to the perceptual theory, under which emotions also provide information about the subject and the environment.
Figure 4.1: Architecture

4.1.1 Architecture
The following section presents the proposed agent architecture along with an expanded view of the inference engine module. Facts and rules are fed into the
inference engine according to their associated emotion and its intensity, by way of
knowledge indexing. A diagram of the architecture is shown in Fig. 4.1.
4.1.1.1 Inference engine
The inference engine (Fig. 4.2) is made up of a backward chaining engine and a
forward chaining engine. These produce a plan and agenda, respectively, which
are merged later according to action priority based on the associated emotion.
Inference engine rules are made up of antecedents, which are expressions that need to be true, and consequents, which are expressions that will be true after the rule is fired. Expressions may contain unbound variables, which will be bound upon matching a knowledge item.

Figure 4.2: Inference engine
We will be using the knowledge base in table 4.1 to illustrate the two techniques.
Forward chaining Forward chaining is a data-driven inference method. The
engine hungrily awaits new knowledge in order to apply rules that match existing
data.
Forward chaining starts with the available data and uses inference rules to extract
more data (from an end user for example) until a goal is reached. An inference
engine using forward chaining searches the inference rules until it finds one where
the antecedent (If clause) is known to be true. When one is found, the engine can conclude, or infer, the consequent, resulting in the addition of new information to its data.

Goals   Q
Rules   P ⟹ Q
        L ∧ M ⟹ P
        B ∧ L ⟹ M
        A ∧ P ⟹ L
        A ∧ B ⟹ L
Facts   A, B

Table 4.1: Knowledge example

Figure 4.3: Forward chaining
Forward chaining (Fig. 4.3) inference engines will iterate through this process until a goal is reached. This seems an ineffective way to go about solving a problem, especially when we have a large number of rules and facts and only a few paths that lead to goal states. Backward chaining seems to be the better problem solver, and indeed it is; however, backward chaining does have a weakness: it never infers data that is not explicitly goal-related, even if somewhere down the inference path it may lead to goal-related data or, worse, conflicting data. In essence, we are using the forward chaining part of our inference engine to gather knowledge about the complex, dynamic environment, where multiple aspects of the world may be changing at any given moment. When run, a forward-chaining algorithm presents a conflict set of rules to apply (all rules that match current knowledge). Ways of deciding on application order will be discussed in a further section. At times, our forward-chaining inference engine may infer contradictory data; we need a way of detecting and dealing with such situations. The best approach is a truth maintenance system. Also, given that rule-based systems may have a large number of rules working with a large knowledge base, we need a
fast pattern-matching algorithm (pattern matching may account for as much as 80% to 90% of the effort of a forward-chaining rule-based system). A good approach to this is the RETE pattern-matching algorithm, as it interfaces well with a truth maintenance system, as well as with our backward-chaining-generated plan.

Facts               Rule
A, B                A ∧ B ⟹ L
A, B, L             B ∧ L ⟹ M
A, B, L, M          L ∧ M ⟹ P
A, B, L, M, P       P ⟹ Q
A, B, L, M, P, Q

Table 4.2: Forward chaining process
The forward chaining inference engine's purpose is to reason about the world, gather new information and make the assumptions necessary to keep the belief (knowledge) base consistent; rules that contain primitive operators that take actions in the world should not be here, although we will keep the architecture flexible and allow for them; a separation mechanism will be required between the two. The end product of the forward chaining inference cycle is the addition and retraction of rule instances to and from an agenda of rules to be applied.
Example In our example (Table 4.1, Fig. 4.3), the inference would start out
from what is available in the knowledge base and apply rules until it reached the
goal. Table 4.2 shows the steps the system would go through.
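The forward-chaining loop over this example can be sketched in a few lines of Python (a minimal illustration of the technique, not the engine implemented in the thesis; the rule and fact names follow Table 4.1):

```python
# Minimal forward-chaining sketch over the Table 4.1 example.
# Facts are atoms; a rule fires when all its antecedents are known facts.

def forward_chain(facts, rules, goal):
    """Apply rules until the goal is derived or no rule can fire."""
    facts = set(facts)
    changed = True
    while goal not in facts and changed:
        changed = False
        for antecedents, consequent in rules:
            if set(antecedents) <= facts and consequent not in facts:
                facts.add(consequent)   # infer new data
                changed = True
                break                   # restart the scan with updated facts
    return facts

rules = [
    (("P",), "Q"),
    (("L", "M"), "P"),
    (("B", "L"), "M"),
    (("A", "P"), "L"),
    (("A", "B"), "L"),
]

print(forward_chain({"A", "B"}, rules, "Q"))
# derives L, M, P and finally the goal Q, in the order shown in Table 4.2
```
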
When rules are added to the rule base, a RETE network (Fig. 4.4) is built for each one. The nodes that make up the network, and their functions, are:

Alpha An alpha memory is created for each expression found in a rule's antecedent. This node creates a list of bound instances: instances of the initial expression that have matched an expression in working memory and in which all variables have been bound accordingly. Upon creation, an alpha memory also creates a beta memory.

Beta A beta memory makes the connection between an alpha memory and another beta memory. Its purpose is to join the expressions found in all the alpha memories of this particular rule and make sure that all variables with the same name are bound to the same value.
Figure 4.4: RETE Network
Delta A delta memory is a rule instance. It contains a list of bound consequents
(based on the bound antecedents) that need to be added, removed or updated
when this rule instance is fired.
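The alpha/beta matching idea can be illustrated with a toy binding-and-join sketch (purely illustrative; a real RETE network caches partial matches in the alpha and beta memories instead of recomputing them, and the fact names here are borrowed from the listing in Section 4.1.3):

```python
# Toy illustration of alpha-style matching with variable binding and a
# beta-style join; each final binding set corresponds to one rule
# instance (a delta node).

def match(pattern, fact, bindings):
    """Try to extend `bindings` so that `pattern` matches `fact`.
    Variables are strings starting with '?'. Returns None on failure."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in new and new[p] != f:
                return None          # inconsistent with an earlier binding
            new[p] = f
        elif p != f:
            return None
    return new

def join(patterns, facts):
    """Beta-style join: find all binding sets satisfying every pattern."""
    results = [{}]
    for pattern in patterns:
        next_results = []
        for bindings in results:
            for fact in facts:
                b = match(pattern, fact, bindings)
                if b is not None:
                    next_results.append(b)
        results = next_results
    return results

facts = [("Gork", "InSight", "Mork"), ("Mork", "InSight", "Tork")]
print(join([("?orc1", "InSight", "?orc2")], facts))
# two binding sets, one rule instance per visible pair
```
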
Backward chaining Backward chaining (Fig. 4.5) is a lazy inference method: it only does as much work as it has to. Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if there is data available that will support any of these consequents. An inference engine using backward chaining searches the inference rules until it finds one whose consequent matches a desired goal. If the antecedent of that rule is not known to be true, then it is added to the list of goals (in order for one's goal to be confirmed, one must also provide data that confirms this new rule). This makes backward chaining ideal for building an at least partially ordered plan to follow in order to achieve the agent's goals (partially ordered because we may have to accomplish disjoint goals, as well as several goals concurrently with no specified order, in order to confirm a rule), dynamically decomposing goals into subgoals recursively until the agent selects among primitive operators that perform actions in the world.
Figure 4.5: Backward chaining
The backward chaining part of our inference engine accomplishes our agent's goal-oriented planning, resulting in the composition of a plan: a sequence of partially-ordered goals to pursue in the future.
Goals Each goal is represented through an expression of the form (subject, predicate, object). Any of these may be a variable, which makes the goal possibly match an infinite number of facts and thus renders it impossible to fully accomplish; the inference engine will nevertheless work towards accomplishing all possibly matching expressions. The backward chaining network is built by taking all goals and finding rules that have matching patterns as consequences. The bound antecedents of the rule are then added as subgoals of this goal, and the process is repeated, each goal recursively creating a hierarchy of subgoals possibly necessary to accomplish it. This graph is created up to a maximum depth given as an algorithm parameter. The algorithm then searches the agenda for any rule instances that contain any of the subgoals as a consequence and adds them to the global plan based on their priority. The plan is then run and rules are executed according to priority and partial plan order.
Example Table 4.3 shows how the backward chaining process starts out by adding new goals based on the rules it knows, and how these goals are fulfilled as new facts are inferred.
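A minimal backward-chaining sketch over the same Table 4.1 example (illustrative only; the depth limit plays the role of the maxDepth parameter, and the thesis engine additionally builds a partially ordered plan rather than a yes/no proof):

```python
# Minimal backward-chaining sketch over the Table 4.1 example:
# prove a goal by recursively proving the antecedents of a matching rule.

def backward_chain(goal, facts, rules, depth=0, max_depth=10):
    """Return True if `goal` can be derived from `facts` using `rules`."""
    if goal in facts:
        return True
    if depth >= max_depth:
        return False
    for antecedents, consequent in rules:
        if consequent == goal:
            # every antecedent becomes a subgoal
            if all(backward_chain(a, facts, rules, depth + 1, max_depth)
                   for a in antecedents):
                return True
    return False

rules = [
    (("P",), "Q"),
    (("L", "M"), "P"),
    (("B", "L"), "M"),
    (("A", "P"), "L"),
    (("A", "B"), "L"),
]

print(backward_chain("Q", {"A", "B"}, rules))  # True
```
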
Facts               Rule            Goals
A, B                                Q
A, B                P ⟹ Q           P, Q
A, B                L ∧ M ⟹ P       L, M, P, Q
A, B                B ∧ L ⟹ M       L, M, P, Q
A, B, L             A ∧ B ⟹ L       M, P, Q
A, B, L, M          B ∧ L ⟹ M       P, Q
A, B, L, M, P       L ∧ M ⟹ P       Q
A, B, L, M, P, Q    P ⟹ Q

Table 4.3: Backward chaining process

4.1.1.2 Truth maintenance engine

A truth maintenance system is used to maintain a consistent set of beliefs about the world. It is based on the following principles:

• each action in the problem-solving process has an associated justification;

• when a contradiction is obtained, find the minimal set of assumptions that generated the contradiction, select an element from this set and defeat it; the justification for the contradiction is no longer valid and the contradiction is removed;

• propagate the effects of adding a justification and of eliminating a belief (keep consistency).
Figure 4.6: Truth maintenance system
A truth maintenance system (Fig. 4.6) keeps beliefs as a network of two types of nodes: beliefs and justifications. A belief is an expression which can be true or false. A belief node contains a label (IN if the belief is considered true, OUT otherwise), a list of justifications for the node, and a list of the justifications the node is part of (along with the label it must have in order for each justification to be valid). Justification nodes contain the following information: the inference type of the justification (in our case premise (always IN) or modus ponens (inferred)), a list of nodes the justification justifies and a list of nodes (and necessary associated values) that participated in the inference. This representation allows the propagation of consequences through the belief network of an agent. A truth maintenance system is easily integrated with the RETE pattern-matching algorithm: nodes that are IN will be considered true and present in the RETE network, while nodes that are OUT will be considered false and absent from it; nodes that change state from OUT to IN will be added, and nodes that change state from IN to OUT will be retracted from the RETE network.

As can be seen, the RETE network and a TMS are easy to interface with each other, since they operate on different areas of a rule system (working memory vs. rule base).
Truth maintenance systems are not usually used alongside backward chaining inference engines because the inference engine rarely changes a node's state; however, should this occur, the change is fed into the TMS by the inference engine and its effects are propagated through the network, just as in the forward chaining case.
When facts are inserted into the knowledge base by the inference engine, a justification network is created (or updated if it already exists) for the fact. The
network is made up of two types of nodes, structured as follows:
Fact nodes
• label - node state: IN or OUT.
• type - type of node: PREMISE, JUSTIFIED, UNJUSTIFIED, CONTRADICTION.
• justifications - a list of justifications for this fact.
• consecIn - list of justifications in which the node takes part and appears in
the IN list.
• consecOut - list of justifications in which the node takes part and appears in the OUT list.
Justification nodes
• type - justification type: PREMISE, OBSERVATION, RULE, UNJUSTIFIED.
• label - node label that this justification justifies.
• consequence - the node this justification justifies.
• inList - list of nodes that participated in the inference and that have to be
IN for this justification to be valid.
• outList - list of nodes that participated in the inference and that have to be
OUT for this justification to be valid.
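The two node structures can be sketched as follows (field names follow the text; the propagation shown is a simplified recomputation rather than the incremental SetLabel algorithm of Alg. 11):

```python
# Sketch of the fact/justification node structures described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Justification:
    kind: str                       # PREMISE, OBSERVATION, RULE, UNJUSTIFIED
    in_list: List["FactNode"] = field(default_factory=list)
    out_list: List["FactNode"] = field(default_factory=list)

    def valid(self):
        """Valid iff every in_list node is IN and every out_list node is OUT."""
        return (all(n.label == "IN" for n in self.in_list)
                and all(n.label == "OUT" for n in self.out_list))

@dataclass
class FactNode:
    expr: str
    label: str = "OUT"              # IN if believed true, OUT otherwise
    justifications: List[Justification] = field(default_factory=list)

    def recompute(self):
        """A node is IN iff at least one of its justifications is valid."""
        self.label = "IN" if any(j.valid() for j in self.justifications) else "OUT"

# premise A justifies belief L through a rule
a = FactNode("A", label="IN", justifications=[Justification("PREMISE")])
l = FactNode("L")
l.justifications.append(Justification("RULE", in_list=[a]))
l.recompute()
print(l.label)  # IN — retracting A and recomputing would flip it to OUT
```
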
4.1.1.3 Algorithms
This system has been implemented during an internship at the Serious Games
Institute, based in Coventry, United Kingdom, as the artificial intelligence system
behind non-player character behavior in an educational fire simulation game called
ALICE.
This section contains the algorithms used to drive the agent’s behavior: truth
maintenance, inference and planning processes, as well as perception. The algorithms are presented in a logical top-down fashion.
The first algorithm (Alg. 1) forms the core of our framework. This is where all
other algorithms are called from, and performs one behavior cycle for the agent.
Algorithm 1 Agent.Run(Plan p)
rulesTriggeredThisTurn ← {}
Forward.Update() {see Alg. 2}
p ← {}
for all g ∈ goals do
  Update(g, p) {see Alg. 7}
end for
stepsTaken ← 0
while p ≠ ∅ and stepsTaken < maxSteps do
  δ ← Pop(p)
  if ¬triggered(δ) then
    Trigger(δ) {see Alg. 6}
    stepsTaken++
    Forward.Update() {see Alg. 2}
    p ← {}
    for all g ∈ goals do
      Update(g, p) {see Alg. 7}
    end for
  end if
end while
The forward inference engine attempts to fire as many rule instances as it can (Alg. 3), subject to availability and to restrictions imposed by algorithm parameters.
Algorithm 2 Forward.Update()
stepsTaken ← 0
while perceptions ≠ ∅ and stepsTaken < maxSteps do
  p ← Pop(perceptions)
  TMSEngine.Evaluate(p) {see Alg. 12}
  stepsTaken++
  p.Influence()
end while
for all r ∈ rules do
  if Repeatable(r) then
    Reactivate(r)
  end if
  agenda ← agenda ∪ agenda_r
  agenda_r ← ∅
end for
Algorithm 3 Forward.Run()
Update() {see Alg. 2}
stepsTaken ← 0
while agenda ≠ ∅ and stepsTaken < maxSteps do
  δ ← Pop(agenda)
  if ¬triggered(δ) then
    Trigger(δ) {see Alg. 6}
    stepsTaken++
    Update() {see Alg. 2}
  end if
end while
The RETE algorithm is a fast pattern-matching algorithm that compiles the left-hand side of the production rules into a discrimination network (in the form of an augmented dataflow network). Changes to working memory are input to and propagated down the network, and the network reports changes to the conflict set of rules to be executed (adding new rule instances that have activated, and retracting old ones that are no longer valid, from the program's agenda). Our backward-chaining-generated plan may benefit from this compilation of the network as well, since we may identify goals in the plan that have already been instanced in the agenda.
We use the RETE algorithm on initial agent load (Alg. 4), in order to scan the knowledge base for any matching facts and build the discrimination network, as well as later on, to add new facts to the already created network (Alg. 5). In both cases, new rule instances are created and added to the agenda where appropriate.
Algorithm 4 RETE(IIndividual m)
Require: IIndividual m
Ensure: add m to appropriate RETE networks
for all r ∈ rules do
  for all α ∈ r do
    for all p ∈ properties_m do
      for all v ∈ m[p] do
        e ← newExpr(m, p, v)
        if Valid(e) then
          ok ← true
          for all γ ∈ join_βα do
            if BindingsDoNotCorrespond(γ, e) then
              ok ← false
            end if
          end for
          if ok then
            γ.Add(e)
            δ ← CreateRuleInstance(γ)
            agenda ← agenda ∪ δ
          end if
        end if
      end for
    end for
  end for
end for
Artificial Emotion Simulation Techniques for Intelligent Virtual Characters
40
Algorithm 5 RETE(TMSNode node)
Require: TMSNode node
Ensure: adds node to appropriate RETE networks
for all r ∈ rules do
  for all α ∈ r do
    if Match(node, expr_α) then
      if label_node = label_expr_α then
        join_βα.Add(expr_node)
      else
        join_βα.Remove(expr_node)
      end if
    end if
  end for
end for
The trigger algorithm (Alg. 6) shows the steps taken when a rule instance is fired.
Algorithm 6 Trigger(δ)
Require: rule instance δ
Ensure: triggers rule instance δ
if ¬triggered(δ) then
  triggered_δ ← true
  if goal_δ then
    Influence(δ, ε_goal · φ_goal) {see Alg. 9}
  end if
  for all expr ∈ δ do
    Evaluate(expr, δ) {see Alg. 12}
  end for
  rulesTriggeredThisTurn ← rulesTriggeredThisTurn ∪ δ
end if
An important part of the agent's logic are the plan revision (Alg. 7) and plan pursuit (Alg. 8) algorithms. After all possible plan steps are added to the plan (which, like the event queue, uses a sorted insertion list), the most important among them (according to the associated emotion) is triggered, and knowledge, events and the plan are then revised.
Algorithm 7 Update(Goal G, Plan p)
Require: Goal G
Ensure: update G's subgoals
if accomplished(G) then
  return
end if
for all r ∈ relatedRules do {rules that contain a form of this goal as a consequent}
  for all a ∈ antecedent_r do
    g ← new Goal(a) {bind all necessary variables so that g would lead to G and g still matches a}
    ε_g ← ε_G · φ_goal−subgoal
    subgoals_G ← subgoals_G ∪ {g}
  end for
end for
for all g ∈ subgoals_G do
  if depth_g < maxDepth then
    Update(g, p)
  end if
end for
Run(G, p) {see Alg. 8}
Algorithm 8 Run(Goal G, Plan p)
Require: Goal G, Plan p
Ensure: adds expressions that lead to G to the global plan
if depth > maxDepth then
  return
end if
for all δ ∈ agenda do
  if Match(G, δ) then
    p ← p ∪ {δ}
  end if
end for
for all g ∈ subgoals_G do
  Run(g, p)
end for
Another important module within our framework deals with how emotion feedback
is received through events and how this feedback is distributed throughout the
system, according to framework parameters.
Algorithm 9 Emotion influence on rule instances
Require: rule instance δ, emotion ε
Ensure: propagation of emotion influence through the system (learning)
ε_δ += ε · φ_event−δ
for all expr ∈ γ_δ do
  ε_rule_expr += ε · φ_event−δ · φ_δ−rule
  ε_expr += ε · φ_event−δ · φ_δ−expr
  for all σ ∈ expr do
    ε_σ += ε · φ_event−δ · φ_δ−expr · φ_expr−σ
  end for
end for
for all g ∈ goals_δ do
  ε_g += ε · φ_event−δ · φ_δ−goal
end for
Algorithm 10 Event influence
Require: Event event
Ensure: propagation of emotion influence through the system (learning)
∆ ← {set of rule instances triggered this inference cycle}
for all δ ∈ ∆ do
  Influence(δ, ε_event) {see Alg. 9}
end for
Last, but not least, we have the algorithms responsible for truth maintenance (Alg. 11) and expression evaluation (Alg. 12). These are called when a truth maintenance node changes its label (Alg. 11) and when an expression needs to be evaluated (Alg. 12), i.e. when a rule is triggered or an event is processed.
Algorithm 11 SetLabel(TMSNode node, NodeLabel value)
Require: TMSNode node, NodeLabel value
Ensure: truth maintenance when a node changes
if label_node ≠ value then
  label_node ← value
  for all α ∈ alphaList_node do
    for all j ∈ just_node do
      if Match(expr_j, expr_α) then
        Match(α, node)
      end if
    end for
  end for
end if
if value = IN then
  for all j ∈ consecIn do
    if Valid(j) and (label_j ≠ label_conseq_j) then
      if ¬∃ higher priority justification ∈ just_conseq_j then
        label_conseq_j ← label_j
      end if
    end if
  end for
  for all j ∈ consecOut do
    if ¬Valid(j) and ∃ just_conseq_j then
      h ← GetHighestPriorityValidJustification(conseq_j)
      if ∃ h then
        label_conseq_j ← label_h
      else
        Unjustified(conseq_j)
      end if
    end if
  end for
else if value = OUT then
  for all j ∈ consecIn do
    if ¬Valid(j) and ∃ just_conseq_j then
      h ← GetHighestPriorityValidJustification(conseq_j)
      if ∃ h then
        label_conseq_j ← label_h
      else
        Unjustified(conseq_j)
      end if
    end if
  end for
  for all j ∈ consecOut do
    if ¬Valid(j) and label_j ≠ label_conseq_j then
      if ¬∃ higher priority justification then
        label_conseq_j ← label_j
      end if
    end if
  end for
end if
Algorithm 12 TMSEngine.Evaluate(Expression arg)
Require: Expression arg
Ensure: changes to TMS based on arg
expr ← EvalConsequences(arg)
if ¬CheckRestrictions(expr) then
  return
end if
node ← newNode(expr)
just ← newJust(node, rule)
val ← subject_expr[property_expr]
if ¬val then
  subject_expr[property_expr] ← {node}
else
  n ← Find(val, node)
  if ∃ n then
    if ¬betterJustified_n then
      SetLabel(n, label_node) {see Alg. 11}
      n.Add(just)
    else
      val.Add(node)
      engine.Match(node)
    end if
  end if
end if
return node
4.1.2 Emotion simulation
Increased interest in emotions has revealed another aspect of human intelligence, showing that emotions have a major impact on the performance of many human tasks, including decision-making, planning, communication and behavior ([5]). We intend to use our emotion simulation model to achieve, in artificial agents, the same effects that emotions have on human cognition: acting as a catalyst for brain activity, providing motivation, directing attention and behavior, and establishing the importance of surrounding events. We therefore follow a human cognitive emotion theory.
According to Lazarus ([7]), an emotion is a three-stage process:
Artificial Emotion Simulation Techniques for Intelligent Virtual Characters
45
1. cognitive appraisal - the subject assesses the event cognitively

2. physiological change - biological changes occur as a reaction to the cognitive appraisal (increased heart rate, adrenal response)

3. action - the individual feels the emotion and chooses how to react
This is the model we will be using in our implementation in order to appraise
events and apply operators:
1. triggering event

2. cognitive assessment of the event and the associated emotional state

3. modification of the emotional state and physiological response (if any)

4. an action or series of actions is inserted into the action queue
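The four-step loop can be sketched as a simple event-appraisal pipeline (illustrative only; the appraisal table, event names and reactions are invented for this example):

```python
# Illustrative event-appraisal loop: appraise an event, shift the
# emotional state toward the appraised emotion, then queue a reaction.
# The appraisal table and the actions are invented for this sketch.

APPRAISALS = {            # event -> (emotion, intensity, reaction)
    "enemy_spotted": ("fear", 0.6, "flee"),
    "treasure_found": ("joy", 0.4, "celebrate"),
}

def process_event(event, state, action_queue):
    emotion, intensity, reaction = APPRAISALS[event]      # step 2: appraisal
    state[emotion] = state.get(emotion, 0.0) + intensity  # step 3: state change
    action_queue.append(reaction)                         # step 4: queue actions
    return state

state, queue = {}, []
process_event("enemy_spotted", state, queue)
print(state, queue)  # {'fear': 0.6} ['flee']
```
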
We represent an emotion in a way similar to how primary colors combine: primary emotions can blend to form the full spectrum of human emotional experience. For example, interpersonal anger and disgust can blend to form contempt. An emotion is encoded as a series of eight weights, one for each basic emotion category of Plutchik's taxonomy. Each weight represents the influence that emotion has on the current state ([24]).

The sum of emotion weights in our representation always equals one. Enforcing this condition when certain emotions are modified by external stimuli allows gradual shifts from one dominant emotion to another, thereby avoiding sudden emotion changes (such as going from extreme anger (rage) to joy; the renormalization would increase joy and decrease anger in order to keep the sum of emotion weights equal to one; however, the dominant emotion would not become joy unless the stimulus were extremely powerful). The dominant (main) emotion is the emotion, or combination of two adjacent emotions, with the greatest weight / combined weights. Predisposition towards simple or complex emotions is left as the subject of future studies. For now, we will assume that characters / humans prefer simpler emotions as opposed to more complex ones.
Initially the agent is in a neutral, indifferent state, called the baseline state, where all emotion weights are equal to each other and equal to 0.125 (1/8). This representation was chosen because it provides an emotional state to relate to, and because it is simpler to compute the effects of events on the emotional state than in a percentage approach, where it is possible for the sum of percentages to be lower than 100%. Emotion intensity (of either one of the eight basic emotions or one of the eight compound emotions) is calculated relative to this baseline, using the following formula:

    I = (Σ E_r / n_r) · ϕ

where E_r is the value of the weight that belongs to each basic emotion contributing to the emotion being evaluated, minus the baseline weight, as modified by internal and external events based on the agent's preferences and expectations, as in the OCC model ([2]); n_r is the number of relevant emotions; and ϕ is a normalization factor that scales the result into the [0, 1] range (ϕ = 1/(1 − 0.125) = 1.142857).
The emotion with the highest intensity according to the above formula is chosen
as the dominant (main) emotional state. If no clear emotion is dominant (most
weights are within ε of each other), the character is considered to be in the baseline
state.
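A sketch of the weight-vector representation, the renormalization after a stimulus and the intensity formula above (illustrative; the stimulus handling is simplified to a single weight increase followed by renormalization):

```python
# Sketch of the eight-weight emotion vector: renormalization after a
# stimulus, and the intensity formula relative to the 0.125 baseline.

BASE = ["joy", "trust", "fear", "surprise",
        "sadness", "disgust", "anger", "anticipation"]   # Plutchik's basics
BASELINE = 1.0 / 8                                       # 0.125
PHI = 1.0 / (1.0 - BASELINE)                             # ≈ 1.142857

def stimulate(weights, emotion, amount):
    """Increase one weight, then renormalize so the weights sum to one."""
    weights = dict(weights)
    weights[emotion] += amount
    total = sum(weights.values())
    return {e: w / total for e, w in weights.items()}

def intensity(weights, relevant):
    """Mean of (weight - baseline) over the relevant emotions, scaled by phi."""
    return sum(weights[e] - BASELINE for e in relevant) / len(relevant) * PHI

state = {e: BASELINE for e in BASE}       # neutral baseline state
state = stimulate(state, "fear", 0.5)
print(round(intensity(state, ["fear"]), 3))  # 0.333
```
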
We also need a way of compositing emotions and computing the influence of one emotion on another (the resulting emotion). A weighted average works best.

Compositing emotions according to factors φ1 and φ2:

    e1 ∘ e2 = (φ1 · e1 + φ2 · e2) / (φ1 + φ2)    (4.1)

If we want both emotions to have the same influence, then φ = φ1 = φ2:

    e1 ∘ e2 = (φ · e1 + φ · e2) / (2 · φ) = (e1 + e2) / 2    (4.2)

If we want one emotion e2 to have an influence θ ≤ 1 on another emotion e1:

    e1 ∘ e2 = ((1 − θ) · e1 + θ · e2) / ((1 − θ) + θ) = (1 − θ) · e1 + θ · e2    (4.3)

Computing emotion similarity:

    similarity(e1, e2) = −Σ_{b∈e} (b_{e1} − b_{e2})²    (4.4)
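Equations (4.1)-(4.4) translate directly into code; a sketch over dictionaries of weights (two-emotion vectors are used for brevity):

```python
# Sketch of equations (4.1)-(4.4): weighted-average compositing and
# negative-squared-distance similarity over emotion weight vectors.

def compose(e1, e2, phi1=1.0, phi2=1.0):
    """Weighted average of two emotion vectors (Eq. 4.1)."""
    return {b: (phi1 * e1[b] + phi2 * e2[b]) / (phi1 + phi2) for b in e1}

def influence(e1, e2, theta):
    """e2 influences e1 with weight theta <= 1 (Eq. 4.3)."""
    return {b: (1 - theta) * e1[b] + theta * e2[b] for b in e1}

def similarity(e1, e2):
    """Negative sum of squared weight differences (Eq. 4.4); 0 = identical."""
    return -sum((e1[b] - e2[b]) ** 2 for b in e1)

joy   = {"joy": 1.0, "anger": 0.0}      # two-emotion vectors for brevity
anger = {"joy": 0.0, "anger": 1.0}
print(compose(joy, anger))       # equal influence, as in Eq. 4.2
print(similarity(joy, anger))    # -2.0
```
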
The weight vector representation is very well suited to graphical representation: we can have one graphical representation for each base emotion and use the emotion vector weights to interpolate between them. The resulting representation may in turn be interpolated with animation sequences such as speaking or breathing. We may also vary activity levels by speeding up or slowing down the animation according to emotion intensity. The baseline state yields the agent's base movement speed, which is increased or decreased according to emotion intensity ([24]).
4.1.3 Knowledge representation
In order to enable the artificial character to learn about its environment and user, it will be equipped with an event history that records, in chronological order, external events (environment changes, other agents' actions) and internal events (such as adding new knowledge or initiating an action), together with the associated main emotion and its intensity (the emotion that the agent is feeling in regard to the event). For the time being, we will use a Q-learning based approach to learn about events and a heuristic approach to learn about other agents' actions ([4]).
The agent's knowledge is organized in an inheritance-based hierarchical manner, according to patterns and characteristics of items and other agents that it determines and considers important (the agent builds its ontology automatically); all objects have a common root node, similar to a top-level concept, which expands into subcategories which in turn branch out into new subcategories, based on observed common features. Individuals are represented as leaves in the knowledge tree. An individual object / agent (or a subcategory) may be absorbed into its parent category (while retaining information about which category an individual knowledge object belongs to) based on its frequency of use, time of last access and emotional response intensity (agents are more likely to learn, retain for longer periods of time and sometimes improve knowledge on subjects with a strong emotional load). This prunes useless, unimportant or infrequent knowledge while still retaining some information on how to react to stimuli and how to behave towards other agents (based on the characteristics of the group they belong to). The agent will also be able to repeat knowledge it believes important in the long term to itself, in order to increase its access frequency (similar to human learning).
The occurrence of events, and knowledge in general, are represented as lists of symbols in a manner similar to CLIPS or LISP. The following listing gives an example of a language used to specify knowledge structure, rules and data.
? self = Gork

classes {
    class Humanoid
    class Orc extends Humanoid
        Move2 : 4
    class Goblin extends Humanoid , Orc
}

properties {
    property Move : Humanoid -> Int
    property Move2 extends Move : Humanoid -> Int
    property WeaponSkill : Orc -> Int
    property Ballistics : Humanoid -> Int
    property Hatred : Humanoid -> Humanoid
    property InSight : Humanoid -> Humanoid
    property InRange : Humanoid -> Humanoid
    property Wealth : Humanoid -> Int
    property Health : Humanoid -> Int
}

individuals {
    individual Gork : Orc
        IN Move2 = 4            // property = value
        IN WeaponSkill = 3
        OUT Loveliness = 4
    individual Mork : Orc
        IN Move2 = 4
        IN WeaponSkill = 4
    individual Tork : Orc
        IN Move2 = 5
        IN WeaponSkill = 3
}

goals {
    goal Survival IN ? self Health 20   // survival
    goal Greed IN ? self Wealth 20      // greed
}

actions {
    action MoveTo ? orc                 // action header
        IN ? self InRange ? orc         // expr / consequence
    action PickPocket ? orc             // method to evaluate
        IN ? self Wealth +10            // consequences
}

rules {
    rule InRange
        IN ? orc1 InSight ? orc2
        ==>
        assert ? orc1 InRange ? orc2
    rule PickPocket
        IN ? self InRange ? orc2
        ==>
        action PickPocket ? orc2
}
For example, rule InRange adds an InRange knowledge item into working memory when its conditions are fulfilled. The PickPocket rule attempts a PickPocket action, in which case the following packet would be sent to all characters witnessing the action: (?self PickPocket ?target).
It has been determined that humans are more likely to remember bits of knowledge in emotional contexts similar to those in which they learned or used said knowledge in the past. In order to replicate this mechanic in artificial agents, we propose a system of indexing knowledge based on dominant emotion (including secondary emotions) and emotion intensity, similar to a hash table, because there is a high probability for the same emotional context to be triggered in similar situations, which may be solved by similar types of reasoning. For this purpose, both knowledge and rules (in rule-based systems, for example) will have corresponding emotion types and intensities, which will be updated every time the rule / knowledge item is used. These will be indexed in a table according to main emotion and sorted according to emotion intensity. When knowledge is required, the memory elements corresponding to the state's main emotion and intensity are retrieved first, and the search is then expanded around this point in the index table.
Attention narrowing In fast-changing environments an agent may not have
enough resources to react to every event that occurs, so a way of prioritizing
significant events is needed. This may be achieved through attention narrowing.
Events are stored in an event queue and sorted according to the emotional arousal
associated with each event, ensuring that more important events
are processed first. Emotion preference is a domain-specific problem; for example,
fear (usually signaling danger) may be considered more important than disgust,
and therefore should be processed with a higher priority. Events will be queued
according to emotion intensity.
Decision process We do not propose a specific mechanic that influences
the decision-making process; however, knowledge retrieval and event handling are
influenced through attention narrowing and knowledge indexing, thereby determining
rule order in the agent's agenda or plan. It should be clear that the agent
adapts its behavior according to the dominant emotion and its context. For
example, if its current state is fear, it will make efforts (introduce operators
in the agenda or formulate plans, depending on the particularities of the agent)
to eliminate the threat or get away from the situation. It will also make efforts
in the future (if able) to avoid the circumstances that led to the current state.
As another example, if something causes the agent joy, the agent will be more
likely to fire operators or formulate plans that lead to the joyous occurrence.
The artificial agent will be able to learn about its environment (and other agents)
through the emotions and emotion intensities associated to its knowledge items
(facts and rules).
Ontology The agent's knowledge is organized in an inheritance-based hierarchy
called an ontology. We use an ontology because we want to be able to share a
common understanding of the structure of information among people or software
agents, and to separate domain knowledge from operational knowledge.
Learning about the environment, human users and other agents is key to producing
believable behavior in artificial characters ([4]). Each class and individual in the
ontology has an associated emotion vector and intensity, which allows the agent
to learn how to interact with a certain individual, as well as with a class of
individuals, as these associated emotions are propagated up the ontology hierarchy.
Rules and knowledge used in certain situations receive a fraction of the agent's
current emotional state. This also travels upward through the inference chain and
the ontology, modifying rules and superclasses in order to help the agent cope with
new, unknown individuals. For example, if the agent mainly encounters hostile
characters (with whom it associates fear/anger), the emotion associated with the
Character:Other class in the ontology, which is an average over all the individuals
in that class, will be skewed to reflect this, and the agent will act accordingly
with newly encountered agents. Fig. 4.7 shows an example ontology.
The emotion e associated with an ontological class is

    e = (1 / n_subclasses) · Σ e_subclass    (4.5)
Figure 4.7: Agent’s world model
or, if an individual's or a subclass's emotion has been modified by an emotion
vector e′, we modify its superclasses

    e_superclass = e_superclass ∘ e′(θ)    (4.6)

where θ = 1 / n_individuals is used as the influence factor in the emotion
composition formula.
In this way, we can also take a look at the emotion associated with the top-level
root node in the ontology (Thing) in order to get an idea of the agent’s feelings
about its environment / universe.
Initially, a new individual's associated emotion will be

    e_individual = (1 / n_superclasses) · Σ e_superclass    (4.7)
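The three propagation equations above can be sketched as follows. This is a minimal Python illustration assuming emotion vectors are plain lists; since the emotion composition operator ∘ is defined elsewhere in the model, it is approximated here by a θ-weighted vector addition, which is an assumption of this sketch.

```python
def class_emotion(subclass_emotions):
    # Eq. 4.5: the emotion of a class is the component-wise mean
    # of its subclasses' emotion vectors.
    n = len(subclass_emotions)
    dim = len(subclass_emotions[0])
    return [sum(vec[i] for vec in subclass_emotions) / n for i in range(dim)]

def update_superclass(e_super, e_prime, n_individuals):
    # Eq. 4.6: a modifying emotion vector e' is blended into the superclass
    # with influence factor theta = 1 / n_individuals (theta-weighted
    # addition stands in here for the composition operator).
    theta = 1.0 / n_individuals
    return [s + theta * p for s, p in zip(e_super, e_prime)]

def new_individual_emotion(superclass_emotions):
    # Eq. 4.7: a new individual starts from the mean of its superclasses.
    return class_emotion(superclass_emotions)
```

Because class emotions are averages over their members, inspecting the root node of the hierarchy summarizes the agent's overall feelings about its environment, as noted below.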
Knowledge indexing Emotions have a big impact on memory. Many studies
have shown that the most vivid memories tend to be of emotionally charged events,
which are recalled more often and with greater detail than neutral events. This
can be linked to human evolution, when survival depended on behavioral patterns
developed through a process of trial and error that was reinforced through life
and death decisions. Some theories state that this process of learning became
embedded in what is known as instinct.
A similar system may be used to decide the importance and relevance of events
and knowledge, and to manage memory elements in artificial agents. One method
of indexing knowledge according to associated emotional intensity and dominant
emotion is to use a stack of matrices as follows (we assume that emotion
intensity is more important than the type of dominant emotion; however, the
method works just as well the other way around): we divide emotion intensity into
intervals and use this as the main classifier (the emotion level). On each emotion
level there is an 8x8 matrix, one cell for each basic emotion combination (a
dominant emotion may be a single basic emotion or a combination of basic emotions),
where each cell contains a list of knowledge items. Assigning a knowledge item
to a cell is straightforward: we compute the level according to emotion intensity
and then assign the item to a cell based on the dominant emotion. Fig. 4.8
illustrates this system.
Let us take the following example: we have 10 emotion intensity levels in our
emotion matrix stack and a knowledge item with the emotion vector
[0.5 0.5 0 0 0 0 0 0] attached. The emotion intensity equals
((0.5 + 0.5)/2) · φ = 0.5714285, which places it in the [0.5, 0.6] interval,
i.e. the 6th level. On this level, we add the new knowledge item to the list in
the (Joy, Trust) cell (the dominant emotion is Love). Knowledge retrieval works
the same way, except that we retrieve the list of knowledge items addressed by
the agent's current state. Rules are retrieved in a similar manner.
Figure 4.8: Knowledge indexing
4.1.4 Effects on cognitive processes
It has been shown that emotions establish the importance of events around us, and
influence important cognitive processes such as memory, perception and decision
making. In this chapter we describe how the theories presented in section 2.3 are
applied within the model.
4.1.4.1 Perception management
It has been discovered in humans that an increase in arousal levels leads to attention narrowing ([12]). Attention narrowing is a decrease in the range of stimuli
from the environment that the subject is sensitive to, in other words, perception
will be focused on more arousing events and details first and these will be processed sooner than non-arousing details and events [12]. This means that highly
emotional events are more likely to be processed when attention is limited, leading experts to attest that emotional information is processed according to priority
([13]).
In fast-changing environments an agent may not have enough resources to react
to every event that occurs, so a way of prioritizing significant events is
needed. This may be achieved through attention narrowing. Events are stored in
an event queue and sorted according to the emotional arousal associated with
each event, ensuring that more important events are processed first. Emotion
preference is a domain-specific problem; for example, fear (usually signaling
danger) may be considered more important than disgust, and therefore should be
processed with a higher priority. Events will be queued according to emotion
intensity. Let's
assume for example that the agent running the labyrinth does so in a turn-based
manner, and has a limited number of actions each turn. It will not waste its time
searching for traps if there are more serious threats, such as hostile creatures,
around.
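The event queue described above can be sketched with a heap, so that a resource-bounded agent only pops its few most arousing events each turn; the event names and arousal values here are hypothetical:

```python
import heapq

def narrow_attention(events, capacity):
    """Keep only the `capacity` most arousing events for this turn.
    `events` is a list of (arousal, description) pairs; higher arousal
    means higher processing priority."""
    return heapq.nlargest(capacity, events)  # sorted by descending arousal

# A turn-based labyrinth runner with two actions per turn deals with the
# hostile creature and the strange noise before searching for traps.
turn_events = [(0.2, "possible trap"),
               (0.9, "hostile creature"),
               (0.5, "strange noise")]
```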
4.1.4.2 Memory management
Humans are more likely to remember bits of information, and to exhibit similar
behavior and reasoning, in emotional contexts similar to those experienced in
the past. This can be used as a way of recalling context-appropriate facts and
rules in artificial agents and may speed up the reasoning process, as it does
in humans.
We have devised a way of indexing facts and rules based on dominant emotion and
emotion intensity similar to a hash-table. Both facts and rules have an associated
emotion and intensity, which will be updated by a fraction of the agent’s emotional
state every time they are used to infer new data. These will be indexed in a table
according to main emotion and sorted according to emotion intensity value (each
cell in the table is a fact/rule).
At each new inference stage, relevant-rule and relevant-fact lists are created,
starting from the table cell that most closely matches the agent's current state
and expanding outwards. These lists are then used to create an operator
activation list, whose operators are triggered in the order in which they are
inferred.
The main differences between this model and other emotion simulation approaches
are the perception and memory management mechanics that allow emotions to
indirectly influence the agent's decision and planning processes. Another
difference from other approaches is that in this model emotions serve a purpose,
and are not merely a goal in and of themselves.
This architecture, while providing an interesting insight into the effects of emotion on cognitive processes such as memory, perception, reasoning, learning and
decision-making, and being suitable for use within artificial constructs, has some
weaknesses in practice.
First of all, its greatest weakness is the fact that it is a complex system. It
has many parameters that a designer needs to understand before making use of the
system, while the interactions between emotional states and their results are
not always intuitive. This, coupled with the fact that the system is not
adaptable or easily extendable (making use of techniques that may not be a
developer's first choice), would lead to a low adoption rate within academia
and industry.
However, based on this early prototype we have developed the Newtonian Emotion
System, a lightweight, scalable, extendable and adaptable model presented in the
following chapter.
4.2 Newtonian Emotion System (NES)
This chapter presents our new and improved emotion simulation technique, still
based on Plutchik's emotion taxonomy. However, the emotion representation
scheme has been streamlined, the emotion feature being reduced to a
four-dimensional vector, and emotion interactions follow Newtonian interaction
laws that are easier to learn and understand. The architecture has also been
reduced to the bare minimum necessary for emotion simulation and expression,
while still influencing character behavior and memory in the same way. This is
because, while we believe that the best way to produce human-like behavior is to
attempt to reproduce thought patterns, game developers, for example, tend to
favor finite-state-machine-based solutions that are very fast and reduce
development complexity. The new architecture allows for any behavior generation
technique (belief-desire-intention, expert systems, behavior trees, finite state
machines or even backtracking) and any machine learning algorithm (SVM, linear
regression, clustering), in essence allowing designers to develop a system that
suits them best while incorporating emotion simulation.
4.2.1 Newtonian emotion space
In this section we introduce the Newtonian Emotion Space, where we define concepts that allow emotional states to interact with each other and external factors,
as well as the two laws that govern the interaction between an emotional influence
and an emotional state.
Concepts

Definition 4.1. Position specifies the intersection of an emotional state with
each of the four axes.

Definition 4.2. Distance (‖p₂ − p₁‖) measures the distance between two
emotional positions.

Definition 4.3. Velocity (v = (p₂ − p₁)/t) represents the magnitude and
direction of an emotion's change of position within the emotion space over a
unit of time.

Definition 4.4. Acceleration (a = (v₂ − v₁)/t) represents the magnitude and
direction of an emotion's change of velocity within the emotion space over a
unit of time.

Definition 4.5. Mass represents an emotional state's tendency to maintain a
constant velocity unless acted upon by an external force; a quantitative measure
of an emotional object's resistance to the change of its velocity.

Definition 4.6. Force (F = m · a) is an external influence that causes an
emotional state to undergo a change in direction and/or velocity.
Laws of Emotion Dynamics The following are two laws that form the basis
of emotion dynamics, to be used in order to explain and investigate the variance
of emotional states within the emotional space. They describe the relationship
between the forces acting on an emotional state and its motion due to those forces.
They are analogous to Newton’s first two laws of motion. The emotional states
upon which these laws are applied (as in Newton’s theory) are idealized as particles
(the size of the body is considered small in relation to the distance involved and
the deformation and rotation of the body are of no importance in the analysis).
Theorem 4.7. The velocity of an emotional state remains constant unless it is
acted upon by an external force.

Theorem 4.8. The acceleration a of a body is parallel and directly proportional
to the net force F and inversely proportional to the mass m: F = m · a.
Emotion center and gravity The emotion space has a center, the agent's
neutral state: a point in space toward which all of the agent's emotional states
tend to gravitate. This is usually (0,0,0,0); however, different characters
might be predisposed to certain kinds of emotions, so we should be able to
specify a different emotional center, and thereby instill a certain disposition,
for each character. We also use a different emotion state mass to express how
easy or hard it is to change an agent's mood. In order to represent this
tendency we use a gravitational force:

    G = m · kg · (p − c) / ‖p − c‖,

where p is the current position, c is the center and kg is a gravitational
constant. This force ensures that, unless acted upon by other external forces,
the emotional state will decay towards the emotion space center.
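The gravitational decay and the two laws can be sketched as follows; this is a minimal Python illustration, with m · kg taken as 1 and the direction of the gravity vector chosen so that the state is pulled toward the center c, consistent with the worked gravity values used later in the impact tables:

```python
import math

def gravity(p, c, m=1.0, kg=1.0):
    # Pull of the emotional state at position p toward the neutral center c.
    diff = [ci - pi for pi, ci in zip(p, c)]
    dist = math.sqrt(sum(d * d for d in diff))
    if dist == 0.0:
        return [0.0] * len(p)            # already at the neutral state
    return [m * kg * d / dist for d in diff]

def step(p, v, force, m, dt=1.0):
    # Second law: a = F / m; then integrate velocity and position.
    a = [f / m for f in force]
    v = [vi + ai * dt for vi, ai in zip(v, a)]
    p = [pi + vi * dt for pi, vi in zip(p, v)]
    return p, v
```

For the state [.3, .5, -.1, -.1] with center (0,0,0,0) this yields the gravity vector [-0.5, -0.83, 0.16, 0.16] (as truncated in Tables 4.4-4.5).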
4.2.2 Plug-and-play architecture
We developed an emotion subsystem architecture (Fig. 4.9) that interposes itself
between the agent’s behavior module and the environment. The subsystem models
the process described by Lazarus.
Events perceived from the environment are first processed by the appraisal
module, where an emotional force is associated with each of them. The resulting
list of events is then sorted in descending order according to magnitude and fed
into the agent's perception module. This is done in accordance with the
psychological theory of attention narrowing, so that in a limited attention span
scenario the agent will be aware of the more meaningful events.
We treat the agent behavior module as a black box that may house any behavior
generation technique (rule based expert system, behavior trees, etc.). The interface
receives as input events perceived from the environment and produces a set of
possible actions. The conflict set module takes this set of rules and evaluates
them in order to attach an emotional state to each. Out of these, the action
whose emotion vector most closely matches the agent’s current state is selected to
be carried out.
    arg min_{i ∈ conflict set} arccos( (e_agent · e_i) / (‖e_agent‖ · ‖e_i‖) )
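This selection rule can be sketched as below; a minimal Python illustration in which the candidate actions and their emotion vectors are hypothetical:

```python
import math

def emotional_angle(e1, e2):
    # arccos of the cosine similarity between two emotion vectors
    dot = sum(a * b for a, b in zip(e1, e2))
    norm = math.sqrt(sum(a * a for a in e1)) * math.sqrt(sum(b * b for b in e2))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def select_action(agent_state, conflict_set):
    # conflict_set: list of (action, emotion_vector) pairs; pick the
    # action whose emotion vector is closest in angle to the agent state
    return min(conflict_set,
               key=lambda pair: emotional_angle(agent_state, pair[1]))[0]
```

For instance, a fearful agent (positive fear component) would select a fearful "flee" option over an aggressive "attack" option, since its vector points in nearly the same direction as the agent's state.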
Feedback from the environment has an associated emotion force vector that
actually affects the agent's emotional state. Feedback is distributed to the
appraisal and conflict set modules, as well as to the agent's behavior module.
Actions undertaken are temporarily stored in an action buffer for feedback
purposes.
Figure 4.9: Emotion system architecture
4.2.3 NES Learning
Both appraisal modules (percept and action appraisal) use machine learning
techniques in order to learn to predict the outcome of events and actions,
respectively. As there are many viable options for the learning technique
(hidden Markov models, neural nets, Q-learning), we use a plug-and-play model
in which the technique can be replaced.
The appraisal module attempts to predict the emotional feedback received from
the environment based on characteristics of the event to classify and previous
feedback. The goal of the appraisal module is to better label (and thus establish
priority of) events for the perception model. The conflict set module works in a
similar way based on the characteristics of actions taken. The goal of the conflict
set module is to settle conflicts between competing actions by selecting the most
appropriate action to be performed. The action that has the greatest Gain − Risk
ratio will be chosen.
Both modules treat the event (or rule, respectively) configuration as the
observable state of the model and attempt to predict the next hidden state (the
emotional feedback force).
Newtonian emotion system interaction is a three-stage process. Each inference
turn, the behavior module generates a list of possible actions within the given
context for the virtual character. Each action is then appraised by the action
appraisal module (a wrapper interface for a machine learning technique that
attempts to learn the feedback function). At this point the value is an absolute
value, which needs to be related to the agent's current state. This is done by
evaluating the impact of an emotional force on the agent's current state. Based
on this impact factor, the actions to be carried out are selected and executed.
Once executed, these actions receive emotional feedback from the environment,
and the appraisal module that evaluated them is updated with the new values.
Let us take the following example: character A's behavior module generates, as
the next action, an attack on character B. A simplified feature vector for the
action is built, encoding information about the current context (feelings about
the action, feelings about the target and the action direction). From previous
experience, the agent knows that the action would raise its aggression level
(feelings about the attack action are on the anger and anticipation axes:
[0,0,-.9,-.3]); we also assume that the agent is somewhat afraid of the
character it is attacking [0,0,.7,0], and that it is performing the action
itself, so the action direction is 1. Based on this feature vector, the
appraisal module predicts an emotional feedback dominant on the anger and
anticipation axes of [0,0,-.2,-.2]. We assume this action has a satisfactory
Gain/Risk ratio and gets selected and executed. We also assume that it succeeds.
According to the environment feedback formula we used, the result would be
[0,0,-.2,-.3], which means that the estimation was close but could be better.
The feedback result is added to the learning module data set, and the results
are nudged in the direction of [0,0,-.2,-.3] − [0,0,-.2,-.2] = [0,0,0,-.1] for a
better future prediction. The perception appraisal module works in a similar way
to appraise perceived actions; however, the percepts are sorted according to a
factor based on their distance to the agent's current emotional state (minimum)
or their distance to the average emotional state of the events perceived this
turn (maximum). This is so that an agent will prioritize events that it
resonates with or that stand out.
Validation In order to validate the learning module, we randomly generated
feature vectors with valid values, used the feedback formula to label them, and
separated them into learning and validation data sets. These data sets were used
to validate a linear regression learning model. The results can be seen in the
following graphs, which show the error between training and validation related
to the number of training examples. A slight oscillation can be seen in
convergence due to the fact that some information may be missing in certain
contexts.
Figure 4.10: Joy learning curve
Figure 4.11: Trust learning curve
Figure 4.12: Fear learning curve
Figure 4.13: Surprise learning curve
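The validation procedure can be sketched as below; a self-contained Python illustration in which a plain gradient-descent linear regressor stands in for the learning module, and the labeling function is an assumed linear feedback formula, not the one used in the thesis:

```python
import random

def make_dataset(n, w_true):
    # random feature vectors labeled by the assumed linear feedback formula
    data = []
    for _ in range(n):
        x = [random.uniform(-1.0, 1.0) for _ in w_true]
        y = sum(wi * xi for wi, xi in zip(w_true, x))
        data.append((x, y))
    return data

def fit(data, lr=0.1, epochs=100):
    # stochastic gradient descent on squared error (no intercept term)
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def mse(w, data):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in data) / len(data)

random.seed(0)
w_true = [0.4, -0.2, 0.3, 0.1]     # hypothetical feedback coefficients
train = make_dataset(200, w_true)
valid = make_dataset(50, w_true)
model = fit(train)
# validation error shrinks toward zero as training data grows
```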
4.2.4 Personality filter
The personality filter allows designers to skew an agent's perception of events.
It defines a bias in interpreting emotional forces when applied to feedback
received from the environment. The personality filter consists of a
four-dimensional vector (each element matching an axis of the Newtonian Emotion
representation scheme) with values between -1 and 1 that scales the agent's
emotional feedback according to personal criteria. For example, an agent may
feel fear more acutely than joy, and as such its personality filter should
reflect this, having a lower scaling factor for joy than for fear. Another agent
may be extremely slow to anger; this would be reflected in the personality
filter as a very low scaling factor on the anger axis. It would even be possible
to change the sign altogether, so that some agents may feel anger instead of
fear or joy instead of sadness (leaving room for the representation of
psychologically damaged individuals). The feedback applied to the agent's state
would then be:

    Feedback′ = Filter · Feedback    (4.8)
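Eq. 4.8 is a component-wise product, which can be sketched in a few lines of Python; the filter values below are hypothetical, with the axis order [joy, trust, fear, surprise]:

```python
def apply_filter(personality, feedback):
    # Eq. 4.8: component-wise scaling of raw feedback by the filter vector
    return [p * f for p, f in zip(personality, feedback)]

# An agent that feels fear more acutely than joy:
fear_prone = [0.25, 1.0, 1.0, 1.0]
```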
Together with the emotional center and the emotion mass, these are all the
parameters that a character designer needs to interact with in order to generate
varied virtual characters capable of rich, unique, nuanced emotion-driven
behavior. The system would thus be easy to learn and use in the industry, thanks
to the small number of parameters that influence it and to the intuitive nature
of these parameters and their interactions within the model.
4.2.5 Emotion as motivation
As mentioned before, the emotional feedback that an agent receives from the
environment influences its behavior. This is achieved by selecting an action from
the conflict set provided by the behavior model according to the feedback that the
agent predicts from the environment.
In order to evaluate its effect on the agent's current state, we put forth the
notion of the tensor product between two states, because of the information it
gives about the influence that one axis has on another. This product has an
interpretation as the stress produced by an emotional force on the current
emotional state, showing how significant the force is relative to the current
state of the agent. This is called the emotional impact, and the tensor product
is computed between the two forces acting on the agent: the gravity force,
keeping the agent in its current state, and the learned action force (the
estimated feedback).
    Stress = [Feedback] · [Gravity]
           = (f_joy, f_trust, f_fear, f_surprise)ᵀ · (g_joy, g_trust, g_fear, g_surprise)    (4.9)
This allows the assessment of the influence that each emotion axis of the action
has on those of the current state. In line with Lazarus' appraisal theory,
emotional evaluation implies reasoning about the significance of the event, as
well as determining the ability to cope with the event. We therefore define two
metrics for assessing emotional experiences: Well-being (Joy and Trust) and
Danger (Fear and Surprise). Well-being measures the effect the action has on the
positive axes (Joy-Trust) of the current state, thus establishing the gain
implied by the action;
while Danger describes the risk, as it evaluates the action in terms of its influence
on the Fear-Surprise axes.
When choosing an action, an agent estimates how much its current well-being is
influenced by the action relative to how much danger the action implies: a
gain-risk ratio. In terms of the tensor product, this involves a column-wise
computation with respect to the Well-being axes of the agent's current state
(for Gain) and a row-wise computation with respect to the Danger axes of the
action (for Risk).
    Gain = f_joy · (g_joy + g_trust) + f_trust · (g_joy + g_trust)
         + f_fear · (g_joy + g_trust) + f_surprise · (g_joy + g_trust)    (4.10)

    Risk = f_fear · (g_joy + g_trust + g_fear + g_surprise)
         + f_surprise · (g_joy + g_trust + g_fear + g_surprise)    (4.11)
Thus the Impact factor can be used to assign priorities to all actions in order
to resolve the conflict set.

    Impact = Gain − Risk    (4.12)
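Eqs. 4.10-4.12 can be sketched directly in Python (axis order [joy, trust, fear, surprise]); note that every feedback component in Eq. 4.10 multiplies the same Well-being sum, so Gain factors into the sum of the feedback components times (g_joy + g_trust):

```python
def impact(feedback, gravity):
    f_joy, f_trust, f_fear, f_surprise = feedback
    wellbeing = gravity[0] + gravity[1]                         # Joy-Trust columns
    gain = (f_joy + f_trust + f_fear + f_surprise) * wellbeing  # eq. 4.10
    risk = (f_fear + f_surprise) * sum(gravity)                 # eq. 4.11
    return gain - risk                                          # eq. 4.12
```

For the aggression feedback [0, 0, -0.6, -0.6] against gravity [-0.5, -0.8, 0.16, 0.16] this gives Gain = 1.56, Risk = 1.176 and Impact ≈ 0.38, matching the corresponding row of Table 4.4.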
Let's assume that a character's current state is [.3,.5,-.1,-.1] (gravity =
[-.5,-.83,.16,.16]). We will further assume that the agent finds itself in a
conflictual situation surrounded by other agents. We will also assume that the
behavior module only generates options to attack the other characters, and that
the appraisal module predicts the correct feedback. In the table, the possible
feedback options are various degrees of

• fear and surprise (alarm) - the attack fails, but was not expected to

• anger and surprise (outrage) - the attack succeeds, but was not expected to

• anger and anticipation (aggression) - the attack succeeds and was expected to
According to our feedback formula, we have the following impact factors:
    Gravity                    Feedback          Gain    Risk    Impact
    [-0.5 -0.83 0.16 0.16]     [0 0 0.2 0.3]     0.4     0.3     0.10
    [-0.5 -0.83 0.16 0.16]     [0 0 -0.6 -0.6]   1.56    1.176   0.38
    [-0.5 -0.83 0.16 0.16]     [0 0 -0.4 0.3]    0.13    0.098   0.03
    [-0.5 -0.83 0.16 0.16]     [0 0 -1 -1]       2.6     1.96    0.64
    [-0.5 -0.83 0.16 0.16]     [0 0 1 0.6]       -2.08   -1.568  -0.51
    [-0.5 -0.83 0.16 0.16]     [0 0 1 0.3]       -1.69   -1.274  -0.42

    Table 4.4: Attack impact table
    Gravity                   Feedback       Gain    Risk   Impact
    [-0.5 -0.8 0.16 0.16]     [0.1 0 0 0]    -0.13   0      0.13
    [-0.5 -0.8 0.16 0.16]     [0.2 0 0 0]    -0.26   0      0.26
    [-0.5 -0.8 0.16 0.16]     [0.3 0 0 0]    -0.39   0      0.39
    [-0.5 -0.8 0.16 0.16]     [0.4 0 0 0]    -0.52   0      0.52
    [-0.5 -0.8 0.16 0.16]     [0.5 0 0 0]    -0.65   0      0.65
    [-0.5 -0.8 0.16 0.16]     [0.6 0 0 0]    -0.78   0      0.78

    Table 4.5: Take impact table
In Table 4.4 we can see that the agent will prefer the two options that increase
its aggression, will be neutral towards attempting to attack an agent that would
surprise it, and will be reluctant to attack when it might fail.
As a further example, let us assume that the agent is in the same state and that
it is surrounded by objects that it can pick up, that the only actions generated
are actions to pick up objects, and that the joy felt when picking up an object
is equal to value(object) / (value(object) + wealth(agent)).
As we can see in the second table (Table 4.5), for varying item values we get
different priorities. The agent would prefer to pick up the most valuable item
first.
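The pickup example can be reproduced in a few lines of Python; the item names and values are hypothetical:

```python
def pickup_joy(value, wealth):
    # joy felt when picking up an object, as defined above:
    # value(object) / (value(object) + wealth(agent))
    return value / (value + wealth)

def best_pickup(items, wealth):
    # the conflict set resolves to the most valuable (most joyful) item
    return max(items, key=lambda item: pickup_joy(item[1], wealth))[0]

loot = [("copper coin", 1), ("silver ring", 5), ("gold idol", 20)]
```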
Chapter 5

Emotion integration in game design
In this chapter we present how the two architectures (the Emotion-Driven
Character Behavior Engine and the Newtonian Emotion System) were integrated to
provide non-player character behavior in two games: a fire evacuation scenario
serious game named ALICE, meant to teach preteen students proper fire evacuation
procedures, and a demo game that showcases the capabilities of the NES, called
OrcWorld.
5.1 Artificial intelligence in virtual environments
Research into virtual environments on the one hand and artificial intelligence on
the other has mainly been carried out by two different groups of people, but some
convergence is now apparent between the two fields. One of the great selling
points of virtual environments is their immersion factor, therefore researchers are
seeking to progress beyond visually compelling but essentially empty environments
to incorporate other aspects of virtual reality that require intelligent behavior.
Virtual characters acting in an intelligent manner based on their goals and desires
as opposed to standing still, waiting around for the user and interacting based on
pre-scripted dialog trees brings a great deal of depth to the experience.
Evolving virtual worlds are far more interesting than static ones. Applications
in which activity takes place independently of the user, such as populating a
virtual world with crowds, traffic, and virtual humans and non-humans, are made
possible by integrating behaviour and intelligence modeling with artificial
intelligence and virtual reality tools and techniques.
This combination of intelligent techniques and tools, embodied in autonomous
creatures and agents, together with effective means for their graphical representation and interaction of various kinds, has given rise to a new area at their meeting
point, called intelligent virtual environments [1].
5.1.1 Application domains
Games A video game is an electronic game that involves interaction with a user
interface to generate visual feedback on a visual device. Video games have used
virtual environments from the start. The popular video game Tron inspired the
movie of the same name, in which a computer programmer is transposed into the
virtual world of Tron. The most popular virtual environments to date belong to
video games (World of Warcraft, Second Life, The Sims). The video game industry
has also been using artificial intelligence techniques, so in essence the field
of intelligent virtual environments has existed for decades. However, video
games usually limit themselves to brute force or very time-efficient techniques,
and it is our opinion that the video game industry would have much to gain from
the finesse offered by more refined artificial intelligence techniques.
Serious Games Another field that has recently gained momentum is the field
of serious games. More and more companies and state agencies are using serious
games to train and educate employees or the general public, respectively. A serious game is a game designed for a primary purpose other than entertainment,
such as education or training. Although they can be entertaining, serious games
are designed in order to solve a problem and may deliberately trade fun and entertainment in order to make a serious point.
There are many advantages to using serious games in education. First of all, computer game developers are accustomed to developing games that simulate functional entities quickly and cheaply by using existing infrastructure. Secondly, game
developers are experienced at making games fun and engaging, automatically injecting entertainment and playability into their applications, features that serious
games, while meant to train or educate the user, could also benefit from.
Artificial Emotion Simulation Techniques for Intelligent Virtual Characters
68
One artificial intelligence paradigm has been used extensively in the field of training and education, namely rule-based expert systems. These have expert knowledge in their domain of expertise and are able to create plans, explain actions and
even reason non-monotonically [25].
Affective gaming has received much attention lately, as the gaming community
recognizes the importance of emotion in the development of engaging games. Affect
plays a key role in the user experience, both in entertainment and in serious
games developed for education, training, assessment, therapy or rehabilitation.
A significant effort is being devoted to generate affective behaviours in the game
characters, player and AI-controlled avatars, in order to enhance their realism and
believability [26].
Games Emotion has become increasingly established, as the following examples
illustrate: in The Sims, characters' actions are influenced by an emotive state,
which results from their personality and events occurring during the game [27]; in
Fahrenheit, players are able to influence the emotional state of characters indirectly
[28]. There is also an increased interest in affective storytelling, creating stories
interactively.
E-Learning Education has always seemed a natural application domain for artificial intelligence and virtual reality technologies, because visual representation
and social interaction play a large part in learning and human memory. A few
applications are worth mentioning: in the Cosmo system, an artificial agent
used emotional means to achieve teaching goals; ITSPOKE [29] tried to register
students' emotional states and use them to tailor spoken tutorial dialogues
appropriately; FearNot used emotion-driven virtual characters in order to teach
lessons about bullying [30].
The growing need for communication, visualization and organization technologies
in the field of e-learning environments has led to the application of virtual reality
and the use of collaborative virtual environments [31]. Virtual reality, together
with emotion-capable artificial cognitive agents can highly broaden the types of
interaction between students and virtual tutors. As in conventional learning tasks,
the computer can monitor students, responding to questions and offering advice
in a friendly manner, while keeping the lesson's pace comfortable for the student.
Artificial agents are able to fulfill several valuable roles in a (virtual) e-learning
environment: act as a virtual tutor and explain lessons, give help, advice and feedback to students; act as a student's learning peer and/or participate in simulations,
especially useful in team training scenarios; an artificial agent may also be used
(known or unknown to the student) in order to monitor progress and personalize
the student's learning experience. An artificial agent present in a virtual environment should possess the following capabilities: demonstrate and explain tasks,
monitor students and provide assistance when it is needed, provide feedback to
the student, explain the reasoning behind its actions, adapt (a plan)
to unexpected events, and be able to fill the roles of extras [31].
A virtual tutor needs a way of storing and manipulating domain-specific
knowledge in order to be able to explain to, and correct, students,
and to justify its actions. In practice, the best way to build such a system is by using
a rule-based system together with a justification-based truth maintenance system.
Steve [25] is an animated pedagogical agent for procedural training in virtual
environments that teaches students how to perform procedural tasks. Steve is
able to demonstrate and explain tasks, and monitor students performing tasks,
providing assistance when it is needed, by making use of expert knowledge of the
domain stored in the form of hierarchical plans that contain a set of steps (the steps
required to complete an action, in order for Steve to demonstrate and explain), a
set of ordering constraints (so that Steve is able to provide assistance or intervene
at any point during a student’s performance of a task), and a set of causal links
(that allow Steve to explain a certain action).
5.1.2
Techniques
In this section we present and discuss applications of well-known artificial intelligence paradigms and techniques in virtual environment applications, with a focus
on games, serious games and e-learning.
Neural networks An artificial neural network, usually called neural network,
is a mathematical model or computational model that is inspired by the structure
and/or functional aspects of biological neural networks. A neural network consists
of an interconnected group of artificial neurons, and it processes information using
a connectionist approach to computation.
In other words, a neural network is an interconnected assembly of simple processing
elements, units or nodes whose functionality is loosely based on the biological
neuron. The processing ability of the network is stored in the inter-unit connection
strengths or weights obtained by a process of adaptation to or learning from a set
of training patterns [32].
In most cases an ANN is an adaptive system that changes its structure based
on external or internal information that flows through the network during the
learning phase. Modern neural networks are non-linear statistical data modeling
tools. They are usually used to model complex relationships between inputs and
outputs or to find patterns in data.
Artificial neuron design is based on the initial model developed by McCulloch and
Pitts, where each neuron has several inputs (synapses, x) and one output (axon,
y). Each input has an associated weight (w) that stores the neuron's evolution
and allows it to adapt to the data that it is being trained on.
internal activation potential:
$$n_i = \sum_{j=0}^{n} w_{ij} \cdot x_j + b_i \qquad (5.1)$$
activation of the neuron:
$$y_i = \psi(n_i) \qquad (5.2)$$
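As a minimal illustration of Eqs. (5.1) and (5.2), the following Python sketch computes a single neuron's output. The function names and the choice of a logistic ψ are ours, for illustration only:

```python
import math

def activation_potential(weights, inputs, bias):
    # Eq. (5.1): weighted sum of the inputs plus the bias term
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def neuron_output(weights, inputs, bias,
                  psi=lambda n: 1.0 / (1.0 + math.exp(-n))):
    # Eq. (5.2): apply the activation function psi (here a logistic sigmoid)
    return psi(activation_potential(weights, inputs, bias))

# A neuron with two inputs and equal weights; potential = 0.5 + 0.5 - 1.0 = 0
y = neuron_output([0.5, 0.5], [1.0, 1.0], -1.0)
```

Training would then adjust the weights w by some error-propagation rule; here only the forward pass is shown.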
Neural networks are capable of both supervised and unsupervised learning, and
several error propagation and weight update algorithms exist. Neural networks are
fundamentally different from the processing done by traditional computers and are
better suited to problems currently difficult to solve, such as extracting the meaning of shapes in
visual images, distinguishing between different kinds of similar objects (classification and association), and learning certain functions (including unknown functions).
In an MMORPG context, neural networks could act as fast, efficient classifiers (the
costly process of training neural networks is usually done offline, while classification is usually fast) that help artificial agents classify and reason about objects in
their environment, including users. For example, based on characteristics exhibited
by user or artificial characters, such as faction, class, gender, build and background,
an artificial agent could classify another agent as a threat or non-threat, or as a
friend or a foe.
Genetic algorithms Genetic algorithms are a search heuristic that mimics the
process of natural evolution, routinely used to generate useful solutions to optimization and search problems. In a genetic algorithm, a population of individuals,
each of which encodes a candidate solution to an optimization problem, evolves toward better solutions by using techniques inspired by natural evolution, such as
inheritance, mutation, selection and crossover.
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems. Genetic algorithms are also
a good approach to solve global optimization problems, especially domains that
have a complex fitness landscape as the combination of mutation and crossover
operators is designed to move the population away from local optima.
Within the intelligent virtual environment context, genetic algorithms may be used
to generate walking methods for computer figures and bounding boxes, as
well as for any optimization problems artificial agents may face where an exact
solution is not strictly necessary and strict time constraints are imposed, such as
planning.
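The evolutionary loop described above can be sketched as a tiny genetic algorithm. This is a generic, hedged illustration (the OneMax fitness, population size and mutation rate are ours), not a method used in the thesis framework:

```python
import random

def evolve(pop_size=20, length=16, generations=60, p_mut=0.05, seed=1):
    """Tiny genetic algorithm maximizing the number of 1-bits (OneMax)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    # initial random population of bit-string individuals
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            # per-gene bit-flip mutation
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Inheritance, crossover and mutation appear as the three marked steps; selection pressure is what moves the population away from poor regions of the fitness landscape.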
Self organizing systems Self-organization is a process in which structure and
functionality at the global level of a system emerge solely from numerous interactions among the lower-level components of a system, without any external or
centralized control. The system's components interact in a local context, either by
means of direct communication or via environmental observations, without reference to
the global pattern.
Swarm intelligence is the collective behaviour of decentralized, self-organized systems, natural or artificial; the concept is widely employed in work on artificial intelligence. Such systems are typically made up of a
population of simple agents interacting locally with one another and with their
environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local interactions between such agents lead to the emergence of intelligent global behavior,
unknown to the individual agents. Natural examples of swarm intelligence include
ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling
[33].
Swarm intelligence as a field is based on algorithms developed to simulate this
type of complex behavior that emerges in the natural world, and thus the obvious
application of these techniques in virtual environments would be to simulate the
behavior of similar creatures that inhabit the environment. However, other uses
are possible as well, such as using ant colony optimization in order to compute
optimal paths the agent may take in the virtual environment, or using particle
swarm optimization in order to create more complex and versatile particle systems
[34]. Other applications of swarm intelligence are crowd simulation and ant-based
routing.
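To make the particle-swarm idea concrete, here is a minimal particle swarm optimization sketch for a one-dimensional function. The coefficients (inertia 0.7, cognitive/social 1.5) are conventional textbook values, assumed for illustration:

```python
import random

def pso(f, lo, hi, n_particles=15, iters=80, seed=2):
    """Minimal particle swarm optimization of a 1-D function f on [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                                           # personal bests
    gbest = min(xs, key=f)                                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive (own best) + social (swarm best) terms
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))         # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

Each particle follows only its own memory and the swarm's best position, yet the population as a whole converges on the optimum, which is precisely the emergent global behavior described above.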
Reactive agents Reactive agents are simple processing units that perceive and
react to changes in their environment. The architecture consists of a perception
component, a decision component and an action component. The behavior is quite
simple: in a loop, the agent perceives the state of its environment (which may be
complex and may include the states of other agents), selects an action to perform
and then performs it, changing its environment's state. Reactive agents are able
to reach their goal only by reacting reflexively to external stimuli. While limited, reactive agents operate in a timely fashion and are able to cope with highly
dynamic and unpredictable environments, making them ideal for an artificial environment's lower, less intelligent artificial life forms, such as animals and unevolved
humanoids.
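The perceive-decide-act loop can be sketched as a condition-action mapping. The environment, rules and action names below are invented for illustration:

```python
class ReactiveAgent:
    """Behavior is a direct mapping from percepts to actions: no planning, no memory."""
    def __init__(self, rules):
        self.rules = rules                       # ordered (condition, action) pairs

    def step(self, environment):
        percept = environment.sense()            # perception component
        for condition, action in self.rules:     # decision component
            if condition(percept):
                action(environment)              # action component
                return

class Patch:
    """A toy environment: a single cell that may be dangerous."""
    def __init__(self, hot):
        self.hot = hot
        self.log = []
    def sense(self):
        return {"hot": self.hot}

# First matching rule wins; the last rule is a catch-all default.
agent = ReactiveAgent([
    (lambda p: p["hot"], lambda env: env.log.append("flee")),
    (lambda p: True,     lambda env: env.log.append("graze")),
])
env = Patch(hot=True)
agent.step(env)        # the agent reacts reflexively to the stimulus
```

Because each step is a single table lookup over the current percept, the agent always responds in constant time, which is what makes this model suitable for highly dynamic environments.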
Cognitive agents Artificial agents inhabiting a virtual environment, where multiple aspects of the world are constantly changing, must be able to infer new knowledge about the world, make plans, accomplish goals and take decisions in uncertain
conditions in real-time, in order to challenge human users. This is what cognitive agents are able to do. Several approaches based on the belief-desire-intention
model of agency have been designed and implemented successfully, while other
approaches are being tested. The hierarchical goal decomposition used in backward chaining engines is a powerful tool, allowing agents to represent and solve
complex problems [35], while forward chaining techniques allow exploration of the
knowledge domain. Truth maintenance systems and fuzzy logic are two of the
techniques that may be employed by artificial agents in order to reason in uncertain conditions, either by maintaining a coherent knowledge base (handling conflicting knowledge and maintaining a consistent set of beliefs) or by adding a certainty
weight to knowledge items.

The cognitive agent model is very similar to the reactive agent model; however, the
decision component now handles planning as well (next action), while the interaction component adds a social dimension to agent interaction. We have proposed
an architecture that uses forward and backward chaining, truth maintenance and
emotion simulation to achieve fast decision-making cognitive agents.
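The forward chaining mentioned above can be illustrated with a naive fixpoint sketch over ground facts (the real engine, described in Section 5.2.1, uses a RETE network; the rules and fact names here are invented):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: fire rules until no new facts can be derived.
    Each rule is a pair (antecedents, consequent) over ground facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if set(antecedents) <= facts and consequent not in facts:
                facts.add(consequent)      # assert the newly derived fact
                changed = True
    return facts

rules = [
    ({"fire-visible"}, "afraid"),
    ({"afraid", "exit-known"}, "move-to-exit"),
]
derived = forward_chain({"fire-visible", "exit-known"}, rules)
```

Two passes suffice here: the first derives "afraid", which enables the second rule on the next pass; this chaining of consequents into new antecedents is the exploration of the knowledge domain referred to above.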
Expert systems Another subfield of artificial intelligence, expert systems use a
knowledge base of human expertise on a specific problem domain. An expert system uses some knowledge representation structure to capture knowledge, typically
gathered from human experts on the subject matter and codified
according to that structure. Last but not least, the expert system is placed in
a problem-solving situation where it can use its expertise. Successful e-learning
applications have been developed in the past [25], where the expert system acts
as a virtual tutor.
Emotion simulation Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. Emotion integration is necessary in generating complex believable behavior,
making the agent decision-making process less predictable and more realistic, as
well as generating actions in time comparable to human reaction time. Emotion simulation allows virtual environments to run more realistic simulations of artificial
characters, while virtual environments allow artificial agents capable of emotion
simulation more expressive freedom because of the additional channels available.
There are several motivations for this research, relative to its intended application. One approach is the simulation of emotions in conversational agents in order
to enrich and facilitate interactivity between human and machine. In e-learning
applications, affective computing can be used to adjust the presentation style of
a virtual tutor when a learner is bored, interested, frustrated or pleased. Psychological health services, an application domain for virtual reality as well, can
also benefit from affective computing: patients would be able to act in a virtual
world populated by interacting artificial agents and be observed and diagnosed by
a professional or learn how to correct their own behaviour.
Our understanding of human emotion is best within the cognitive modality and
most existing models of emotion generation implement cognitive appraisal, which
is considered to be best suited for affective modeling in learning and gaming applications.
The study of emotional intelligence has revealed an important aspect of human intelligence.
Emotions have been shown to have great influence on many human activities, including decision-making, planning, communication, and behavior. Emotion theories
attribute to emotions several effects on cognitive, decision-making and team-working capabilities (such as directing/focusing attention, motivation, perception, and behavior
towards objects, events and other agents).
One of our research papers proposes the use of emotion simulation as an efficiency
measure in cognitive agents, focusing attention/perception and managing agent
memory, as well as giving the agent a new expressive dimension [24].
Ontologies The term ontology denotes a branch of philosophy that deals with
the nature and organisation of reality. Virtual reality could benefit from an ontology: each object that exists in virtual space could be assigned a definition using
domain-specific ontological terms. This could improve the virtual environment for
both human users and artificial ones. The downside is that an ontology would
have to be defined, and each object in the scene graph would be doubled in an
ontology graph, taking up more memory.
In computer science, an ontology is a formal representation of knowledge as a
set of concepts within a domain, and the relationships between those concepts.
An ontology is necessary in order for agents to have a shared understanding of a
domain of interest, a formal and machine-manipulable model of said domain. It
is used to reason about the entities within that domain, and greatly enhances
symbolic reasoning. Ontologies aid in inter-agent communication, definitions
being exchanged between agents until a consensus is reached. Ontologies are
usually defined in such a way as to be easily interpreted by humans as well, and may
be used to bridge the gap created by the deficiencies in the field of natural language
processing, making interaction between human and artificial agents easier. One
may also limit the options human agents have in expressing themselves in such
a way as to make them indistinguishable from artificial agents, increasing immersion.
However, even without limiting user expressivity, which is not a desired feature in
virtual environments, an ontology would still increase the quality of human-computer
interaction.
5.2
ALICE (SGI)
During an internship at the Serious Games Institute, based in Coventry, United
Kingdom, we developed and implemented a virtual character framework (Fig. 5.1)
for the Unity 3D game engine [36] to be used in a serious game meant to teach
preteens fire safety and fire protocol. The game is called ALICE and takes place
in a school. When a fire breaks out, the fire alarms are triggered, and students
are meant to walk in an orderly manner to the nearest fire assembly point (either
in pairs or in groups under the supervision of a teacher).
The system designed and developed makes use of Unity collider components in
order to gather knowledge about the virtual environment. It was designed in such
a way as to take advantage of two Unity artificial intelligence libraries: the A*
project [37] and the UnitySteer library [38], based on the OpenSteer initiative
[39].
Figure 5.1: Game AI Framework
The most important module is the Character module. This is where character
behavior is generated, usually as an action. Based on this, an action package is
created, and sent off to the appropriate action handler. Here, the action package is
decoded, and the corresponding action module is populated, initialized and started
so that the action can be accomplished. Once the action package has reached the
appropriate action module, it starts out in the waiting state. Once initialized, the
action module transitions the package through the finite state machine states (Fig.
5.2) according to game logic until either the Complete or Stop states are reached,
at which point it is discarded. The actions available to the virtual students are
Idle, Walk, Run, Cower, WalkInGroup, RunInGroup.
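The action-package life cycle described above can be sketched as a small finite state machine. The state names other than Waiting, Complete and Stop, and the event names, are assumptions for illustration; the real framework's FSM is shown in Fig. 5.2:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("Waiting", "init"):     "Running",
    ("Running", "finished"): "Complete",
    ("Running", "abort"):    "Stop",
    ("Waiting", "abort"):    "Stop",
}

class ActionPackage:
    def __init__(self, action):
        self.action = action
        self.state = "Waiting"      # every package starts out in the waiting state

    def handle(self, event):
        # unknown (state, event) pairs leave the state unchanged
        self.state = TRANSITIONS.get((self.state, event), self.state)

    def discarded(self):
        # packages in a terminal state are discarded by the action module
        return self.state in ("Complete", "Stop")

pkg = ActionPackage("Walk")
pkg.handle("init")
pkg.handle("finished")
```

Driving the package with game-logic events until `discarded()` returns true mirrors how the action module transitions packages to Complete or Stop before removing them.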
The virtual students make use of the Emotion-Driven Behavior Engine in order
to react to events in the environment: when they hear the alarm, they become
fearful and start heading towards the nearest fire assembly point; if they spot a
fire or smoke, their fear is increased and they are reluctant to go near it, while if they
are accompanied by a buddy, or a teacher and a group of other students, their fear
is decreased and their trust is increased.

Figure 5.2: Finite state machine
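The appraisal rules just described can be summarized in a small sketch. The numeric deltas are invented for illustration; the actual engine uses the Newtonian Emotion System's coefficients:

```python
def appraise(emotions, event):
    """Return a new emotional state after appraising one environment event."""
    e = dict(emotions)
    if event == "alarm":
        e["fear"] += 0.5                 # hearing the alarm makes students fearful
    elif event in ("fire", "smoke"):
        e["fear"] += 0.3                 # seeing fire or smoke increases fear
    elif event in ("buddy", "teacher-group"):
        e["fear"] -= 0.2                 # company decreases fear...
        e["trust"] += 0.2                # ...and increases trust
    return e

state = {"fear": 0.0, "trust": 0.0}
state = appraise(state, "alarm")
state = appraise(state, "buddy")
```

The resulting fear level is what biases the student's path selection toward or away from the assembly point.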
5.2.1
Implementation
This section briefly describes the framework implementation (Fig. 5.3) as a class
library developed using C# and its integration with the Unity 3D game engine
and other AI modules developed for Unity, within the ALICE and Roma Nova
projects, during the author's internship at the Serious Games Institute, Coventry,
UK. The implementation corresponds to the theory presented in the previous
chapters; a module diagram is presented in Fig. 5.3.
Core
The Core module forms the backbone of our framework, containing the Agent
class, where all reasoning and behavior simulation takes place.

The Core module also contains all of the submodules used by the Agent class.
Emotion
The emotion module provides a way to encode an emotion or emotional state
through the Emotion class. The Emotion class also provides methods to:

• Compare two emotions based on their intensities.

• Influence an emotion e by a factor f.

• Create a clone of an emotion.

• Add two emotions.

• Scale an emotion.

Figure 5.3: Module diagram
The IEmotion interface provides the main learning mechanism of the model and is
implemented by all knowledge base objects that need an associated emotion, while
the Coefficient class allows easy access to emotion influence coefficients (algorithm
parameters).
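The operations listed above can be sketched as follows. This is a simplified Python illustration of the C# Emotion class: the axis-vector representation and intensity definition are assumptions, and the real class also carries influence coefficients:

```python
class Emotion:
    """Emotion as a vector of named axes (e.g. joy, fear, trust, anger)."""
    def __init__(self, **axes):
        self.axes = axes

    def intensity(self):
        return sum(abs(v) for v in self.axes.values())

    def compare(self, other):
        # compare two emotions based on their intensities
        return self.intensity() - other.intensity()

    def influence(self, factor):
        # influence an emotion by a factor f
        return Emotion(**{a: v * factor for a, v in self.axes.items()})

    def clone(self):
        return Emotion(**dict(self.axes))

    def __add__(self, other):
        # addition of two emotions, axis by axis
        keys = set(self.axes) | set(other.axes)
        return Emotion(**{k: self.axes.get(k, 0.0) + other.axes.get(k, 0.0)
                          for k in keys})

    def scale(self, s):
        # scaling an emotion is influence by a constant factor
        return self.influence(s)

joy = Emotion(joy=0.4)
fear = Emotion(fear=0.6)
mixed = joy + fear.scale(0.5)
```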
Expression
The expression module provides classes to deal with expressions (collections of
symbols), references, variables, values and other symbols.
• Expr. Encodes information about expressions. An expression is of the form
(i, p, v) and signifies the fact that individual i’s property p has value v,
where i,p,v are symbols.
• Symbol. Base abstract class for all symbols. Symbols may be values, variables, references, conditions and operators.
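A hedged sketch of how (i, p, v) expressions can be matched against ground facts, with strings beginning with '?' standing in for variable symbols (a convention assumed here for illustration; the framework uses Var/VarInst objects):

```python
def matches(expr, fact, bindings=None):
    """Unify a possibly-variable (individual, property, value) expression
    against a ground fact; return the variable bindings, or None."""
    bindings = dict(bindings or {})
    for pattern, value in zip(expr, fact):
        if pattern.startswith("?"):                    # variable symbol
            if bindings.setdefault(pattern, value) != value:
                return None                            # inconsistent binding
        elif pattern != value:                         # constant must match
            return None
    return bindings

b = matches(("?i", "health", "low"), ("orc1", "health", "low"))
```

This binding step is what the RETE alpha memories below perform when filtering working-memory facts against rule antecedents.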
Extra
• Op. Encodes an operator to be applied to a value. Only works on Int and
Float values. Possible operators are +, −, ∗, /.
• Cond. Encodes a condition placed on a value.
Reference
• Ref. Reference to a knowledge base object. To save memory, we work with
references instead of new objects as much as possible.
• IRef. Reference interface. Implemented by objects that need to be referenced. Contains a Reference to the object that implements this interface.
• IRefVal. Reference-value interface. Used to access the value field of variables
and references. Contains a getter for the value field common to variables and
references.
Value
• Val. Common abstract root class for all values.
• String. String extension of the value class.
• Int. Integer extension of the value class. Used to encode integer numbers.
• Float. Float extension of the value class. Used to encode floating-point
values.
Variable
• Var. Encodes data about variables, such as the variable’s name and a list
of instances of this variable.
• VarInst. Encodes information about a variable instance. Extends Var and
also holds information about any values that the variable is bound to.
Inference
The inference module provides the computing power behind the agent. It allows
the agent to fire rules, assert or retract facts, and take actions.
• Rule. Rule data structure. Contains a list of antecedent expressions and
a list of consequent expressions, as well as a small agenda that only holds
instances of this rule.
• Engine. Inference engine class. Handles all reasoning. Made up of a forward
chaining inference engine and a backward chaining inference engine.
– adds premises
– applies rules and adds justifications
– provides a mapping between IE and TMS data structures
– informs TMS when new nodes are needed and gives the connection
between the node and corresponding expressions / assertions
– able to retrieve nodes associated with assertions that IE uses
– provides a way for defining nodes as premises
The inference engine is called to run an inference cycle. It in turn calls the
backward and forward engines.
Forward
• ForwardRule. Holds rule data required by the forward chaining inference
algorithm. Initializes RETE network. Creates alpha and beta memories.
• ForwardEngine. Forward chaining inference engine. Implements the Forward.Update() and Forward.Run() algorithms. Contains the agenda, a list
of rule instances that may be fired this inference cycle.
RETE
• Alpha. RETE network alpha memory. Contains a list of bound expressions
and a link to a beta memory.
• Beta. RETE network beta memory. Contains a list of lists of cross-joined
expressions.
• Gamma. Gamma memory. Contains a list of bound expressions corresponding to the expressions in the antecedent of the rule this memory belongs to,
up until the level of the beta memory this memory belongs to.
• Delta. Rule instance. Contains a list of bound consequents that need to be
added when this rule instance is fired.
Backward
• ExprData. Encodes expression data. Used in the goal update algorithm.
• Goal. Goal class. Each goal is represented through an expression of the form
(subject, predicate, object). Any of these may be a variable, which makes the goal
possibly match an infinite number of facts and thus renders it impossible to fully
accomplish; however, the inference engine will work towards accomplishing
all possibly matching expressions. Each goal creates a hierarchy of possibly
necessary subgoals to accomplish it. This graph is created up to a
depth of maxPlanDepth. The algorithm then searches the agenda for any
rule instances that contain any of the subgoals as a consequence and adds
them to the global plan based on their salience.

• BackwardEngine. Backward chaining inference engine. Implements the
Backward.Update() and Backward.Run() algorithms. Contains the main
plan, made up of rule instances sorted according to importance.
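The goal-decomposition hierarchy described above can be sketched as a recursive expansion. The rules and goal names are invented, and salience-based agenda search is omitted; only the maxPlanDepth-bounded subgoal tree is shown:

```python
def decompose(goal, rules, max_plan_depth=3):
    """Expand a goal into the subgoal trees that could establish it,
    up to a depth of max_plan_depth. Rules are (antecedents, consequent)."""
    if max_plan_depth == 0:
        return {goal: []}
    tree = {goal: []}
    for antecedents, consequent in rules:
        if consequent == goal:                  # this rule could accomplish the goal
            for sub in antecedents:
                tree[goal].append(decompose(sub, rules, max_plan_depth - 1))
    return tree

rules = [
    (["has-weapon", "enemy-near"], "attack"),
    (["at-armory"], "has-weapon"),
]
plan = decompose("attack", rules)
```

Here "attack" decomposes into "has-weapon" (itself decomposed into "at-armory") and "enemy-near"; the engine would then pull matching rule instances from the agenda into the global plan.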
Interaction
This module contains classes and interfaces that need to be implemented / extended in order to use the framework as an external library.
• ActionDelegate. Action method delegate. The application using the
framework needs to implement a method with this signature for every possible action.

• IAction. Action module interface. Needs to be implemented into an application-side action module. Contains an action dictionary.
• Event. Encodes data relevant to events. Events are outside influences on
the agent’s internal state. These are added to the agent’s event queue using
the Perceive method.
• Action. Encodes information relevant to an action. An action is a special
type of expression that can have both immediate foreseen consequences as
well as long-term unforeseen ones, depending on the environment and other
agents.
Knowledge
Context. Encodes a knowledge context. Encapsulates all knowledge for an agent.
Contains: ontology root class, list of classes, properties and individuals in the
ontology, list of actions, rules and goals.
Ontology
• Class. Encodes information about ontology classes (such as sub and superclasses, properties and members).
• Property. Encodes ontology property information (domain, range).
• Member. Encodes ontology individual information. Extends IIndividual.
• IIndividual. Ontology individual interface. Extended by Val and Member.
Contains accessors to individual data structures (sub/superclass, context,
properties, values).
TMS
• TMSNode. Truth maintenance system node data structure. Associated
with a fact (s, p, o). Specifies whether the node is IN or OUT. Implements the truth
maintenance label change algorithm.
• TMSJust. Truth maintenance system justification data structure. Justifies
a certain state for a certain node based on other nodes and triggered rules.
Implements truth maintenance label change algorithm.
• TMSEngine. Truth maintenance engine.
Figure 5.4: Readers module overview
– caches beliefs and maintains node labels
– performs belief revision
– generates explanations
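A heavily simplified sketch of the IN/OUT labeling that the TMS performs. For brevity it handles only monotonic justifications (no OUT-lists), so a node is IN iff it is a premise or some justification has all antecedents IN; belief revision is then just relabeling after a premise changes:

```python
def label(premises, justifications, nodes):
    """Label each node IN (True) or OUT (False).
    justifications: list of (antecedent_nodes, node) pairs."""
    status = {n: n in premises for n in nodes}
    changed = True
    while changed:                                   # propagate to a fixpoint
        changed = False
        for antecedents, node in justifications:
            if not status[node] and all(status[a] for a in antecedents):
                status[node] = True
                changed = True
    return status

nodes = ["alarm", "fire", "evacuate"]
justs = [(["alarm"], "fire"), (["fire"], "evacuate")]
before = label({"alarm"}, justs, nodes)   # premise IN, so everything is IN
after = label(set(), justs, nodes)        # retracting the premise relabels all OUT
```

The justification chain also doubles as the explanation: each IN node can point back to the justification that made it IN, which is how the engine generates explanations.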
Readers
The Readers module deals with parsing input files and creating the data structures
used to store and manipulate information within the framework. There is a reader
pipeline for each type of data structure (Action, Class, Property, etc.), however
they all share the same three processing steps: parser, lexer and the actual reader.
The parsers and lexers have been generated using the ANTLR tool and serve to
read the input source code file and create the appropriate data structures for them.
These subprocesses also report parser and lexer errors.
The last step, the reader, opens an ANTLRStream and uses the parser and lexer
to extract data and compile it into a global reader. Once this step is complete, we
can start linking the data structures.
Linking
The Linking module ensures that all expressions, ontology structures and symbols
are properly linked before being added to the agent's context. An overview of
this module and of the steps involved in the processing is provided in Fig. 5.5.
Figure 5.5: Linking module overview
The linking module also compiles the data read by the readers module into the
agent’s context.
Linking happens in several stages (Fig. 5.5).
• Linker The linker process contains a list of items that have been compiled
so far and a link to the context being created, as well as links to all other
linking processes. Data is input into the linker, and the linker calls the stage
1 linker.

• Stage1 The stage 1 linker links each individual item read, and calls the
stage 2 linker for each combination of items in the current linker list.

• Stage2 Links knowledge objects together and uses a checker object to add
them to the current context. This is also where symbols are linked using the
SymbLink module.

• Checker Checks whether knowledge objects are valid (fully linked, using the Validity module) and have not yet been added, and if so, adds them to the
current context.
In the end, the agent’s knowledge context will have been created.
Interface
The interface module has been designed to provide a way for applications to use the
framework as an external library. As such, the application needs only implement
problem-specific logic, such as the action module (possible agent actions) and the
actual application logic, as well as any influences it may have on the agent's state,
passed along as events.
5.3
Orc World
During an internship at the Laboratoire d'Informatique de Paris 6, Université Pierre
et Marie Curie, Paris, France, we devised and implemented a game on top of
the virtual character framework to test the Newtonian Emotion System. In the
following sections we define the scenario and game rules, present the actions
available to agents in the game and their associated emotional feedback, and
detail the agent behavior generation and learning algorithms we used.
5.3.1
Scenario and rules
The premise of the game is simple. We created a virtual game world inhabited
by orcs. Each orc has a treasure stash that it must increase (in essence, hoarding
items and valuables). The orcs may do this either by foraging in the wilderness,
stealing from other orcs or pillaging other orcs' stashes; however, in order to do this
they must leave their own treasure unprotected. This scenario provides us with the
same goals for all virtual characters involved (survive and hoard) and allows us to
use the same behavior generation module for all characters, showing the difference
in behavior based on personality characteristics and character experience. The
scenario also provides us with a balanced mix of social, physical and environment
interactions.
In order to determine the success or failure of actions, we used a very simplified
version of the Dungeons & Dragons 3.5 edition rules [40]. The main reason behind
this choice was that the Dungeons & Dragons ruleset contains rules that govern social
interactions as well as physical ones. Alg. 13 describes the turn sequence.
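The turn sequence of Alg. 13 can be sketched in Python as follows. The character representation and the log of phase names are simplifications for illustration:

```python
import random

def run_round(characters, rng=random.Random(3)):
    """Sketch of Alg. 13: roll initiative, let characters act in initiative
    order, then process the emotional feedback of the round."""
    for c in characters:
        c["initiative"] = rng.randint(1, 20)          # roll(d20)
    order = sorted(characters, key=lambda c: -c["initiative"])
    for c in order:                                   # action phase
        c["log"] = ["update-percepts", "generate", "appraise", "run"]
    for c in order:                                   # feedback phase
        c["log"] += ["generate-feedback", "process-feedback"]
    return order

orcs = [{"name": "grum"}, {"name": "thok"}]
order = run_round(orcs)
```

The key structural point is that feedback is processed in a separate pass, after every character has acted, so that appraisal sees the full outcome of the round.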
Figure 5.6: Round sequence diagram

Algorithm 13 Turn sequence
for all character ∈ characters do {determine turn order}
character_initiative = roll(d20)
end for
for all character ∈ characters do
Update(character_percepts)
Generate(character_actions)
Appraise(character_actions)
Run(character_actions)
end for
for all character ∈ characters do
Generate(character_feedback)
Process(character_feedback)
end for

5.3.2 Actions and feedback

Possible actions within the game world and their associated effects are:

• Attack The character attacks another character with the goal of disabling it.
– success: the attack hits, inflicting damage (decreasing the health) of the opposing character
– failure: the attack misses, however the target is aware of the attack
– feedback: this action influences the anger and anticipation axes (aggression)
• Social The character attempts to impose its will on another character, making it perform an action through social interaction.
– success: the interaction succeeds and the target performs the desired action next turn
– failure: the interaction fails and the target is aware of the attempt
– feedback:
∗ Bluff the target is tricked into performing the action; feedback is on the trust and joy axes (disgust)
∗ Diplomacy the target is led to believe that it is in its best interest to pursue the action; feedback is on the positive side of the joy and trust axes (love)
∗ Intimidate the target is bullied into carrying out the action; feedback is on the trust and anger axes (dominance)
• Move The character moves to a new destination.
– success: always succeeds
– feedback: this action receives feedback either according to the area the
character is moving to (which is an average of the feedback of all the
actions that a character has experienced in the area), or on the joy and
surprise axes if it is a new area
• Take The character attempts to take an item from a container.
– success: the character succeeds in taking the item
– failure: the action fails if the container is locked, the character is caught stealing, or the container does not contain the item
– feedback: emotional feedback is on the joy axis and is directly proportional to the acquired object's value scaled by the character's current wealth
• Lock The character attempts to lock or unlock a container (fails or succeeds
based on the character’s skills; no explicit feedback ).
• Equip The character equips an item (fails if the character is unable to equip
the item, or not in possession of the item; no explicit feedback ).
• Sneak The character enters or exits sneak mode (in sneak mode a character is less likely to be detected and is able to steal items using the Take action; however, its speed is halved; no explicit feedback).
The Sneak, Equip and Lock actions do not receive feedback for being executed; however, all feedback is distributed to previous actions based on a geometric progression, so that if, for example, a character were to unlock a container and then receive a valuable item, the joy felt from the item would also influence the previous Lock action, making it pleasurable as well, thus acknowledging its contribution to the action sequence that led to acquiring the item. For example, let us assume that an agent executes the action sequence {Sneak, Move, Unlock, Take(item)} and ends up picking up an item that increases its joy by [.32,0,0,0]. The first three actions (Sneak, Move and Unlock) do not receive any feedback for being performed; however, they will receive a fraction of the feedback for the Take action, based on the assumption that the sequence led to it: Unlock will receive [.16,0,0,0], Move [.08,0,0,0] and Sneak [.04,0,0,0], showing the synergy existing between these actions.
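The backwards distribution described above can be sketched as follows (illustrative code of our own; the halving ratio is an assumption based on the example values):

```python
# Illustrative sketch of backwards feedback distribution: feedback for the
# rewarded action propagates to the preceding actions in the sequence as a
# geometric progression (ratio 1/2, matching the example values).
def distribute_feedback(sequence, feedback, ratio=0.5):
    """sequence: action names, oldest first; feedback: 4-axis emotion vector."""
    shares = {}
    share = list(feedback)
    for action in reversed(sequence):
        shares[action] = share
        share = [x * ratio for x in share]
    return shares

distribute_feedback(["Sneak", "Move", "Unlock", "Take"], [0.32, 0, 0, 0])
# Take: [0.32,0,0,0], Unlock: [0.16,...], Move: [0.08,...], Sneak: [0.04,...]
```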
Emotional feedback for the Attack, Social and Move actions is generated according to the following formula, where success and direction ∈ [-1,1]:

feedback = success · (direction · action_emotion + Σ_{param ∈ params} param_emotion)    (5.3)
The success parameter is ignored when an agent is convinced the action is in the best interest of both parties involved (for example in the case of diplomacy), assuming that the proposing agent meant well, regardless of result (the result may still be skewed unfavourably by the backwards feedback distribution).

Table 5.1: Action feedback table (base emotion force feedback on the Joy, Trust, Fear and Surprise axes for each action, broken down by direction (src/dst) and by success/failure; Attack feedback scales with the damage inflicted relative to health, the Bluff, Diplomacy and Intimidate interactions apply ±0.1 adjustments on their respective axes, Move inherits the feedback of the area the agent is moving into, and Take scales with the item's value relative to the character's wealth)
Table 5.1 presents the base emotion force feedback used for actions available to agents in OrcWorld (the Success column denotes whether the action was successful and Dir its directionality, either source or destination), while Table 5.2 presents some examples of action instances, their computed feedback and its meaning (where the S column denotes the action's success (1 for success, -1 for failure), and the D column its directionality (1 for outgoing, -1 for incoming)).
For example, if an agent uses the Intimidate action (which has feedback on the Trust and Fear axes) on an agent that makes it feel proud, in order to scare it away and steal its treasure, and succeeds, the resulting emotion felt, according to the feedback equation, would be a combination of pride and dominance. The virtual character on the other end of the action, based on its feelings towards the acting agent, would feel despair, shame or remorse.

Table 5.2: Action feedback generation example table (action instances for Attack (0,0,-1,-1), Diplomacy (1,1,0,0) and Intimidate (0,1,-1,0); for each combination of success (S), direction (D) and parameter emotion (e.g. pride (1,0,-1,0), guilt (1,0,1,0), delight (1,0,0,1)), the table lists the computed feedback vector and its name, e.g. a successful outgoing Intimidate on a pride-inducing target yields (1,1,-2,0), pride and dominance)
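As a check, the feedback formula (Eq. 5.3) can be evaluated directly; the sketch below (our own illustrative code) reproduces the Intimidate example, with vectors on the (joy, trust, fear, surprise) axes:

```python
# Feedback formula (Eq. 5.3): feedback = success * (direction * action + sum(params)).
def feedback(success, direction, action_emotion, param_emotions):
    """success and direction are in [-1, 1]; vectors are (joy, trust, fear, surprise)."""
    total = [direction * a for a in action_emotion]
    for p in param_emotions:
        total = [t + e for t, e in zip(total, p)]
    return [success * t for t in total]

INTIMIDATE = (0, 1, -1, 0)  # feedback on the trust and fear axes
PRIDE = (1, 0, -1, 0)       # the emotion the target inspires in the actor

# Successful (success=1), outgoing (direction=1) Intimidate on a
# pride-inducing target:
feedback(1, 1, INTIMIDATE, [PRIDE])  # -> [1, 1, -2, 0]: pride and dominance
```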
Perception module A character in the virtual environment can perceive other characters, actions and environment objects through the perception module. A virtual character can have several senses that it uses to make sense of its surroundings. Fig. 5.7 shows a character that has two senses (virtual sensors), sight and hearing, the area each of them covers and how they overlap. The character gathers new knowledge through a process called multi-source information fusion.

Figure 5.7: Character perception sensors
Fig. 5.8 presents the information fusion process, showing how multiple knowledge items (each from a different sensor) representing the same virtual environment object are integrated into a consistent representation. In essence, when new information about an object is perceived, the knowledge item pertaining to it is updated.
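A minimal sketch of this fusion step (class and attribute names are our own illustration, not the framework's API):

```python
# Multi-source information fusion, sketched: percepts of the same world
# object arriving from different sensors are merged into a single
# knowledge item, which is updated as new information is perceived.
class KnowledgeBase:
    def __init__(self):
        self.items = {}  # object id -> known attributes

    def fuse(self, object_id, sensor, attributes):
        # Update (or create) the knowledge item for this object; newer
        # readings overwrite older ones for the attributes they cover.
        item = self.items.setdefault(object_id, {})
        item.update(attributes)
        item["last_sensor"] = sensor
        return item

kb = KnowledgeBase()
kb.fuse("orc_7", "sight", {"position": (3, 4), "armed": True})
kb.fuse("orc_7", "hearing", {"position": (3, 5)})
# kb.items["orc_7"] is now one consistent representation of the orc
```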
5.3.3 Learning module

Feedback from the environment has an associated emotion force vector that affects the agent's state. The agent attempts to maximize its Gain (joy and trust axes) and minimize its Risk (fear and surprise axes), and thus will choose actions that increase one while decreasing the other; however, the agent does not know in advance what the feedback will be. It is the learning module's task to attempt to predict it, based on the current context. The better the prediction, the better informed a decision the agent can make.
Figure 5.8: Information fusion
We chose to use linear regression to predict feedback in our application. Linear
regression is a way of modelling the relationship between a dependent variable and
one or more explanatory variables. The data is modelled using a linear combination
of the set of coefficients and explanatory variables. Linear regression is a supervised
machine learning technique that estimates unknown model parameters from the
data.
In our case, the algorithm attempts to learn the feedback generation method explained above. The dependent variable is the emotional force received by the agent, while the explanatory variables form a feature vector constructed from the action's context (Fig. 5.9). The structure of this vector is made up of three segments:
1. Base Contains the basic information about the action.
(a) Action The emotion force associated with the action
(b) Action direction Denotes whether the action is incoming or outgoing
(c) Area The emotion force associated with the area that the action takes
place in
2. Agents Contains information about the agents involved in the action
(a) Multiple Number of agents taking part in the action
(b) Agents Average emotion force associated with the agents taking part in
the action
(c) Coherence Coherence of the emotion forces associated with the agents taking part in the action (coherence = 1/variance)
(d) Equipment Average emotion force associated with the equipment carried
by the agents
3. Objects Contains information about the objects (other than equipment, i.e.
containers, traps) involved in the action
(a) Multiple Number of objects involved in the action
(b) Objects Average emotion force associated with the objects involved in the action
(c) Locked Number of locked containers involved in the action
(d) Coherence Coherence of the emotion forces associated with the objects involved in the action (coherence = 1/variance)
(e) Blocked Average emotion force for all agents blocking containers involved in the action
Figure 5.9: Context feature vector
The algorithm then receives the correct feedback from the environment and modifies its weights according to the difference between the estimated feedback force and the actual feedback force.
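Under this setup, the predictor can be sketched as one linear model per emotion axis trained by online gradient descent (illustrative code; the feature layout and learning rate are our assumptions):

```python
import numpy as np

# Minimal sketch (names and feature layout are illustrative) of the
# feedback predictor: one linear model per emotion axis, updated online
# by gradient descent on the prediction error.
class FeedbackPredictor:
    def __init__(self, n_features, n_axes=4):
        self.W = np.zeros((n_axes, n_features))

    def predict(self, context):
        # context: feature vector built from the action's context
        return self.W @ context

    def learn(self, context, actual, lr=0.05):
        # Modify the weights according to the difference between the
        # estimated feedback force and the actual feedback force.
        error = self.predict(context) - actual
        self.W -= lr * np.outer(error, context)
        return error

p = FeedbackPredictor(n_features=3)
ctx = np.array([1.0, 0.5, -0.2])
for _ in range(300):
    p.learn(ctx, np.array([0.3, 0.0, -0.1, 0.2]))
# p.predict(ctx) now closely approximates the observed feedback force
```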
Following Lazarus’ synthesis, emotions are integrated in the game mechanism as
a 3-phase process, by which the agent interacts with the current game context in
terms of emotions. Every action is first estimated, by means of Machine Learning,
to produce the value of emotional force it implies. At this point, no reference is
made to his current state, so it represents a quantitative exploration of his action
possibilities, and can be viewed as the Agent’s emotional knowledge. A qualitative
evaluation follows, where the Agent determines the effect each action will have on
his current state and chooses one accordingly. He sets priorities to the actions he
is able to perform at the given time, therefore the outcomes of this phase can be
viewed as emotional meaning of the actions in respect to his current state. Upon
choosing an action he is confronted with the real force which determines his next
state. This last phase has a normative role on the Agent’s performance, as the
new state is also influenced by the quality of the estimation.
Figure 5.10: Action evaluation
5.3.4 Behavior module
For the behavior generation module, we have chosen to use Behavior Trees. Behavior Trees are a framework for game AI that model character behaviors extremely
well, being simple and highly customizable at the same time. Behavior trees are
one abstraction layer higher than Finite State Machines and combine the best features of linear scripts and reactive state machines while at the same time allowing
purposeful behavior similar to planning techniques [41–43].
A behavior tree is a form of hierarchical logic, made up of behavior nodes. That
means that the overall behavior is made up of a task that breaks down recursively
into subtasks, which in turn would break down into even lower level tasks until
it reaches the leaves of the tree. The leaves provide the interface between the
artificial intelligence logic and the game engine. We only use the Decorator leaf node behavior, which checks a condition and, if the condition is true, attempts an action that makes changes in the environment. Complexity is built up in
the branches of the tree. The branches are responsible for managing the child
nodes. This is done using Composite tasks, which are tasks made up of multiple
child tasks. We use four types of Composite tasks: Sequence, Selector, Loop and
Concurrent. Fig. 5.11 shows the behavior tree management system hierarchy.
Figure 5.11: Behavior tree
Sequences are responsible for running their child tasks in sequence one by one. If
one child fails, the whole sequence fails. Selector nodes run all child nodes until
one either succeeds or returns a running state. Concurrent nodes visit all of their children during each traversal. Loop nodes behave as sequences; however, they loop around when reaching their last child.
To update a behavior tree, it is always traversed beginning with its root node in depth-first order.
Algorithm 14 Behavior
if status == INVALID then
Initialize()
end if
status = Update()
if status != RUNNING then
Terminate(status)
end if
return status
Algorithm 15 Decorator
if Condition() then
return Action()
else
return FAILURE
end if
Algorithm 16 Selector
while true do
status = current.Tick()
if status != FAILURE then
return status
end if
if Current.MoveNext() == false then
return FAILURE
end if
end while
Algorithm 17 Sequence
while true do
status = Current.Tick()
if status != SUCCESS then
return status
end if
if Current.MoveNext() == false then
return SUCCESS
end if
end while
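The composite semantics above can be condensed into the following sketch (illustrative Python rather than the thesis's C# library; for brevity each tick restarts from the first child instead of resuming the running child via MoveNext):

```python
# Condensed sketch of behavior tree composites (Algorithms 14-17).
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Selector:
    # Runs children until one succeeds or reports RUNNING; fails otherwise.
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

class Sequence:
    # Runs children in order; fails as soon as one child fails.
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

tree = Selector(
    Sequence(Leaf(lambda: FAILURE), Leaf(lambda: SUCCESS)),  # fails early
    Leaf(lambda: SUCCESS),                                   # fallback
)
tree.tick()  # -> "success"
```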
Example Let’s assume we have the following behavior tree for OrcWorld:
1. Root (Selector )
(a) Guard Treasure (Decorator )
i. Is enemy near treasure? (Condition)
ii. Intimidate enemy! (Action)
(b) Pillage Treasure (Sequence)
i. Know any treasure? (Condition)
ii. Go to treasure (Action)
iii. Intimidate guard (Action)
iv. Steal treasure (Action)
(c) Idle (Decorator )
All nodes start out as ready. At the first tree update, the ready Root node is run. The Selector node checks all children from first to last until a child node returns a success or running state. It fails if no such child can be found. The Guard Treasure node is run, which is a Decorator made up of a condition and an action. The Is enemy near treasure? condition fails because there is no enemy visible, and thus the whole Guard Treasure node fails. The next node to be run is the Pillage Treasure node. The sequence node starts by running its Know any treasure? node. A target treasure is chosen successfully and the node returns success. The next node in the sequence is the Go to treasure node. As the treasure is far, it cannot be reached in this step, so the node returns running. Its parent, Pillage Treasure, also returns running.
For the next iteration, all nodes that are not currently running are set to ready.
The Guard treasure node is visited and fails again, however, when the Pillage
treasure node is run again, it picks up in the Go to treasure node, where it left off
in the sequence.
In OrcWorld, we use a behavior tree for each character in order to generate all possible actions, which are then added to an action module, where they are appraised and pruned according to their Gain/Risk ratio and added to an execution queue. Since the nature of the game is turn-based, all actions are run until completion.
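The appraisal step can be sketched as follows (our own illustrative code; Gain and Risk follow the joy/trust and fear/surprise split above, while the ranking rule and epsilon guard are assumptions):

```python
# Appraise candidate actions by predicted emotional feedback: Gain is the
# joy + trust components, Risk the fear + surprise components; actions are
# ranked (and could be pruned) by their Gain/Risk ratio.
def gain(v):  # v = (joy, trust, fear, surprise)
    return v[0] + v[1]

def risk(v):
    return v[2] + v[3]

def appraise(actions, eps=1e-6):
    """actions: list of (name, predicted_feedback); best first."""
    return sorted(actions,
                  key=lambda a: gain(a[1]) / (risk(a[1]) + eps),
                  reverse=True)

queue = appraise([("attack", (0.2, 0.0, 0.5, 0.3)),
                  ("take",   (0.6, 0.1, 0.1, 0.0))])
# "take" ranks first: higher gain, lower risk
```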
In order to select an appropriate action, the agent computes the resulting state of the Action Force on his Initial State and compares the Impact of all actions. He is then given feedback for the action he selected, representing the real force of the action. Confronting this force with what he estimated via Machine Learning results in a Regret Force, which is then applied to the real Force in order to compute the Current State of the Agent (Fig. 5.12).

Figure 5.12: Decision flow
5.4 Graphical representation
In this section we will present the graphical representation intended to be seen by end users (players), named the weighted mesh emotion representation technique, as well as other, more classical graph-based representations that aid in the study and analysis of the system.

Figure 5.13: Emotion system architecture
Weighted mesh emotion representation The Newtonian Emotion System vector representation of emotion is very well suited to graphical representation: we can have one graphical representation for each base emotion (each end of each one of the four axes), use the position of the emotion to interpolate between the two ends of each axis, and then merge together all four resulting mixed emotions, achieving subtle nuances of emotion expression as the agent's state varies over the emotion space. The resulting representation may in turn be interpolated with animation sequences such as speaking or breathing, resulting in very believable emotion expression. We may also vary activity levels by speeding up or slowing down the animation according to emotion intensity.
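One way to realize this interpolation is sketched below (pose names and the normalization scheme are our own assumptions; the axis pairings follow Plutchik's opposites):

```python
# Hedged sketch of weighted-mesh blending: each NES axis interpolates
# between its two opposing base-emotion poses, and the four results are
# merged into one set of mesh blend weights.
AXES = [("joy", "sadness"), ("trust", "disgust"),
        ("fear", "anger"), ("surprise", "anticipation")]

def blend_weights(state):
    """state: position on the four NES axes, each in [-1, 1]."""
    weights = {}
    for (positive, negative), x in zip(AXES, state):
        weights[positive] = max(x, 0.0) / 4.0   # pull towards positive pose
        weights[negative] = max(-x, 0.0) / 4.0  # pull towards negative pose
    weights["neutral"] = 1.0 - sum(weights.values())
    return weights

blend_weights([0.8, 0.0, -0.4, 0.0])
# mostly neutral, with a strong pull towards joy and a touch of anger
```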
Figure 5.14 shows a character mockup expressing the eight basic emotions and
the neutral state. This image shows that even with a low-poly virtual character,
the weighted emotion representation technique can be used to achieve different
nuances of emotion expression.
There is, however, a problem with this technique, namely that it is not yet supported by character design tools, animation tools, and game engines.
Classical graphs In addition to the weighted mesh graphical representation, we also use two other, more classical methods that make it easier to follow character emotion evolution from a research standpoint. The first one of these (Fig. 5.15)
Figure 5.14: A virtual character displaying the basic emotions: (a) Anticipation, (b) Joy, (c) Trust, (d) Anger, (e) Neutral, (f) Fear, (g) Disgust, (h) Sadness, (i) Surprise
is a normal bidimensional graph that shows all four NES emotion representation axes on the same chart, and their evolution over time.
Figure 5.15: Bidimensional emotion representation graph
The other emotion visualization method makes use of 3D space, where the first three axes are each assigned to one of the three spatial axes and the fourth is represented through the color intensity of each 3D point.
Chapter 6
Conclusion and future work
This section presents our conclusions, a summary of the contributions and interesting future research directions.
6.1 Conclusion
The main goal of this work has been to develop an emotion simulation model and
agent architecture that provides artificial characters in virtual environments with
believable behavior in order to enhance virtual environment experiences for human users; this means that emotions should act as motivation for agent behavior,
influence and improve perception, reasoning and decision-making.
The main objective of this project has been the proposal of an original architectural solution, an emotion representation and a virtual character framework that possess the following characteristics:

• easily expandable; independent of the AI techniques used to endow the virtual characters with behavior
• simple, with few algorithm parameters; easy to learn, use and implement
• accomplishes all effects of emotion on behavior according to theory, using emotions as motivation
• easily integrated with a game engine
We first put forth the Emotion-Driven Character Reasoning Engine, a Belief-Desire-Intention inspired architecture that replicates our understanding of the functioning of the human mind at an abstract level. We use Plutchik's emotion taxonomy and Lazarus' appraisal model to endow virtual characters with an emotional layer. The architecture gives an in-depth specification of how emotions interact with and influence other cognitive processes in the BDI architecture, such as perception, memory management, fact recall and rule application. The main contribution of this model is that emotion is not merely tied to the agent's learning and reasoning process, but is a key component of it, providing the underlayer for simulated emotion, making the reasoning process more akin to a human's and achieving complex, believable behavior for virtual agents. Another important contribution of this model is the improvement of agent performance and effectiveness through emotion simulation, by speeding up relevant rule and fact recall, as it is currently believed emotions do in humans.
Further on, we refined the emotion representation scheme used in the Emotion-Driven Character Reasoning Engine into the Newtonian Emotion System, a representation model vectorial in nature, easily integrated into computational environments and firmly grounded in Plutchik's theory of emotion. We have also developed laws that dictate how emotions interact with each other and with external emotional forces. This representation scheme is easily understood by technically minded people and should pose no problems during implementation or authoring, helping with its adoption by developers and designers. The NES is fast and lightweight, without sacrificing any expressivity, and we have also shown that it is very flexible, being used to represent emotional states, moods and personality. The model is also expandable, allowing, for example, the creation of a more complex personality description through several personality filters, each applied in certain cases.
The artificial intelligence technique used needs to match the agent's, game's and environment's capabilities. This is why we took a step back from the BDI paradigm, as not all applications require the same complex and in-depth reasoning process, and developed the Plug-and-Play architecture as a generic means of providing an emotion layer to any virtual character architecture, as an interface between the character's behavior layer and the environment. The system still functions according to psychological theory, influencing the way that a character perceives the environment. The system does not influence reasoning; however, it does influence the agent's actions through the motivation mechanism. A generic interface architecture frees up developers and designers to create the rest of the architecture as simple or as complex as necessary. The behavior generation module may be as simple as a finite state machine or as complicated as the belief-desire architecture, with everything from decision trees to behavior trees in between. The appraisal module is again left to the developer, the only requirement being that it attempt to predict an action's feedback or evaluate a percept correctly based on experience. We have thought of several machine learning techniques that can be used, depending on the constraints imposed, such as linear regression, support vector machines and clustering. By using the Plug-and-Play interface proposed in this thesis, the game logic can be kept as simple as possible, as there is no need to adapt the game engine architecture to accommodate computationally expensive artificial intelligence techniques.
To demonstrate the system, we developed a computer game artificial intelligence framework using the Unity 3D game engine. The framework provides the underlying mechanisms for a character's existence and interaction with a game world, including a behavior module. To ease behavior management and generation, we implemented a behavior tree library in C# to be used with Unity. We provide an example of how to integrate the Newtonian Emotion System and the Plug-and-Play subsystem with the mentioned framework in a computer role-playing game. We chose computer role-playing games as an application domain due to personal preference; however, the system can be used in any type of application that requires emotion simulation. Moreover, other applications may not have the processing power restrictions that running alongside CPU-intensive special effects and physics simulations imposes, which would leave more computing power for the artificial intelligence module.
We have provided virtual characters with the means of artificial emotion expression and methods for its integration into intelligent artificial agent architectures. We have achieved this while respecting the issues and concerns outlined in Section 1.1. Both systems are firmly grounded in theory, respecting Plutchik's taxonomy and Lazarus' appraisal model, replicating the way emotions influence human perception and motivating a character's actions in the environment. We have developed a game artificial intelligence framework and demonstrated how easily the Plug-and-Play architecture is integrated with the AI framework as well as the game engine used. We have also succeeded in keeping system complexity down in order to make the system more accessible and easy to use and adopt. Last, but not least, the Plug-and-Play system is scalable (as the system only intervenes within the agent's context, the overhead is linear) and both the Newtonian Emotion System and the Plug-and-Play interface are easily expandable.
6.2 Contribution
The main contributions of this thesis are as follows:
• We have conceived a model of emotional BDI (Belief-Desire-Intention) agents
to be used as the basis for the construction of believable virtual characters.
Sec. 4.1.
• We have conceived and developed the emotion-driven virtual character BDI reasoning engine that replicates the way we understand and believe human cognition works and is influenced by emotions. Sec. 4.1.1.
• We have proposed the Newtonian Emotion model and the associated Newtonian Emotion System, an emotion representation scheme easily integrated
and used in a computational environment, firmly grounded in Plutchik’s
theory of emotion. Sec. 4.2.
• We have conceived and implemented an artificial emotion appraisal process grounded in Lazarus' leading theory in the field, which was used in tailoring the behavior of believable virtual characters. Sec. 5.3.3.
• We have designed and implemented a Plug-and-play architecture for the
integration of affective computing into intelligent artificial agent systems.
Sec. 4.2.2.
• We have proposed a framework for the integration of intelligent believable
behaviour into computer games (specifically computer role-playing games).
Sec. 5.2, 5.3.
• We have conceived and implemented a behavior tree framework that manages
the execution of behavior generation nodes for virtual characters. Sec. 5.3.4.
• We have developed an action module as part of a demo, that resolves a
virtual character’s actions and their consequences and feedback. Sec. 5.3.2.
• We have conceived and validated a model in which emotions act as motivation for virtual character actions and in which emotions are used as heuristics
in order to speed up event evaluation and decision-making. Sec. 4.2.3.
• We have developed an emotion simulation module where a character’s personality and past experience (through machine learning) influence its reasoning and decision-making. Sec. 4.2.4, 4.2.5.
• We have designed and developed a generic, domain-independent artificial
emotion simulation system, which is simple, easy to learn, adopt and integrate. Sec. 5.3.
• We have proposed a set of methods for the graphical representation of artificially generated emotions. Sec. 5.4.
• We have investigated several general theories of emotions and computational theories of affective reasoning, and we have compared and discussed their merits and drawbacks. Sec. 2.
Publications
• V. Lungu and A. Baltoiu. Using Emotion as Motivation in the Newtonian
Emotion System for Intelligent Virtual Characters. In Buletin Universitatea
Politehnica Bucuresti, Bucuresti, Romania, submitted 2012.
• V. Lungu. Newtonian Emotion System. In Proceedings of the 6th International Symposium on Intelligent Distributed Computing - IDC 2012, Springer
Series on Intelligent Distributed Computing, vol. VI, pages 307-315, Calabria, Italy, 2012.
• V. Lungu. Artificial Emotion Simulation Model and Agent Architecture.
In Advances in Intelligent Control Systems and Computer Science, Springer
Series in Advances in Intelligent Systems and Computing, vol. 187, pages
207-221, 2013.
• V. Lungu and A. Sofron. Using Particle Swarm Optimization for Particle
Systems. In The 18th International Conference on Control Systems and
Computer Science, vol. 2, pages 750-754, Bucharest, Romania, 2011.
• V. Lungu. Artificial Emotion Simulation Model. In 7th Workshop on Agents for Complex Systems (ACSYS 2010), held in conjunction with the 12th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing - SYNASC 2010, Timisoara, Romania, 2010.
• V. Lungu. Rule-based System for Emotional Decision-Making Agents. In
Distributed Systems Conference, Suceava, Romania, 2009.
6.3 Future work
There are several improvements that can be made to the model in order to make it more flexible and better able to suit the motivation needs of virtual characters, as well as interesting new research directions that lead to new application domains for the model.
The main issue pointed out about the model is its inability to scale incoming emotions according to action class. For example, greed is considered a driving force for humans; however, greed is not an emotion within Plutchik's taxonomy. It can be approximated by rewarding a character that acquires wealth or power through the Joy emotion (which, within our model, is a driving force that the character attempts to maximize). In its current state, however, the model is unable to create a strongly greed-driven character. One solution would be to add an amplifier filter (similar to the personality filter) that only applies in certain cases, i.e. to a certain class of actions; in our example, those that result in increases in wealth and/or power, which in our scenario would be the Take action.
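A minimal sketch of this proposed extension (the function, gain values and action-class names below are illustrative assumptions, not part of the implemented model) could scale the four-axis emotion feedback vector by a per-action-class gain before it reaches the character's state:

```python
# Axes of the emotion vector, per the Newtonian Emotion System:
# 0: Joy-Sadness, 1: Trust-Disgust, 2: Fear-Anger, 3: Surprise-Anticipation.

def amplifier_filter(force, action_class, gains):
    """Scale an incoming emotion force vector according to the class of
    the action that produced it; unlisted classes pass through unchanged."""
    gain = gains.get(action_class, [1.0] * len(force))
    return [f * g for f, g in zip(force, gain)]

# A greed-driven character: feedback from the Take action is tripled
# along the Joy-Sadness axis, leaving the other axes untouched.
greedy_gains = {"take": [3.0, 1.0, 1.0, 1.0]}
amplified = amplifier_filter([0.4, 0.1, 0.0, 0.2], "take", greedy_gains)
unchanged = amplifier_filter([0.4, 0.1, 0.0, 0.2], "move", greedy_gains)
```

Because the filter leaves all other action classes untouched, it composes naturally with the existing personality filter.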
We would also like to mention an implementation issue here, as we feel it is important: the method we used in our scenario to provide emotional feedback gives inconclusive results in some border cases, namely when the feedback is distributed evenly across the four axes. Fortunately, these cases are rare; still, we feel a more refined feedback generation method could solve this issue.
Another interesting issue that has been pointed out in discussion, and that is not currently handled by the model, is the cold start problem: how the system behaves when no data is available upon initialization. Currently, the system can generate data sets randomly and label them using the appraisal formula. Agents that do not do this are considered blank-slate newborns and will only learn from experience (we believe that having one's world view skewed by personal experience is a boon for human behavior simulation).
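A cold-start bootstrap along these lines could look as follows; the labeling formula here is a hypothetical stand-in for the model's actual appraisal formula:

```python
import random

def appraisal_formula(stimulus):
    """Stand-in for the model's labeling formula (hypothetical): maps a
    stimulus feature vector to a four-axis emotional feedback vector
    with components clamped to [-1, 1]."""
    return [max(-1.0, min(1.0, 2.0 * s - 1.0)) for s in stimulus]

def bootstrap_experience(n_samples, n_features=4, seed=0):
    """Cold-start mitigation: synthesize random stimuli and label them
    with the formula, so a freshly initialized agent starts from a
    non-empty experience base instead of as a blank-slate newborn."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        stimulus = [rng.random() for _ in range(n_features)]
        data.append((stimulus, appraisal_formula(stimulus)))
    return data

experience = bootstrap_experience(100)
```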
As mentioned before, there are several application domains to which the model can easily be extended. The first among these would be as a trust and reputation model; emotions play such a role in humans. The relevant process is reciprocal altruism, an emergent norm in human systems under which one person helps another, seemingly without any possibility of reciprocation, in the hope that the initial actor will themselves be helped at a later date. In artificial intelligence terms, this means that an agent acts in a manner that temporarily reduces its fitness while increasing another agent's fitness, with the expectation that the other agent will act in a similar manner at a later time. Emotions serve as an evaluator that helps spot cheaters who try to abuse the system. We could use the Trust-Disgust axis to quantify how a given agent performs once a contract has been agreed upon, and the Surprise-Anticipation axis to evaluate how an agent conforms to emergent but non-contractual norms. For the system to be useful in spotting cheaters, we would also need to achieve a level of information fusion (integrating emotion information about an agent in relation to norms from multiple sources, i.e. all other agents that have come into contact with it); this can be achieved in a centralized or distributed manner, depending on goals.
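As a sketch of how the two axes might be turned into a reputation score (the update rule and weights are assumptions chosen for illustration, not part of the thesis model):

```python
def update_axis(current, observation, rate=0.2):
    """Exponential moving average, so recent interactions weigh more."""
    return (1 - rate) * current + rate * observation

def reputation(trust_disgust, surprise_anticipation, w_contract=0.7):
    """Fuse the two axes into one reputation score. The weights are
    hypothetical; contractual behavior counts more than conformance
    to emergent, non-contractual norms."""
    return w_contract * trust_disgust + (1 - w_contract) * surprise_anticipation

# Axis values lie in [-1, 1]: +1 is full Trust / Anticipation,
# -1 is full Disgust / Surprise. An agent that honors its contracts
# (+1 on Trust-Disgust) but often violates informal norms:
td = sa = 0.0
for kept_contract, followed_norms in [(1.0, -0.5), (1.0, -0.5), (1.0, 0.5)]:
    td = update_axis(td, kept_contract)
    sa = update_axis(sa, followed_norms)
score = reputation(td, sa)
```

Fusing such scores from multiple observers is then a matter of averaging (centralized) or gossiping (distributed) the per-agent axis values.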
Another possible extension of the model lies in the field of multi-agent systems: endowing them with collective emotions, where, based on the law of universal attraction, agents are able to influence each other's states through proximity. This would yield an overall state of the system. For example, if one or a few agents are damaged or not functioning properly, this would go unnoticed; however, if there is something wrong with the network, the overall emotion displayed would be poor. This could be used in self-organizing systems (such as ambient intelligence applications) to give a non-invasive diagnosis of the system (both overall and for individual devices); different emotions could be used to make users aware of different classes of problems. The same mechanism could be used by artificial agents that are part of the same network to signal states among themselves. It could also serve as an indicator of how devices interact and influence each other (if emotion influence happens during direct interaction, rather than through the dissemination proposed earlier).
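A gravity-style influence rule along these lines might be sketched as follows (the constant, the "emotional masses" and the update loop are illustrative assumptions):

```python
def attraction(e_i, e_j, m_i, m_j, distance, g=0.1):
    """Gravity-like influence of agent j on agent i in emotion space:
    the pull is directed toward j's state, scaled by the product of the
    agents' 'emotional masses' over the squared distance (the law of
    universal attraction transplanted to emotion space)."""
    pull = g * m_i * m_j / (distance ** 2)
    return [pull * (ej - ei) for ei, ej in zip(e_i, e_j)]

# A malfunctioning device broadcasting a negative state is gradually
# pulled back by a healthy neighborhood; if the whole network were
# unhealthy, no such correction would occur and the poor overall
# emotion would flag a system-wide problem.
sick = [-0.8, 0.0, 0.0, 0.0]
healthy = [0.6, 0.2, 0.0, 0.1]
for _ in range(50):
    force = attraction(sick, healthy, m_i=1.0, m_j=3.0, distance=2.0)
    sick = [s + f for s, f in zip(sick, force)]
```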
Yet another application is the integration of the model, together with a natural language processing unit capable of extracting emotion from text, into social networks. This would serve to analyze how the mood of a message influences a person's emotional state and how emotional states diffuse through human networks, granting new insight into how emotions work.
The model could also be used in the psychological domain, for example to allow medical professionals to play out different scenarios for patients in a virtual reality environment in order to study their reactions and prescribe treatment, as well as to enable them to create scenarios that teach people how to express their emotions by means of example.
As emotions have been shown to establish the importance of events and details around us, and we have shown that the same can be applied to artificial systems, the Newtonian Emotion System could be used in time-critical information fusion applications to establish information integration priority (or belief maintenance priority in artificial agents).
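Such a priority scheme could be sketched as follows, using the magnitude of the attached emotion vector as integration priority (the observations and the unbounded queue are illustrative):

```python
import heapq

def intensity(emotion):
    """Magnitude of the emotional response attached to a piece of
    information, used here as its integration priority."""
    return sum(e * e for e in emotion) ** 0.5

def fuse_in_priority_order(items):
    """Yield payloads most-emotionally-salient first. `items` holds
    (emotion_vector, payload) pairs; a time-critical system would keep
    a bounded heap and drop low-salience items under load."""
    heap = [(-intensity(e), i, p) for i, (e, p) in enumerate(items)]
    heapq.heapify(heap)
    while heap:
        _, _, payload = heapq.heappop(heap)
        yield payload

observations = [([0.1, 0.0, 0.0, 0.0], "ambient noise"),
                ([0.9, 0.4, 0.0, 0.2], "predator sighted"),
                ([0.3, 0.1, 0.1, 0.0], "food nearby")]
order = list(fuse_in_priority_order(observations))
```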
Mood has been shown to be contagious in humans, so a large number of applications will probably want to take advantage of this and attempt to influence users in order to put them in a more agreeable mood, or to ensure that they are happy and enjoying themselves while using a certain product or service. The technology could further be integrated with ambient intelligence applications to influence people's mood in order to make them more productive, happier or more accepting. Coupled with an emotion reading system, the model could serve as an indicator of the state of an area (i.e. mental health or quality of life), as well as give an idea of how neighboring areas interact and influence each other.
Appendix A
Plutchik emotion model details
This appendix presents two figures that give some details about Plutchik's emotion model (the model on which the Newtonian Emotion System presented in this thesis is based).
Dyads
In our emotion representation model, emotions are created as a mix of the eight basic emotions. Each pair of two (dominant) emotions yields a new emotion. These pairs are called dyads. Figure A.1 presents the names given to dyads in Plutchik's emotion model: first-, second- and third-degree dyads are shown.
1. Image source: http://danspira.files.wordpress.com/2010/12/plutchik-emotional-dyads.jpg
Figure A.1: Plutchik extended dyads
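For reference, the primary (first-degree) dyads shown in the figure can be captured as a symmetric lookup from a pair of dominant basic emotions to the name of their mix; extending the table to second- and third-degree dyads is analogous:

```python
# Primary (first-degree) dyads in Plutchik's wheel: each pair of
# adjacent basic emotions combines into a named mixed emotion.
PRIMARY_DYADS = {
    frozenset({"Joy", "Trust"}): "Love",
    frozenset({"Trust", "Fear"}): "Submission",
    frozenset({"Fear", "Surprise"}): "Awe",
    frozenset({"Surprise", "Sadness"}): "Disapproval",
    frozenset({"Sadness", "Disgust"}): "Remorse",
    frozenset({"Disgust", "Anger"}): "Contempt",
    frozenset({"Anger", "Anticipation"}): "Aggressiveness",
    frozenset({"Anticipation", "Joy"}): "Optimism",
}

def dyad(a, b):
    """Name of the mixed emotion formed by two dominant basic emotions,
    or None if the pair is not a primary dyad."""
    return PRIMARY_DYADS.get(frozenset({a, b}))
```

Using frozensets makes the lookup order-independent, matching the symmetry of the wheel.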
Appraisal
Figure A.2 presents some examples of event appraisals within Plutchik's emotion theory. The chart shows the triggering event (stimulus event), the significance of the event to the subject (cognitive appraisal), the emotion felt (subjective reaction), the subject's ensuing behavior (behavioral reaction) and the function it serves (function).
Figure A.2: Plutchik appraisal examples
2. Image source: http://thumbnails.visually.netdna-cdn.com/robert-plutchiks-psychoevolutionary-theory-of-basic-emotions_50290c02eeb7e_w587.jpg
Bibliography
[1] Ruth Aylett and Michael Luck. Applying artificial intelligence to virtual
reality: Intelligent virtual environments. In Applied Artificial Intelligence 4,
pages 3–32, 2000.
[2] Andrew Ortony, Gerald Clore, and Allan Collins. The cognitive structure of emotions. Cambridge University Press, Cambridge, 1988.
[3] Eric Chown, Randolph M. Jones, and Amy E. Henninger. An architecture for emotional decision-making agents. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 352–353, Bologna, Italy, July 2002.
[4] Magy Seif El-Nasr, Thomas R. Ioerger, and John Yen. Peteei: a pet with evolving emotional intelligence. In Proceedings of the Third Annual Conference on Autonomous Agents, pages 9–15, Seattle, Washington, United States, April 1999.
[5] Ronald de Sousa. Emotion. Stanford Encyclopedia of Philosophy. Online, available at http://plato.stanford.edu/entries/emotion/, September 2010.
[6] Robert Plutchik. Emotion: Theory, research, and experience: Vol. 1. Theories of emotion. Academic Press, New York, NY, 1990.
[7] Richard S. Lazarus. Emotion and Adaptation. Oxford University Press, New
York, NY, 1991.
[8] Alex J. Champandard. Trends and highlights in game ai for 2011. Online, February 2012. http://aigamedev.com/insider/discussion/2011-trends-highlights.
[9] Klaus Scherer. What are emotions and how can they be measured? Social
Science Information, 44(4):695–729, 2005.
[10] Jill Fain Lehman, John Laird, and Paul Rosenbloom. A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg and D. Scarborough (Eds.), Invitation to Cognitive Science. MIT Press, 1996.
[11] Stanley Schachter and Jerome Singer. Cognitive, social, and physiological
determinants of emotional state. Psychological Review, 69:379–399, 1962.
[12] Tali Sharot and Elizabeth A. Phelps. How arousal modulates memory: Disentangling the effects of attention and retention. Cognitive, Affective and
Behavioral Neuroscience, 4:294–306, 2004.
[13] Elizabeth A. Kensinger. Remembering emotional experiences: The contribution of valence and arousal. Reviews in the Neurosciences, 15:241–251,
2004.
[14] Alafair Burke, Friderike Heuer, and Daniel Reisberg. Remembering emotional events. Memory and Cognition, 20:277–290, 1992.
[15] Amy E. Henninger, Randolph M. Jones, and Eric Chown. Behaviors that emerge from emotion and cognition: implementation and evaluation of a symbolic-connectionist architecture. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 321–328, Melbourne, Australia, 2003. ACM, New York, NY.
[16] David Pereira, Eugénio Oliveira, Nelma Moreira, and Luís Sarmento. Towards an architecture for emotional BDI agents. In EPIA 05: Proceedings of the 12th Portuguese Conference on Artificial Intelligence, pages 40–47. Springer, 2005.
[17] Martin Strauss. Eric: a generic rule-based framework for an affective embodied commentary agent. In AAMAS 08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, pages 97–104, 2008.
[18] Patrick Gebhard. Alma - a layered model of affect. In 4th International Joint
Conference of Autonomous Agents and Multi-Agent Systems (AAMAS 05),
2005.
[19] Magalie Ochs, Nicolas Sabouret, and Vincent Corruble. Simulation of the dynamics of non-player characters' emotions and social relations in games. IEEE Transactions on Computational Intelligence and AI in Games, 1(4):281–297, December 2009.
[20] André Pereira, Carlos Martinho, Iolanda Leite, and Ana Paiva. icat, the chess player: the influence of embodiment in the enjoyment of a game. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, volume 3 of AAMAS '08, pages 1253–1256, Richland, SC, 2008. International Foundation for Autonomous Agents and Multiagent Systems.
[21] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. Emotions as heuristics in multi-agent systems. In Proceedings of the 1st Workshop on Emotion and Computing - Current Research and Future Impact, pages 15–18, 2006.
[22] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. A logic of emotions for intelligent agents. In Proceedings of the 22nd National Conference on Artificial Intelligence - Volume 1, pages 142–147. AAAI Press, 2007. ISBN 978-1-57735-323-2.
[23] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. The OCC model revisited. In Proceedings of the 4th Workshop on Emotion and Computing - Current Research and Future Impact, 2009.
[24] Valentin Lungu. Artificial emotion simulation model. In 7th Workshop on Agents for Complex Systems (ACSYS 2010) held in conjunction with the 12th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing - SYNASC 2010, Timisoara, Romania, 2010.
[25] Jeff Rickel and W. Lewis Johnson. Steve: An animated pedagogical agent
for procedural training in virtual environments (extended abstract). SIGART
Bulletin, 8:16–21, 1997.
[26] Eva Hudlicka. Affective computing for game design. In Proceedings of the
4th Intl. North American Conference on Intelligent Games and Simulation
(GAMEON-NA), pages 5–12, 2008.
[27] The sims video game. Online, available at http://thesims.ea.com, October
2009.
[28] Fahrenheit video game. Online, available at http://en.wikipedia.org/wiki/
Fahrenheit (video game), October 2009.
[29] Itspoke: An intelligent tutoring spoken dialogue system. Online, available at http://www.cs.pitt.edu/~litman/itspoke.html, October 2009.
[30] Roddy Cowie. Emotion oriented computing: State of the art and key challenges. HUMAINE Network of Excellence. Online, available at http://emotionresearch.net/projects/humaine/aboutHUMAINE/HUMAINE%20white%20paper.pdf, October 2010.
[31] Jeff Rickel and W. Lewis Johnson. Animated agents for procedural training in
virtual reality: Perception, cognition, and motor control. In Applied Artificial
Intelligence 13, pages 343–382, 1999.
[32] Kevin Gurney. An Introduction to Neural Networks. CRC Press, 1997.
[33] James Kennedy and Russell C. Eberhart. Swarm Intelligence. Morgan Kaufmann, 2001.
[34] Valentin Lungu and Angela Şofron. Using particle swarm optimization to create particle systems. Proceedings of CSCS-18 18th International Conference
on Control Systems and Computer Science, 2:750–754, May 2011.
[35] Valentin Lungu. Rule-based system for emotional decision-making agents. In
Sisteme distribuite ISSN 2067-5259, Suceava, 2009. online.
[36] Unity Technologies. Unity 3d game engine reference manual. Online, 2012. available at http://docs.unity3d.com/Documentation/Components/index.html.
[37] Aron Granberg. A* pathfinding project. Online, 2012. available at http://www.arongranberg.com/unity/a-pathfinding.
[38] Arges Systems. Unitysteer - steering components for unity. Online, 2009. available at http://arges-systems.com/blog/2009/07/08/unitysteer-steering-components-for-unity/.
[39] Craig Reynolds. Steering behaviors for autonomous characters. Online, 1999.
available at http://www.red3d.com/cwr/steer/.
[40] Jans W. Carton. The hypertext d20 system reference document. Online,
2012. available at http://d20srd.org.
[41] Alex J. Champandard. Understanding behavior trees. Online, September
2007. http://aigamedev.com/open/article/bt-overview/.
[42] Alex J. Champandard. Behavior trees for next-gen game ai (video, part 1). Online, December 2007. http://aigamedev.com/open/article/behavior-trees-part1/.
[43] Alex J. Champandard. Behavior trees for next-gen game ai. Online, December
2008. http://aigamedev.com/insider/presentations/behavior-trees/.
[44] Valentin Lungu. Newtonian emotion system. In Proceedings of the 6th International Symposium on Intelligent Distributed Computing - IDC 2012, volume VI, pages 307–315. Springer Series on Intelligent Distributed Computing,
September 2012.
[45] Christoph Bartneck. Integrating the OCC model of emotions in embodied characters. Design, pages 1–5, 2002.
[46] Sara de Freitas and Steve Jarvis. Serious games engaging training solutions:
A research and development project for supporting training needs. British
Journal of Educational Technology, 38(3):523–525, May 2007.
[47] Bas R. Steunebrink, Mehdi Dastani, and John-Jules Ch. Meyer. Towards a quantitative model of emotions for intelligent agents. In KI'07 Workshop on Emotion and Computing - Current Research and Future Impact, 2007.
[48] A.D. Craig. Interoception: The sense of the physiological condition of the
body. Current Opinion in Neurobiology, 13:500–505, 2003.
[49] Pat Langley, John E. Laird, and Seth Rogers. Cognitive architectures: Research issues and challenges. Technical report, University of Michigan, 2002.
[50] John E. Laird, Allen Newell, and Paul S. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33:1–64, 1987.
[51] Soar Technology. Soar: Along the frontiers. Online, available at www.soartech.com, 2002.
[52] Randolph M. Jones. An introduction to cognitive architectures for modeling
and simulation. In Interservice/Industry Training, Simulation, and Education
Conference (I/ITSEC), 2004.
[53] Valentin Lungu. Artificial emotion simulation model and agent architecture.
Advances in Intelligent Control Systems and Computer Science, 187:207–221,
2013.
[54] Oliver Duvel and Stefan Zerbst. 3D Game Engine Programming. Course
Technology PTR, 2004.
[55] Jason Gregory. Game Engine Architecture. A.K. Peters, 2009.
[56] Stuart J. Russell and Peter Norvig. Artificial Intelligence – A Modern Approach. Prentice Hall, Englewood Cliffs, New Jersey 07632, 1995.
[57] Adina Magda Florea and Adrian George Boangiu. Bazele logice ale inteligentei artificiale. Universitatea Politehnica Bucuresti, 1994.
[58] Alain Mackworth David Poole and Randy Goebel. Computational Intelligence: a Logical Approach. Oxford University Press, 1998.
[59] Ronald Brachman and Hector Levesque. Knowledge Representation and Reasoning. Morgan Kaufman, 2004.
[60] Gerard Tel. Introduction to Distributed Algorithms. Cambridge University
Press, 1994.
[61] Mordechai Ben Ari. Principles of concurrent and distributed programming.
Prentice Hall, N.Y., 1990.
[62] Wolfgang F. Engel. Shader X2 Introductions and Tutorials with DirectX 9.
Wordware Publishing Inc., 2004.
[63] Randima Fernando. GPU Gems: Programming Techniques, Tips and Tricks
for Real-Time Graphics. Pearson Education, 2004.
[64] Matt Pharr. GPU Gems 2: Programming Techniques for High-Performance
Graphics and General-Purpose Computation. Pearson Education, 2005.
[65] David Eberly. The 3D Game Engine Architecture: Engineering Real-time
Applications with Wild Magic. Morgan Kaufmann, 2005.
[66] David Eberly. 3D Game Engine Design: A Practical Approach to Real-Time
Computer Graphics. Morgan Kaufmann, 2006.
[67] Munindar P. Singh. Agent communication languages: Rethinking the principles. Computer 31, 12:40–47, 1998.
[68] David Pereira, Eugenio Oliveira, and Nelma Moreira. Formal modelling of emotions in BDI agents. Computational Logic in Multi-Agent Systems: 8th International Workshop, CLIMA VIII, 2008.
[69] Eugenio Oliveira and Luis Sarmento. Emotional advantage for adaptability
and autonomy. In Proceedings of the Second international Joint Conference
on Autonomous Agents and Multiagent Systems, pages 305–312, Melbourne,
Australia, 2003. ACM, New York, NY.
[70] Daniel Moura and Eugenio Oliveira. Fighting fire with agents: an agent coordination model for simulated firefighting. In Proceedings of the 2007 Spring
Simulation Multiconference, pages 71–78, San Diego, CA, 2007. Society for
Computer Simulation International. Volume 2.
[71] David Pereira, Eugénio Oliveira, and Nelma Moreira. Modelling emotional BDI agents. In Workshop on Formal Approaches to Multi-Agent Systems (FAMAS 2006), 2006.
[72] Nick Yee. The labor of fun: How video games blur the boundaries of work
and play. Games and Culture, 1:68–71, 2006.
[73] Michael Wooldridge and Nicholas R. Jennings. Pitfalls of agent-oriented development. In Proceedings of the Second International Conference on Autonomous Agents, pages 385–391. ACM Press, 1998.
[74] Michael P. Georgeff, Barney Pell, Martha E. Pollack, Milind Tambe, and Michael Wooldridge. The belief-desire-intention model of agency. In Proceedings of the 5th International Workshop on Intelligent Agents V, Agent Theories, Architectures, and Languages, 1999.
[75] H. Van Dyke Parunak, Robert Bisson, Sven Brueckner, Robert Matthews, and John Sauter. A model of emotions for situated agents. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 993–995, New York, NY, 2006. ACM.
[76] Amy E. Henninger, Randolph M. Jones, and Eric Chown. Behaviors that
emerge from emotion and cognition: A first evaluation. In Interservice/Industry Training Systems and Education Conference(I/ITSEC), 2002.
[77] Joseph Bates. The role of emotion in believable agents. In Commun, pages
122–125. ACM 37, 1994.
[78] Sara de Freitas and Steve Jarvis. A framework for developing serious games
to meet learner needs. In Interservice/Industry Training, Simulation, and
Education Conference (I/ITSEC), 2006.
[79] Xiao Mao and Zheng Li. Implementing emotion-based user-aware e-learning.
In CHI 2009, Spotlight on Works in Progress, 2009.
[80] Marc Schröder, Hannes Pirker, and Myriam Lamolle. First suggestions for an emotion annotation and representation language. In Proceedings of LREC'06 Workshop on Corpora for Research on Emotion and Affect, pages 88–92, Genoa, Italy, 2006.
[81] Marc Schröder, Laurence Devillers, Kostas Karpouzis, Jean-Claude Martin, Catherine Pelachaud, C. Peter, Hannes Pirker, B. Schuller, J. Tao, and I. Wilson. What should a generic emotion markup language be able to represent? In Second International Conference on Affective Computing and Intelligent Interaction: Lecture Notes in Computer Science. Springer, 2007.
[82] Athanasios S. Drigas, Leyteris G. Koukianakis, and John G. Glentzes. Knowledge engineering and a virtual lab for hellenic cultural heritage. In Proceedings
of the 5th WSEAS Int. Conf. on Artificial Intelligence, 2006.
[83] Michael David and Chen Sande. Serious games: Games that educate, train,
and inform, 2006.
[84] Obe Hostetter. Video games - the necessity of incorporating video games as
part of constructivist learning. James Madison University, 2002. Department
of Educational Technology.
[85] W3C Incubator Group. Elements of an emotionml 1.0. Online, 2008. available
at http://www.w3.org/2005/Incubator/emotion/XGR-emotionml/.
[86] HUMAINE Network of Excellence. Humaine emotion annotation and representation language (earl). Online, 2009. available at http://emotion-research.net/projects/humaine/earl/index.html/?searchterm=emotion%20markup%20language.
[87] Silvia Schiaffino, Isabela Gasparini, Analía Amandi, and Marcelo S. Pimenta. Personalization in e-learning: the adaptive system vs. the intelligent agent approaches. In Proceedings of the VIII Brazilian Symposium on Human Factors in Computing Systems, IHC '08, pages 186–195, Porto Alegre, Brazil, 2008. Sociedade Brasileira de Computação.
[88] Daniele Nardi and Ronald Brachman. An introduction to description logics. Online, 2010. available at http://www.inf.unibz.it/~franconi/dl/course/dlhb/dlhb-01.pdf.