AI & Soc
DOI 10.1007/s00146-014-0557-4
ORIGINAL ARTICLE
Simulating effects of signage, groups, and crowds on emergent
evacuation patterns
Mei Ling Chu • Paolo Parigi • Jean-Claude Latombe • Kincho H. Law
Received: 11 December 2013 / Accepted: 18 August 2014
© Springer-Verlag London 2014
Abstract Studies of past emergency events have revealed
that occupants' behaviors, egress signage systems, local
geometry, and environmental constraints affect crowd
movement and govern building evacuation. In addition
to complying with codes and standards, building designers
need to consider the occupants’ social characteristics and
the unique layout of the buildings to design occupant-centric egress systems. This paper describes an agent-based
egress simulation tool, SAFEgress, which incorporates
important human and social behaviors observed by
researchers in safety and disaster management. Agents in
SAFEgress are capable of perceiving building emergency
features in the virtual environment and deciding their
behaviors and navigation. In particular, we describe four
agent behavioral models, namely following familiar exits,
following cues from building features, navigating with
social groups, and following crowds. We use SAFEgress to
study how agents (mimicking building occupants) react to
different signage arrangements in a modeled environment.
We explore agents' reactions to cues as an emergent phenomenon, shaped by the interactions among groups and
crowds. Simulation results from the prototype reveal that
different designs of building emergency features and levels
of group interactions can trigger different crowd flow
patterns and affect overall egress performance. By considering the occupants' perception of the emergency
features using the SAFEgress prototype, engineers,
designers, and facility managers can study the human
factors that may influence an egress situation and, thereby,
improve the design of egress systems and procedures.
Keywords Crowd simulation · Egress simulation · Building egress · Social agents · Social behavior · Collective behavior · Simulated perception

M. L. Chu (corresponding author) · K. H. Law
Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305-4020, USA

P. Parigi
Sociology Department, Stanford University, Stanford, CA 94305-4047, USA

J.-C. Latombe
Kumagai Professor Emeritus, Computer Science Department, Stanford University, Stanford, CA 94305-4010, USA
1 Introduction
We designed a computer model (Social Agent for Egress or
SAFEgress) for studying how agents react to cues in emergency situations. Instead of treating agents as isolated atoms
reacting to emergency scenarios, we embedded them into
social groups, each defined by a unique social structure and
group norm. The agents make decisions considering group
members and neighbors, in addition to individual preferences. Moreover, each agent is equipped with the capabilities
of sensing, reasoning, memorizing, and locomotion to decide
and execute its actions. This setting allows us to explore
reactions to cues as an emergent phenomenon, shaped by the
interactions between individual preferences, group characteristics, and crowd behaviors.
Specifically, we use SAFEgress to study the impacts of
different exit signage systems within the constraints of a
given building layout. Simulation results from our
demonstration indicate that occupants’ exit preferences,
visual perception of the signage system, herding behavior,
and social behavior among groups can lead to very different reactions to cues. The results can be used to suggest
potential improvements in the placement of exit signs in
order to trigger more efficient evacuations from buildings
during emergencies. Furthermore, our model also has
applications outside the field of induced behavioral change.
For instance, SAFEgress can be used to study the effects of
human and social behaviors on collective crowd movement
patterns. Most egress simulation tools assume simplistic
behavioral rules and mostly ignore social behaviors of the
agents (Aguirre et al. 2011; Kuligowski 2011). By modeling agents with social behaviors, SAFEgress addresses
these deficiencies.
This paper is organized as follows: Sect. 2 describes the
related work in modeling human and social behaviors in
egress. Section 3 explains the SAFEgress simulation platform and the key components of the platform. Section 4
describes some examples of plausible egress behaviors in
the current prototype. Section 5 concludes the paper with
a discussion.
2 Related work
2.1 Social behaviors during emergencies
A shikake is a mechanism or a device that triggers a
behavioral change. Matsumura (2013) defines a shikake
more precisely using three interrelated factors: (1) a shikake is an embodied trigger for behavioral change; (2) the
trigger is designed to induce a specific behavior; and (3) the
behavior solves a personal or social issue. These factors
highlight that a shikake is a practical and simple mechanism that offers a solution to a (social or personal) problem
(Matsumura 2013). For example, the placing of fly targets
in urinals in airports reduced spillage by 80 % due to the
propensity of men to aim at the fly. In turn, reduced
spillage contributed toward reducing cleaning time and
water consumption (Matsumura and Fruchter 2013). The
simplicity of a shikake rests on the complexity of the
psychological or social mechanism it triggers (Rosenberg
et al. 2013; Salganik et al. 2006). In this paper, we focus
more on the latter, keeping psychological processes in the
background. Before describing how we model the social
behavior of agents, we review the previous literature on
how people react to emergency scenarios.
Post-fire studies have shown that occupants in emergencies do not act randomly, nor do they act in an identical manner
devoid of individual cognitive ability, as if they were physical
molecules (Aguirre et al. 1998; Drury et al. 2009; Sime
1983; McPhail 1991). Rather, occupants in emergencies
often base their actions on their past experience, social
structures, and perceptions and interactions with others to
define an emergent understanding of the situation. For
example, the affiliative theory (Mawson 2005; Sime 1983)
and place script theory (Tong and Canter 1985) examine
individuals’ behaviors based on their personal knowledge,
risk perceptions, experience, and routines. The emergent
norm theory (ENT) specifies that disasters may lead to
collective behavior through the process of milling and
keynoting (Turner and Killian 1987). Milling is a communication process whereby individuals in a collective
attempt to define the situation, while during keynoting,
leaders emerge, interpret the situation, and make suggestions on what to do next (McPhail 1991). Aguirre et al.
(1998) further applied ENT to explain occupants’ reactions
in the World Trade Center Explosion in 1993 and showed
that social groups and enduring social relationships could
lengthen the time of evacuation.
Emergent norm theory and the prosocial theory suggest
that people continue to maintain group structure and
behave in a prosocial manner during emergencies (Aguirre
et al. 2011). The social identity theory suggests that people
have a tendency to categorize themselves into one or more
‘‘in-groups,’’ building their identity in part on their membership in the groups and enforcing boundaries with other
groups (Drury et al. 2009). Moreover, studies in sociology
and psychology suggest that people influence each other’s
behaviors through the spreading of information and emotions (Rydgren 2009; Hoogendoorn et al. 2010).
Researchers in safety and disaster management have
proposed theoretical frameworks that describe the processes of seeking information, interpreting the situation,
assessing the risk, and making decisions specifically in
response to a disaster. For example, Lindell and Perry
(2011) applied the Protective Action Decision Model
(PADM) to examine the disaster response of occupants in
residential fires and study the effect of warning mechanisms on evacuation time. Based on the PADM framework,
Kuligowski (2011) studied the actions taken during the pre-evacuation period of the 9/11 World Trade Center (WTC)
attacks and developed a model to qualitatively describe
how occupants made their decisions to evacuate. Reneke
(2013) proposed the evacuation decision model to predict
the state of the occupants by modeling the level of risk
perception and the effect of knowledge, social influence,
and alarm as they occur over time during the pre-evacuation period. These frameworks and models synthesize
human behaviors in emergencies as process models that
can be systematically analyzed further by incorporating
factors, such as threats, social relationships, and personal
experience, to determine the outcome of evacuation.
In light of prior studies, we conjecture that creating a
shikake for egress will require individual-, group-, and
crowd-level characteristics. At the individual level, occupants may refer to their past experiences and knowledge
and their perceptions of the situation to decide on their
actions. At the group level, the preexisting social structure
(relationships between group members) and group norms
(expectations of each other’s behavior) affect the behavior
of an individual. Crowd-level behaviors are emergent
phenomena and often follow social norms.
2.2 Current crowd simulation approaches
Different crowd modeling approaches, such as the particle
(Helbing et al. 2000; Moussaïd et al. 2011), cellular automata (Burstedde et al. 2001), and agent-based systems (Lin
et al. 2010; Galea et al. 1998; Durupinar et al. 2011; Musse
and Thalmann 2001; Aguirre et al. 2011), have been
adopted into various simulation software to model crowd
movement in virtual environments. Zheng et al. (2009)
have provided detailed reviews of the different simulation
models. The following discussion focuses on the agent-based approach, which is adopted in the implementation of
SAFEgress.
Agent-based systems model the crowd as a collection of
autonomous entities known as ‘‘agents’’ to represent the
human occupants. These systems allow emergent phenomena as a result of interactions among the virtual agents.
Many egress models have recently adopted this approach
and proposed different representations of the spatial environment and the agents. One common way of representing
the spatial environment is dividing the space into a 2D
array of cells where each cell contains up to a certain
number of agents (Lin et al. 2010; Galea et al. 1998). While
the grid-based spatial representation benefits from its
computational efficiency, the representation limits agents’
spatial movements and can potentially show an unnatural
checkerboard pattern when crowd density is high. Another
approach is to represent the spatial environment as a continuous space that allows agents to navigate naturally on a
continuous plane while considering constraints imposed by
the physical geometry of the building (Durupinar et al.
2011; Musse and Thalmann 2001). Our simulation framework uses the continuous spatial representation which
allows a wider array of locomotions of the agents as well as
the simulation of high-density crowd scenarios, such as
overcrowding and pushing at exits (Aguirre et al. 2011).
In most agent-based systems, the agent navigation routes
are usually pre-defined by specifying explicitly the origins
and destinations of the occupants (Aguirre et al. 2011;
Turner and Penn 2002). Optimal routes (usually defined in
terms of travel time or distance) are obtained by assuming
that the agents have good, often perfect, knowledge of the
environment. Examples are the way-finding model in
EXODUS (Veeraswamy et al. 2009) and the simulation
model proposed by Kneidl et al. (2013). Other agent-based
systems model an agent’s navigation decision as the outcome of decision-making processes, rather than pre-defined
or optimized routes. For example, ViCrowd (Musse and
Thalmann 2001) is a crowd simulation tool in which crowd
behaviors are modeled as scripted behaviors, as a set of
dynamic behavioral rules using events and reactions, or as
externally controlled behaviors in real time. MASSEgress
(Pan 2006) gauges an agent's urgency level, evaluates
behavior models represented as decision trees, and invokes
a particular behavior to determine the navigation target.
These models consider agents’ behaviors as a perceptive
and dynamic process subjected to external changes. We
also adopt the perceptive approach in SAFEgress when
updating the agents’ behaviors.
As noted by Kuligowski and Peacock (2005), a wide
variety of computational tools for egress simulation are
available; however, human and crowd behaviors are often
ignored and group effects on evacuation patterns are seldom explored (Challenger et al. 2009; Aguirre et al. 2011).
Only recently have efforts been made to incorporate
social behaviors into egress simulations. For example, Tsai
et al. (2011) implemented exit knowledge, families, and
emotional contagion on evacuation and evaluated the
impacts of emotional and informational interactions
between agents. Similarly, Aguirre et al. (2011) described
an agent-based model that attempts to implement the prosocial model in simulating emergency evacuations. Features, such as leaders and followers within a group, have
been implemented to simulate population at a group level
and observe emergent patterns as a result of social relationships. Our model extends the notion of preexisting
social relationships by defining groups with several salient
attributes, such as intimacy level and group influence.
Furthermore, we incorporate the effect of neighboring
crowds on individuals and investigate crowd behaviors,
such as herding, on the evacuation patterns.
2.3 Model of spatial representation in simulations
People's knowledge and memory of a space have a significant effect on their route choices. For example, when the
desirable destinations (such as the entrance of the building)
are not immediately visible, people refer to external
information (such as signage) or memory of a specific route
(such as following the paths which they traveled before) to
determine their travel directions (Gärling et al. 1986).
Moreover, researchers in environmental and cognitive
psychology have argued that evacuees use their perceptions
to guide their navigation (Gärling et al. 1986; Turner and
Penn 2002). With proper spatial representation of the
environment, Turner and Penn (2002) have shown that
natural human movement can be reproduced in simulations
without the need to assign the agents extra information about the locations of destinations and escape routes.
To simulate the spatial cognitive capability of the
agents, a proper representation of the spatial connectivity
that can be used for navigation by the agents is needed
(Turner and Penn 2002). The spatial connectivity is often
represented as a navigation graph or a road map. A variety
of techniques have been proposed to create a navigation
graph from a given building geometry (Latombe 1991).
Many space discretization techniques have been used to
derive a navigation graph, such as Voronoi diagrams, visibility graphs, and approximate cell decomposition (Choset
2005). In our work, we adopt a visibility graph derived
based on the physical geometry and egress features of the
building to represent the spatial connectivity of a floor
(Chu et al. 2014). The visibility graph is used in SAFEgress
primarily as a representation of the continuous space to
allow the agents to perceive possible areas to explore,
rather than as a navigation guide that dictates the movement of the agents.
3 A simulation framework for modeling human
and social behaviors
3.1 System architecture

SAFEgress is an agent-based model designed to simulate human and social behaviors, as well as emerging crowd behaviors, during evacuations. In the following sections, we first provide an overview of the SAFEgress framework and describe each major module of the system. We then briefly discuss the spatial representation, followed by the agent representation and the attributes used to model occupants in an emergency situation. Details of the system and the individual components have been described elsewhere (Chu et al. 2014; Chu and Law 2013).

Figure 1 depicts the system architecture of SAFEgress. The key modules of the framework are the Global Database, Crowd Simulation Engine, and Agent Model, while the supporting sub-modules include the Situation Data Input Engine, Geometric Engine, Event Recorder, Population Generator, and Visualizer.

Fig. 1 System architecture of SAFEgress (Chu and Law 2013)

• The Global Database stores all the information about the agent population, the physical geometries, and the status of emergency situations. It maintains the state information (such as mental states, behavioral decisions, and locations) of the agents.
• The Crowd Simulation Engine is the key module of the system. It interacts closely with the Agent Behavior Models Database, keeps track of the simulation, and records and retrieves information from the Global Database. The generated simulation results are sent to the Event Recorder and the Visualizer.
• The Agent Behavior Models Database contains the individual, group, and crowd behavioral models. Besides the default behavioral models, new models can be created by users to investigate a range of behaviors under different scenarios.
• The Situation Data Input Engine contains the properties of emergency cues and threats, such as fire alarms, smoke, and fire, which the virtual agents perceive during the simulation.
• The Geometric Engine maintains the spatial information about a facility, such as the physical geometry, exit signs, and openings. A virtual 3D model is built based on the spatial information and is used for collision avoidance and agent perception, as well as for the visualization of simulation results.
• The Event Recorder stores the simulation results at each time step. The results can be retrieved for further analysis, such as identifying congestion areas and exit usages. The events captured can also be compared with known and archived scenarios.
• The Population Generator receives input assumptions about the agent population and generates the agents using physical (such as age, mobility, and physical size) and behavioral profiles. This module can also generate both pre-defined and random social groups to study different social behaviors.
• The Visualizer, currently implemented using OpenGL, receives the positions of the agents, overlays them on the virtual 3D model, and then dynamically generates and displays simulation results as 2D/3D visual images.

The modular simulation framework allows investigation of crowd dynamics and incorporation of different behavioral models. Diverse populations of individuals and groups can be modeled, and emergent collective behaviors can be simulated. In particular, efficient computational algorithms (such as those for detecting proximity and spatial visibility) have been carefully designed to allow simulations with a large number of agents.
3.2 Hierarchical space representation
Local building geometry, spatial arrangement of safety
signage, and occupants’ previous experience and familiarity with the buildings can significantly influence the
choice of egress routes in emergencies. We design a space
model to represent the virtual environment such that the
agents can perform the following tasks:
• move naturally by avoiding collision with physical obstacles and walls;
• detect visible building features such as exit signs and door openings;
• support cognitive abilities of the agents, such as reasoning and acquiring knowledge of the building layouts.
As shown in Fig. 2, the proposed hierarchical space model consists of three layered components: a continuous movement space, a semantic representation of the building features, and a visibility graph. Each component of the hierarchical space model is discussed further in the following sections. For computational efficiency, the space model is built prior to simulation and, once constructed, is used throughout the simulation, unless changes are made to the building layout that necessitate an update to the space model.

Fig. 2 Three components of the hierarchical space model. a Continuous movement space. b Representation of building safety features. c Visibility map
3.2.1 Continuous movement space

SAFEgress represents the spatial environment as a continuous space (as shown in Fig. 2a) that the agents navigate. A typical floor space includes physical obstacles, such as walls and furniture. Agents navigate the virtual space and avoid colliding with physical obstacles. Using the user-inputted building geometry, which describes the locations and the dimensions of the physical objects, such as walls and doors, the obstacle model is built to enable the agents to ''sense'' the physical surrounding and the visible space. To construct the obstacle model, the boundary surfaces of each 3-dimensional physical obstacle are represented as a set of polygon planes. Using the obstacle model, an agent performs two basic tests: (1) collision tests to determine its separating distances from nearby obstacles, and (2) visibility tests to determine whether any given point in the virtual space is visible to the agent.
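The two obstacle-model queries described above (collision and visibility tests) reduce to geometric tests against the polygonal obstacle boundaries. The following is a minimal 2D sketch of that idea, assuming obstacles are given as wall segments; the function names and data layout are illustrative and not the SAFEgress implementation.

```python
# Minimal 2D sketch of the two basic obstacle-model queries described above.
# Assumption: obstacles are approximated by their wall segments in the plane;
# names and data layout are illustrative, not the SAFEgress implementation.
from math import hypot

def _orient(o, u, v):
    """Signed area of the triangle (o, u, v); the sign gives the turn direction."""
    return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

def segments_cross(p, q, a, b):
    """True if segment p-q strictly crosses segment a-b."""
    return (_orient(a, b, p) * _orient(a, b, q) < 0 and
            _orient(p, q, a) * _orient(p, q, b) < 0)

def is_visible(p, q, walls):
    """Visibility test: q is visible from p if no wall segment blocks the sight line."""
    return not any(segments_cross(p, q, a, b) for a, b in walls)

def distance_to_segment(p, a, b):
    """Collision-test helper: Euclidean distance from point p to segment a-b."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(p[0] - ax, p[1] - ay)
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return hypot(p[0] - (ax + t * dx), p[1] - (ay + t * dy))

def clearance(p, walls):
    """Separating distance from p to the nearest wall segment."""
    return min(distance_to_segment(p, a, b) for a, b in walls)

# Example: a single wall between the agent and a door blocks visibility.
walls = [((0.0, 5.0), (10.0, 5.0))]
print(is_visible((5.0, 0.0), (5.0, 10.0), walls))  # False: the wall blocks the sight line
print(clearance((5.0, 0.0), walls))                # 5.0
```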
3.2.2 Semantic representation of building safety features

In an emergency situation, people observe relevant building features, such as exits and exit signs, to guide them to safety. These safety features provide additional information to the agents, such as the possible directions of travel leading to exit or outlet options. As illustrated in Fig. 2b, three safety features (namely exits, exit signs, and doors) are included in the space model.

• Exit: The exit objects represent the outlets of the floor. The agents are equipped to visibly detect the exit objects. If an agent decides to escape through a particular exit object, the agent navigates toward the location of the exit object. Once it reaches the exit, the agent is considered to have physically exited the floor space. The attributes describing an exit object are its spatial location and angle of orientation.
• Exit Sign: The exit sign objects represent the exit signs installed in a building as part of the egress system. The signs can be either directional or non-directional. Non-directional signs are attraction points for agents to move close to. A directional exit sign includes additional navigation direction. As an agent detects and decides to follow an exit sign, the agent extracts and follows the directional information as posted on the sign. The attributes describing an exit sign object include its spatial location, angle of orientation, and optionally, the directional information (such as left or right).
• Door: The door objects are similar to exit objects in that they serve as ''attraction points'' for the agents. Unlike an exit object, which discharges the agent upon arrival, a door keeps the agent in the floor space, and the agent continues to navigate until reaching an exit object. The attributes describing a door object are its spatial location and angle of orientation.

Although the selected building safety features (namely, exit, exit sign, and door) do not represent all the possible features that are found in a building, they are the most salient features pertaining to egress design and have great influence on people's evacuation decisions.
3.2.3 Visibility graph
Navigation during an evacuation is driven by successive movements that bring the evacuee closer to the final destination (Gärling et al. 1986; Turner and Penn 2002). Even with no apparent visual cues in the surroundings, humans naturally move in a direction that allows them to move further. To emulate natural human movement, we
represent an obstacle-free space by populating the space
with navigational points. Furthermore, we construct a
visibility map to link the navigational points to represent
the connectivity in the obstacle-free space. As shown in
Fig. 3, the visibility map is constructed using the following procedure:
1. The continuous space is first discretized into square cells to form a 2D grid for computational efficiency. The cells with the building features (such as exits, doors, and windows) are identified as an initial set of navigation points (Fig. 3a).
2. For each cell on the 2D grid, we compute the area that is visible from an agent in that cell (visibility area). The cells that have the largest visibility area among their neighboring cells are identified and become navigation points. Figure 3b illustrates the navigation points constructed for a floor space.
3. Edges are added to link the navigation points that are visible to each other within a certain radius. The resulting visibility map is a graph that represents the connectivity of traversal areas in the obstacle-free space (Fig. 3c). Specifically, Fig. 3c shows the graph in which the nodes are the locations of the building safety features and the intermediate navigation points, and the edges are pairs of nodes that are visible from their locations.

Fig. 3 A procedure for generating the visibility map (Chu et al. 2014). a Subdividing the space into square cells and initializing exits as navigation points. b Adding navigation points to the cells with maximum visibility zones. c Linking the navigation points that are visible to each other within a certain radius
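A compact sketch of the three-step construction is given below. It assumes a uniform grid of cell centers, a caller-supplied line-of-sight test against the obstacle model, and a set of cells that contain building features; the names and the neighborhood radius are illustrative assumptions rather than the actual SAFEgress code.

```python
# Simplified sketch of the three-step visibility-map construction described above.
# Assumptions: `cells` are unit-spaced grid cell centers, `is_visible(p, q)` is a
# line-of-sight test against the obstacle model, and `feature_cells` holds the cells
# containing exits, doors, and signs. Names are illustrative, not the SAFEgress API.
from itertools import combinations
from math import dist

def build_visibility_map(cells, feature_cells, is_visible, link_radius):
    # Step 1: initialize the navigation points with the cells holding building features.
    nav_points = set(feature_cells)

    # Step 2: add the cells whose visibility area is a local maximum among neighbors.
    def visibility_area(c):
        return sum(1 for other in cells if is_visible(c, other))
    area = {c: visibility_area(c) for c in cells}
    for c in cells:
        neighbors = [n for n in cells if 0 < dist(c, n) <= 1.5]  # 8-connected cells
        if neighbors and area[c] >= max(area[n] for n in neighbors):
            nav_points.add(c)

    # Step 3: link navigation points that are mutually visible within the radius.
    edges = [(u, v) for u, v in combinations(nav_points, 2)
             if dist(u, v) <= link_radius and is_visible(u, v)]
    return nav_points, edges
```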
The full visibility map represents the spatial connectivity of the floor, which is customized based on the
building geometry and locations of the safety features. By
querying the visibility map with its current location, an
agent ‘‘perceives’’ the possible navigation directions in the
virtual space and makes subsequent navigation decisions.
Three basic rules are observed to define the use of the
visibility map by the agents:
Rule #1 An agent can detect the navigational points that
are within the line of sight at each simulation step.
As humans can only perceive their local obstacle-free surroundings, the virtual agents can access only the ''visible'' portion of the visibility map to decide their navigation directions. An agent queries the visibility map with its current navigation point (determined based on its current location) to identify any connecting navigation points that are visible to the agent. Figure 4 illustrates the differences in the agent's trajectories with and without the visibility graph. With the visibility map (Fig. 4a), instead of relying only on local collision avoidance with obstacles, which can cause unnatural trajectories such as walking toward walls or blockages (Fig. 4b), the agent navigates the environment by detecting visible navigational points and moving with reference to the next navigation point.

Fig. 4 Agent's trajectories navigating space with and without the visibility graph. a Agent's trajectories with the visibility graph. b Agent's trajectories without the visibility graph (relying on collision avoidance); the agent walks randomly toward dead ends and walls
Rule #2 An agent chooses intermediate navigation points
based on its navigation destinations and its knowledge of
the building.
When an agent does not have a particular navigation
destination, it chooses randomly one of the navigation
points to explore the space. When the agent has a particular
navigation destination, it selects the next navigation target
based on its knowledge of the building layout. For example, an agent having the knowledge of a familiar exit would
choose among the navigation points the one that is nearest
to the familiar exit (Gärling et al. 1986; Turner and Penn
2002). As illustrated in Fig. 5, the agent, with knowledge of the main entrance as its familiar exit, weighs the five visible navigation points and chooses the point labeled 1, which brings it closer to the main entrance. On the other hand, if an agent does not have prior knowledge of the spatial layout, then unless influenced by other information, the agent assigns equal weight to all the options and chooses a navigation target randomly.
Rule #3 An agent ‘‘memorizes’’ the traveled space to
avoid backtracking.
During the simulation, an agent can memorize the areas
traveled by registering the traveled navigation points in its
cognition module. Less weight will be assigned to the
visible navigation points that it has traveled before. By
doing so, the agent may avoid repeatedly visiting the same
area. This cognitive ability to memorize the previously
traveled areas is particularly important for generating a
natural navigation trajectory in a situation that an agent has
no prior knowledge of the environment and attempts to
explore the surroundings for exit. Figure 6 illustrates the
differences in the trajectories by an agent with and without
memory. As shown in Fig. 6a, the agent with memory
tends to explore new areas with little backtracking. In
Connecting visible navigation points from
agent’s position.
2
Agent
4
3
5
1
Direction to
the main
entrance
Main Entrance
Fig. 5 Illustration of visible navigation points from an agent (Chu
et al. 2014)
contrast, as depicted in Fig. 6b, the agent without memory
moves repeatedly back and forth to the same areas.
With the notion of the visibility map, the agents in SAFEgress can perceive the surroundings to (1) identify the obstacle-free space as visible navigation points; (2) traverse the visible navigation points and travel to a particular destination, such as the entrance used to enter the building, through intermediate navigation points that are visible to the agents; and (3) construct a working memory of the spaces that they have traveled.
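The three rules can be summarized as a weighted choice over the currently visible navigation points. The sketch below illustrates one possible weighting, assuming a known-goal bonus and a fixed memory penalty; the specific weights are assumptions for illustration, not the calibrated SAFEgress values.

```python
# Sketch of the three navigation rules: restrict choices to visible navigation points
# (Rule #1), weight them by progress toward the agent's goal when one is known
# (Rule #2), and down-weight points already visited (Rule #3). The weighting scheme
# and names are illustrative assumptions, not the exact SAFEgress formulation.
import random
from math import dist

def choose_navigation_point(position, goal, visited, nav_points, is_visible):
    # Rule #1: only navigation points in the agent's line of sight are candidates.
    candidates = [p for p in nav_points if is_visible(position, p)]
    if not candidates:
        return None

    weights = []
    for p in candidates:
        w = 1.0
        if goal is not None:
            # Rule #2: favor points that bring the agent closer to its known exit.
            w += 1.0 / (1.0 + dist(p, goal))
        if p in visited:
            # Rule #3: penalize previously traveled points to avoid backtracking.
            w *= 0.2
        weights.append(w)

    # Without a goal and without memory, all weights are equal: random exploration.
    return random.choices(candidates, weights=weights, k=1)[0]
```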
3.3 Agent representation of occupants
In SAFEgress, each individual is modeled as an autonomous agent who interacts with the dynamic environment
and with other agents. Each agent is given a set of static
and dynamic attributes to mimic the occupants. The choice
of the attributes is crucial since they implicitly determine
the range of simulation tests users can perform with
SAFEgress. We select the attributes that are deemed
important as reported by other researchers.
3.3.1 Static attributes

Static attributes are defined prior to the simulation to specify the agents' population type, experience profile, social group affiliation, and social traits. The agents' attributes, listed in Table 1, can be further categorized into three levels—individual, group, and crowd—as described below (with the static attributes shown in bold):

Table 1 Agents' static attributes at the individual, group, and crowd level
Individual: physical profile (age, gender, body size, travel speed, personal space), familiarity, known exits
Group: group intimacy level, group-seeking, group leader(s), group influence
Crowd: social order, assigned roles

• At the individual level, an agent has a physical profile, a level of familiarity (Mawson 2005) with the building, and prior known exits (Sime 1983), at least one of which is the exit through which the agent entered. The physical profile includes attributes such as age, gender, body size, travel speed, and personal space.
• At the group level, the attributes defined for social groups include a group leader (if any), the group intimacy level (e.g., high intimacy for a family group), the group-seeking property (describing agents' willingness to search for missing members), and the group influence (describing the influence of a member on the others in the same group) (Aguirre et al. 2011; McPhail 1991). The agents belonging to the same group share the same group attributes.
• At the crowd level, an agent's social position is defined by the social order, which reflects the likelihood of the agent to exhibit deference behavior (Drury et al. 2009). The lower the social order, the higher the chance that the agent defers decisions to other agents when negotiating the next move. A special agent, such as an authority figure or safety personnel, may have assigned roles and is responsible for executing actions, such as sharing information and giving instructions (Kuligowski 2011).
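For illustration, the static attributes in Table 1 could be organized as plain records at the individual, group, and crowd levels, as in the following sketch; the field names mirror the table, but the types and defaults are assumptions rather than the SAFEgress data model.

```python
# Illustrative sketch of how the static attributes in Table 1 could be grouped into
# individual-, group-, and crowd-level records. Field names mirror the table, but the
# data layout, types, and defaults are assumptions, not the SAFEgress data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GroupProfile:                 # shared by all members of a social group
    leader_id: Optional[int] = None
    intimacy_level: float = 0.5     # e.g., high for a family group
    group_seeking: float = 0.5      # desired fraction of members kept visible
    group_influence: float = 0.5    # influence of a member on the others

@dataclass
class AgentStaticAttributes:
    # Individual level
    age: int = 30
    gender: str = "unspecified"
    body_size: float = 0.5          # assumed body radius in meters
    travel_speed: float = 1.3       # meters per second
    personal_space: float = 0.8     # meters
    familiarity: float = 0.0        # familiarity with the building, 0..1
    known_exits: List[str] = field(default_factory=list)  # at least the entry exit
    # Group level (shared record)
    group: Optional[GroupProfile] = None
    # Crowd level
    social_order: float = 0.5       # likelihood of deferring to other agents
    assigned_role: Optional[str] = None  # e.g., "safety personnel"
```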
3.3.2 Process model and dynamic attributes

Based on the studies by researchers in disaster management and fire engineering about occupants' behaviors during emergencies (Lindell and Perry 2011; Kuligowski 2011), we implement a five-stage process model (perception, interpretation, decision making, execution, and memorization) to update the agents' behaviors. Each stage in the process model is implemented as an independent computational module. Table 2 summarizes the dynamic attributes that describe the perceived information and the states of an agent at each stage. During the simulation, the dynamic attribute values are updated at each process stage as described below (with dynamic attributes shown in bold):

• The Perception Module updates four attributes:
  • Emergency cues, such as smoke and alarm, that are visible or audible to the agent.
  • Visible floor objects, such as doors and signs, that are visible to the agent.
  • Visible group members that are visible to the agent.
  • Neighboring agents that are visible to and located within a certain radius from the agent.
• The Interpretation Module maps the current knowledge of the agent into a set of internal thresholds that describe the urge and well-being of the agent.
• The Decision-Making Module invokes the decision tree modeling the behavior assigned to the agent. Given the agent's characteristics and the invoked decision tree, it looks up the agent's behavior and determines the long-term navigation goal, such as the familiar exit of the agent or the location of the group leader, and the intermediate navigation point given the agent's knowledge and location.
• The Locomotion Module calculates the agent's movement toward the navigation target and returns the updated spatial position of the agent, expressed as Cartesian coordinates (x, y, and z) in the continuous space.
• The Memory Module registers the decisions made during the simulation cycle and updates the spatial knowledge. The spatial knowledge is an array storing the navigation points that the agent has visited. The agent remembers the traveled navigation points and can later refer to the spatial knowledge to avoid backtracking.

Table 2 Agents' dynamic attributes updated at different stages
Perception: emergency cues, visible floor objects, visible group members, neighboring agents
Interpretation: urge, well-being
Decision making: behavior, navigation goal, navigation point
Locomotion: spatial position
Memory: spatial knowledge

Each stage mimics a cognitive process or an act by an occupant during evacuation. Collectively, these stages define the behavioral process of the occupants.
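The five-stage cycle can be pictured as a per-time-step update loop in which each stage consumes the attributes produced by the previous one. The sketch below illustrates the flow under assumed interfaces (an environment with sense and move_toward methods, and a decision tree with an evaluate method); it is not the SAFEgress implementation.

```python
# Minimal sketch of the five-stage update cycle (perception, interpretation,
# decision making, locomotion/execution, and memorization). The environment and
# decision-tree interfaces are assumptions used only to illustrate the flow.
class ProcessModelAgent:
    def __init__(self, attributes, behavior_tree):
        self.attributes = attributes
        self.behavior_tree = behavior_tree
        self.spatial_knowledge = []          # visited navigation points (memory)
        self.position = (0.0, 0.0)

    def step(self, environment):
        # 1. Perception: emergency cues, visible floor objects, group members, neighbors.
        percept = environment.sense(self.position)
        # 2. Interpretation: map current knowledge to internal thresholds (urge, well-being).
        urge, well_being = self.interpret(percept)
        # 3. Decision making: invoke the assigned decision tree to pick behavior and goals.
        behavior = self.behavior_tree.evaluate(self.attributes, percept, urge, well_being)
        nav_point = behavior.next_navigation_point(self.position, self.spatial_knowledge)
        # 4. Locomotion: move toward the navigation target and update the spatial position.
        self.position = environment.move_toward(self.position, nav_point,
                                                self.attributes.travel_speed)
        # 5. Memorization: register the traveled navigation point to avoid backtracking.
        if nav_point is not None and nav_point not in self.spatial_knowledge:
            self.spatial_knowledge.append(nav_point)

    def interpret(self, percept):
        # Placeholder thresholds; a real model would weigh cues, smoke, alarms, etc.
        urge = 1.0 if percept.get("emergency_cues") else 0.0
        return urge, 1.0 - urge
```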
4 Implementing human and social behaviors
During evacuation, occupants may refer to their previous
knowledge of the building, visual perceptions of the floor,
and social cues, such as the presence of group members and
others’ movements, to determine their evacuation routes.
This section describes a number of examples to illustrate
the capability of SAFEgress to simulate some plausible
behaviors exhibited by occupants in emergencies. These
behaviors include following building features, following
familiar exits, group behavior, and herding behavior. In
each example, we discuss the motivation and observation
of the behavior, as well as describe the implementation in
the prototype.
4.1 Following cues from building features
The spatial arrangement of exit signs with different visual
displays is an important factor that can affect movement patterns (O'Neill 1991; Johnson and Feinberg 1997).
In situations where the occupants are unfamiliar with the
environment, people rely heavily on the information from
the signage to guide their navigation. Therefore, exit signs
should be arranged in a proper way to provide markings of
exits and escape routes in buildings and to assist the
occupants in leaving the buildings effectively in case of
emergency.
In SAFEgress, each agent can decide its navigation based on the perceived floor objects representing the building features, such as exit signs and doors, as described in Sect. 3.3.2. At each simulation step, the agents detect visible floor objects and navigate the space according to the direction given by the floor objects. Figure 7 illustrates the process by which an agent navigates the space by perceiving and following the guidance from the visible floor objects and escaping via visible exits. Initially, the agent chooses to navigate toward the only visible floor object, which is the door shown in Fig. 7a. After exiting the room via the visible door, the agent detects new floor objects, the two exit signs (Sign 1 and Sign 2). As the agent detects more than one visible object, the agent weighs each object according to three criteria: (1) the object type (namely exits, doors, and signs), (2) the distance of the object from the agent, and (3) the number of prior visits to the object. Because both objects are ''sign'' objects and have not been visited before by the agent, the agent chooses to navigate toward the nearest sign, Sign 1, as indicated in Fig. 7b. Upon arriving at Sign 1, the agent evaluates all visible objects and chooses to go to Sign 2 (Fig. 7b). As the agent moves near Sign 2, the agent detects a new floor object, Exit 1; the agent then weighs all the visible floor objects, chooses to go to Exit 1, and exits the floor (Fig. 7c).
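The three-criterion weighting can be illustrated with a simple scoring function in which the object type sets a base score that is then discounted by distance and by the number of prior visits. The scores below are illustrative assumptions; only the criteria themselves come from the text.

```python
# Sketch of the three-criterion weighting used when several floor objects are visible:
# object type, distance from the agent, and the number of prior visits. The specific
# scores and names are illustrative assumptions, not calibrated SAFEgress weights.
from math import dist

TYPE_SCORE = {"exit": 3.0, "door": 2.0, "sign": 1.0}   # exits preferred over doors, signs

def choose_floor_object(agent_pos, visible_objects, visit_counts):
    """visible_objects: list of (object_id, object_type, position)."""
    def score(obj):
        obj_id, obj_type, pos = obj
        w = TYPE_SCORE.get(obj_type, 0.5)
        w /= (1.0 + dist(agent_pos, pos))          # nearer objects score higher
        w /= (1.0 + visit_counts.get(obj_id, 0))   # previously visited objects score lower
        return w
    return max(visible_objects, key=score) if visible_objects else None

# Example mirroring Fig. 7b: two unvisited signs are visible; the nearer one (Sign 1) wins.
objects = [("sign1", "sign", (2.0, 0.0)), ("sign2", "sign", (6.0, 0.0))]
print(choose_floor_object((0.0, 0.0), objects, {}))   # ('sign1', 'sign', (2.0, 0.0))
```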
We further apply SAFEgress to analyze the effects of different exit sign arrangements on egress performance. Figure 8 shows the floor layout of a museum that consists of several exhibition halls with four main exits (the entrance, the north exit, the west exit, and the café exit). The floor space is populated with a total of 360 agents who have a medium level of familiarity and no prior knowledge of exits. They exit the floor by following the cues from floor objects. We model different exit sign arrangements with the same building model to trigger different navigation patterns of the agents. The effects of signage arrangements on evacuation outcomes are compared by (1) changing the number of exit signs and (2) rearranging the orientation of the exit signs.

Fig. 7 Navigation by following building features. a Agent moves toward visible door. b Upon arriving at Sign 1, the agent chooses to move toward Sign 2. c Agent navigates toward the detected exit

Fig. 8 Building layout and exit locations (the café exit, north exit, west exit, and entrance)
The first test studies the effect of additional exit signs on evacuation performance. Figure 9a shows the initial layout of exit signs and the trajectories of agents exiting the building. The total evacuation time is 165 s (averaged over 10 simulation runs). As highlighted in the figure, with this initial exit sign arrangement, agents take detours and explore the floor before finding their way to an exit. With additional exit signs posted, as shown in Fig. 9b, the agents travel along more direct routes and the evacuation time is 119 s (a decrease of 28 % compared with the initial layout of fewer exit signs).

Fig. 9 Evacuation patterns assuming different signage arrangements. a Initial signage layout (average evacuation time = 165 s); agents explore the space to search for exits. b Layout with additional exit signs (average evacuation time = 119 s); much smoother trajectories are observed
The second test illustrates how changing the exit sign orientation can help direct crowd flow. As shown in Fig. 9, with the sign arrangement of the first test case, agents tend to exit through the main entrance and cause congestion there. As shown in Fig. 10, we change the facing direction of an exit sign (depicted with a rectangular box) in the main aisle. With the proper sign orientation, more agents perceive the exit sign and its direction and evacuate through the nearby exit. As a consequence, the evacuation time is 89 s, a further improvement of 25 %. This example clearly illustrates the importance of appropriately arranging exit signs to effectively guide the crowd during evacuation and alleviate congestion.

Fig. 10 Congestion patterns assuming different signage arrangements. a Original orientation (average evacuation time = 119 s): the north-facing sign can only be perceived by agents located on the north side of the sign, so most agents exit through the main entrance, causing serious congestion. b Reoriented exit sign directing flow to other exits (average evacuation time = 89 s). Density legend: <2 to >20 ft² per person
Assessing the effectiveness of a signage system is difficult in real settings because this kind of assessment requires experiments with occupants in the buildings. Modeling salient safety features in egress simulations allows designers to improve egress performance by analyzing the different evacuation patterns that result from different signage systems.
4.2 Following familiar exits
Occupants choose evacuation routes based on their previous experience and knowledge (Mawson 2005; Sime 1983; Tong and Canter 1985). Occupants who visit the building regularly may have learned their preferred exits over time or have knowledge of the nearest exits. They may also have evacuation drill experience from which they learned the instructed evacuation routes in case of emergency. To incorporate the effect of known exits into agents' route choices, we make use of the agents' static parameter, known exit(s). We model the ''following familiar exits'' behavior as follows: prior to the simulation, the user assigns the value of the agents' attribute, known exits, indicating that the agents have knowledge of one or more exits. During the simulation, the agents query the spatial model with the known exits and retrieve the shortest paths to them. At the decision-making stage, the agents choose to move to the visible navigation points along the shortest paths to reach their known exits.
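The shortest-path query over the visibility graph can be realized with a standard Dijkstra search, as in the sketch below; the graph format and function names are assumptions used only to illustrate how an agent would retrieve the path to a known exit and pick the next navigation point on it.

```python
# Sketch of the "following familiar exits" query: given a known exit, retrieve the
# shortest path over the visibility graph and head for the next navigation point on it.
# The graph format and function names are assumptions for illustration only.
import heapq
from math import dist

def shortest_path(graph, start, goal):
    """Dijkstra over a visibility graph given as {node: [neighbor, ...]} with 2D nodes."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            new_cost = cost + dist(node, nxt)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Example: the agent moves to the first navigation point along the retrieved path.
graph = {(0, 0): [(5, 0)], (5, 0): [(0, 0), (5, 5)], (5, 5): [(5, 0)]}
path = shortest_path(graph, (0, 0), (5, 5))
print(path)                                           # [(0, 0), (5, 0), (5, 5)]
next_target = path[1] if path and len(path) > 1 else None
```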
Figure 11 shows an example floor plan and the evacuation patterns resulting from assigning different known exits to 200 agents. In Case 1, agents have knowledge of the main entrance and exit through the main entrance. The arrows in Fig. 11a show the emerging crowd flows as agents travel to the main entrance. In Case 2, agents have knowledge of all exits and choose to evacuate through
the nearest exit given their initial starting positions. The arrows in Fig. 11b show the diverging crowd flows as agents travel to their nearest exits. Besides the differences in the crowd flow patterns, the assumption of different known exits also changes the evacuation time significantly. The average evacuation times over 10 simulation runs are 106 and 70 s for Case 1 and Case 2, respectively. The longer evacuation time in Case 1 is due to the longer travel distance and the congestion at the main entrance.

Fig. 11 Evacuation patterns with different exit assignments. a Agents exit through the main entrance (average evacuation time = 106 s). b Agents exit via the nearest exits (average evacuation time = 70 s). Dotted arrows indicate flow direction
4.3 Navigating with social group
During evacuation, members belonging to a group, such as families and close friends, are concerned about the safety of their group members and often seek out and evacuate with the entire group even when evacuation is urgent (Aguirre et al. 2011; Sime 1983). We model this group behavior using two group-level static attributes: group separation distance (measured as the desirable physical distance between members) and group-seeking (measured as the desirable percentage of members that are visible). We assign a low value (an average distance of 4 ft to each visible group member) to the group separation distance attribute (i.e., agents try to maintain close proximity with other group members) and a high value to the group-seeking attribute (i.e., all group members have to be visible to the group) to simulate agent groups with close relationships. Figure 12 shows a comparison of the evacuation patterns of agents with and without group affiliations, obtained by varying the group-seeking attribute.
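The two attributes translate into two simple checks during simulation: whether enough group members are visible for the agent to proceed, and whether the agent stays within the desired separation distance of the visible members. The sketch below illustrates both checks; the thresholds and names are assumptions.

```python
# Sketch of the two group-level attributes discussed above: agents delay evacuation
# until enough group members are visible (group-seeking) and try to keep within the
# desired separation distance. Thresholds and names are illustrative assumptions.
from math import dist

def ready_to_evacuate(member_positions, visible_members, group_seeking):
    """group_seeking: required fraction of members that must be visible (0..1)."""
    if not member_positions:
        return True
    return len(visible_members) / len(member_positions) >= group_seeking

def within_separation(agent_pos, visible_members, separation_distance=1.2):
    """separation_distance in meters (approx. 4 ft, as assumed in the example above)."""
    return all(dist(agent_pos, m) <= separation_distance for m in visible_members)

# Example: with group_seeking = 1.0, the agent keeps searching until all 3 members are visible.
members = [(1.0, 0.0), (2.0, 0.0), (8.0, 0.0)]
visible = [(1.0, 0.0), (2.0, 0.0)]
print(ready_to_evacuate(members, visible, group_seeking=1.0))  # False
print(within_separation((0.0, 0.0), visible))                  # False (2.0 m > 1.2 m)
```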
In the example shown in Fig. 12, we assume all 50 agents evacuate at once and test the effect of group affiliation on evacuation patterns. The first case assumes each agent evacuates as an individual through its familiar exit (which is the nearest exit to the agent). Figure 12a shows the evacuation pattern of agents without any group affiliation; the average evacuation time is 29 s (averaged over 10 simulation runs). In the second case, we test the effect of group behaviors by assigning all agents a group affiliation (group sizes range from three to five
agents). All groups are assigned a high group-seeking value, such that all members in a group have to be visible to each other before the group starts to evacuate. In this case, as shown in Fig. 12b, agents pace back and forth, and even detour, as they seek other group members. In this scenario, the average evacuation time increases to 39 s (averaged over 10 simulation runs). The longer evacuation time in the group-seeking scenario is likely due to the longer and indirect routes taken by the agents as they search for missing group members. By varying the value assigned to the group-seeking attribute, we can alter the group's level of desire to look for other members. Similarly, by adjusting the group separation distance of the social group, we can simulate different types of groups with different levels of intention to follow other group members. Depending on the initial distribution of the group members and their relationships, group behaviors in egress simulations affect the evacuation time and the escape routes.

Fig. 12 Evacuation patterns with and without group affiliation. a Evacuation as individuals via the nearest exits (average evacuation time = 29 s). b Evacuation with group affiliation (average evacuation time = 39 s); agents travel back and forth to seek other members, leading to a higher density of trajectories in the same area, and detour to search for the group, resulting in longer and indirect escape routes. Black squares indicate the initial positions of the 50 agents
4.4 Following crowds
As the first signs of a potential threat are often ambiguous
(Tong and Canter 1985), people may spend a substantial
amount of time to investigate and interact with one another
before deciding how to respond (Sime 1983). The movement of some evacuees toward different exits provides
others with social cues of the availability of alternative
exits. Often, as opposed to moving toward familiar exits,
people may follow social cues and choose the exits preferred by the crowd as they observe others’ actions. We
model the ‘‘following the crowd’’ behavior as follows:
during the simulation, the herding agent (who is seeking to
follow other agents) perceives the space and detects visible
floor objects. At the decision-making stage, the herding
agent assesses, for each visible floor object, the number of
neighbors who are traveling toward the floor object. The
herding agent chooses the visible floor object toward which the highest number of neighboring agents are traveling, because the agent considers the movement of its neighbors as a social cue to explore potential areas for exits. If there are no visible floor objects that other agents move toward, the agent will then adopt other navigation strategies, such as referring to its known exits (as described in Sect. 4.2) or following the visual cues (as described in Sect. 4.1).
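The crowd-following decision amounts to counting, for each visible floor object, the neighbors currently heading toward it and selecting the most popular object, with a fallback to the other strategies when no such movement is observed. The following sketch illustrates this logic; the data structures and the tolerance used to decide whether a neighbor is heading toward an object are assumptions.

```python
# Sketch of the crowd-following decision: for each visible floor object, count the
# neighbors heading toward it and pick the most popular one; fall back to the other
# strategies when no neighbor movement is observed. Names are illustrative assumptions.
from math import dist

def heading_toward(neighbor_target, floor_object_pos, tolerance=1.0):
    """A neighbor 'heads toward' an object if its current target is near that object."""
    return neighbor_target is not None and dist(neighbor_target, floor_object_pos) <= tolerance

def choose_by_herding(visible_objects, neighbor_targets):
    """visible_objects: {object_id: position}; neighbor_targets: list of target positions."""
    counts = {obj_id: sum(1 for target in neighbor_targets
                          if heading_toward(target, obj_pos))
              for obj_id, obj_pos in visible_objects.items()}
    best_id = max(counts, key=counts.get, default=None)
    if best_id is None or counts[best_id] == 0:
        return None   # fall back to known exits (Sect. 4.2) or visual cues (Sect. 4.1)
    return best_id

# Example: two of three neighbors move toward the east door, so the herding agent follows.
objects = {"east_door": (10.0, 0.0), "west_door": (-10.0, 0.0)}
targets = [(10.0, 0.0), (10.0, 0.0), (-10.0, 0.0)]
print(choose_by_herding(objects, targets))   # "east_door"
```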
Figure 13 illustrates the differences in agents' trajectories when 100 agents evacuate with and without crowd-following behavior. As shown in Fig. 13a, when agents follow only visual cues, the usage of the two exits is about even. When half of the agent population (i.e., 50 agents) exhibits crowd-following behavior, as shown in Fig. 13b, one of the exits becomes more congested. In real situations, the escape routes taken by the occupants who initiate the evacuation can have an impact on the congestion patterns, as other occupants who are unsure of or unfamiliar with the situation will tend to follow the crowd. Herding and overcrowding phenomena emerge as the crowd triggers individuals to exhibit crowd-following behaviors. By including the perception of crowd movement, our framework captures the emergence of the crowd-following phenomenon.

Fig. 13 Evacuation patterns with and without herding behaviors. a Agents with individual behaviors exiting via the nearest exits. b Agents with herding behaviors mixed with agents with individual behaviors
5 Discussion
The building geometry unique to each building and the layout of building emergency features (such as exit signs and doors) can trigger different navigation decisions by the occupants during egress. SAFEgress allows users to assess different building geometries and egress systems in a flexible manner. Furthermore, sensitivity analyses on different agent attributes can be conducted in SAFEgress to identify and assess the impacts of important social factors in different physical and environmental settings, as illustrated in the four examples presented in this paper. This kind of analysis can give architects, building designers, and facility managers insights for designing user-centric egress systems and improving emergency procedures and training programs.
Our simulation results confirm the need to incorporate occupants' perception, previous knowledge, and social
behaviors in egress simulation. In our examples, we show
that different arrangements of exit signs, social settings of
the agents, and prior knowledge and familiarity with the
building could trigger different crowd behaviors and crowd
flow patterns. By embedding individuals into groups, our
model has the capabilities to model occupant behaviors
such as the spreading of information within social groups
and crowds (Rydgren 2009; Hoogendoorn et al. 2010) and
the role of authorities (Kuligowski 2011). In broader terms,
we see our approach to modeling social behavior to be in
line with recent efforts in computational social science to
capture emerging social behaviors using computer simulation and large datasets made available through digital
123
AI & Soc
technology and new forms of communication (Lazer et al.
2009). The described platform represents a step forward
toward incorporating social science knowledge of social
interactions into engineering models that capture human
behaviors.
Acknowledgments This research is partially supported by the
Center for Integrated Facility Engineering at Stanford University and
a ‘‘Custom Research’’ grant through Stanford’s Center for Integrated
Systems from NEC Corporation. The first author is also supported by
the Croucher Foundation Scholarship and the Leavell Fellowship.
References
Aguirre B, Wenger D, Vigo G (1998) A test of the emergent norm
theory of collective behavior. Sociol Forum 13:301–320
Aguirre BE, Torres MR, Gill KB, Hotchkiss HL (2011) Normative
collective behavior in the station building fire. Soc Sci Quart
92(1):100–118
Burstedde C, Kirchner A, Klauck K, Schadschneider A, Zittartz J
(2001) Cellular automaton approach to pedestrian dynamics—
applications. Pedestrian and Evacuation Dynamics. Springer,
Berlin, pp 87–97
Challenger W, Clegg WC, Robinson AM (2009) Understanding
crowd behaviors: Guidance and lessons identified. Technical
Report prepared for UK Cabinet Office, Emergency Planning
College, University of Leeds
Choset HM (2005) Principles of robot motion. MIT Press, Cambridge
Chu ML, Law KH (2013) A Computational Framework Incorporating
Human Behaviors for Egress Simulations. ASCE J Comput Civil
Eng 27(6):699–707
Chu ML, Parigi P, Law KH, Latombe J-C (2014). SAFEgress: a
flexible platform to study the effect of human and social
behaviors on egress performance. Proceedings of 2014 Symposium on Simulation for Architecture and Urban Design,
pp 35–42
Drury J, Cocking C, Reicher S (2009) Everyone for themselves? A
comparative study of crowd solidarity among emergency
survivors. Br J Soc Psychol 48:487–506
Durupinar F, Pelechano N, Allbeck J, Gudukbay U, Badler NI (2011)
How the OCEAN personality model affects the perception of
crowds. IEEE Comput Graph Appl 31(3):22–31
Galea ER, Gwynne S, Owen M, Lawrence PJ, Filipidis L (1998) A
comparison of predictions from the buildingEXODUS evacuation model with experimental data. Proceedings of the first
international symposium on human behavior in fire, pp 711–720
Gärling T, Böök A, Lindberg E (1986) Spatial orientation and
wayfinding in the designed environment: a conceptual analysis
and some suggestions for post-occupancy evaluation. J Archit
Plan Res 3(1):55–64
Helbing D, Farkas I, Vicsek T (2000) Simulating dynamical features
of escape panic. Nature 407:487–490
Hoogendoorn M, Treur J, Van der Wal C, Wissen AV (2010) An
agent-based model for the interplay of information and emotion
in social diffusion. Proceedings of web intelligence and intelligent agent technology (WI-IAT), pp 439–444
Johnson NR, Feinberg WE (1997) The impact of exit instructions and
number of exits in fire emergencies: a computer simulation
investigation. J Environ Psychol 17(2):123–133
Kneidl A, Hartmann D, Borrmann A (2013) A hybrid multi-scale
approach for simulation of pedestrian dynamics. Trans Res Part
C Emerg Technol. http://www.sciencedirect.com/science/article/
pii/S0968090X13000594
Kuligowski ED (2011) Terror defeated: occupant sensemaking,
decision-making and protective action in the 2001 World Trade
Center disaster, Ph.D. Thesis, University of Colorado, Boulder
Kuligowski ED, Peacock RD (2005) A review of building evacuation
models. Technical Note 1471, Building and Fire Research
Laboratory, NIST
Latombe J-C (1991) Robot motion planning. Kluwer, Boston
Lazer D et al (2009) Computational social science. Science
323:721–723
Lin Y, Fedchenia I, LaBarre B, Tomastik R (2010). Agent-based
simulation of evacuation: an office building case study.
Pedestrian and Evacuation Dynamics 2008. Springer, Berlin
pp 347–357
Lindell MK, Perry RW (2011) The protective action decision model:
theoretical modifications and additional evidence. Risk Anal
32(4):616–632
Macy M, Flache A (2009) Social dynamics from the bottom up:
Agent-based models of social interaction. In: Hedström P,
Bearman P (eds) The Oxford handbook of analytical sociology.
Oxford University Press, Oxford, pp 245–268
Matsumura N (2013) Shikake as an embodied trigger for behavior
change. In: Proceedings of AAAI Spring Symposium on
Shikakeology: designing triggers for behavior change, pp 62–67
Matsumura N, Fruchter R (2013) Shikake Trigger Categories. In:
Proceedings of AAAI spring symposium on Shikakeology:
designing triggers for behavior change, pp 68–73
Mawson AR (2005) Understanding mass panic and other collective
responses to threat and disaster. Psychiatry 68:95–113
McPhail C (1991) The Myth of the Madding Crowd. Aldine de
Gruyter, New York
Moussaïd M, Helbing D, Theraulaz G (2011) How simple rules
determine pedestrian behavior and crowd disasters. In: Proceedings of the National Academy of Sciences of the United States of
America, Apr. 2011
Musse SR, Thalmann D (2001) Hierarchical model for real time
simulation of virtual human crowds. IEEE Trans Vis Comput
Graph 7:152–164
O’Neill MJ (1991) Effects of signage and floor plan configuration on
wayfinding accuracy. Environ Behav 23(5):553–574
Pan X (2006) Computational modeling of human and social behavior
for emergency egress analysis, Ph.D. Thesis, Stanford University
Reneke PA (2013) Evacuation decision model. NIST IR 7914,
National Institute of Standards and Technology
Rosenberg RS, Baughman SL, Bailenson JN (2013) Virtual superheroes: using superpowers in virtual reality to encourage prosocial
behavior. PLoS ONE 8(1):e55003
Rydgren J (2009) Beliefs. In: Hedström P, Bearman P (eds) The
Oxford handbook of analytical sociology. Oxford University
Press, Oxford, pp 72–93
Salganik MJ, Dodds PS, Watts DJ (2006) Experimental study of
inequality and unpredictability in an artificial cultural market.
Science 311:854–856
Sime JD (1983) Affiliative behavior during escape to building exits.
J Environ Psychol 3(1):21–41
Tong D, Canter D (1985) The decision to evacuate: a study of the
motivations which contribute to evacuation in the event of a fire.
Fire Saf J 9:257–265
Tsai J, Fridman N, Bowring E, Brown M, Epstein S, Kaminka G,
Marsella S, Ogden A, Rika I, Sheel A, Taylor ME, Wang X,
Zilka A, Tambe M (2011) ESCAPES: Evacuation simulation
with children, authorities, parents, emotions, and social comparison. In: Proceedings of the 10th international conference on
autonomous agents and multiagent systems, ACM Press,
pp 457–464
Turner R, Killian L (1987) Collective behavior. Prentice-Hall, Englewood Cliffs
Turner A, Penn A (2002) Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour
in the built environment. Environ Plan 29(4):473–490
Veeraswamy A, Lawrence P, Galea E (2009) Implementation of
cognitive mapping, spatial representation and way finding
behaviours of people within evacuation modelling tools. 2009
Human Behavior in Fire Symposium. http://gala.gre.ac.uk/1297/
Watts DJ, Strogatz SH (1998) Collective dynamics of ‘small-world’
networks. Nature 393:440–442
Zheng X, Zhong T, Liu M (2009) Modeling crowd evacuation of a
building based on seven methodological approaches. Build
Environ 44(3):437–445