Emergent Design:
A Crosscutting Research Program and Curriculum
Integrating Architecture and Artificial Intelligence
Peter Testa (1), Una-May O'Reilly (2), Devyn Weiser (3), Ian Ross (3)
School of Architecture and Planning, Massachusetts Institute of Technology (1)
Artificial Intelligence Lab, Massachusetts Institute of Technology (2)
Emergent Design Group, Massachusetts Institute of Technology (3)
77 Massachusetts Avenue, Cambridge, 02139, USA
Email: [email protected]
ABSTRACT
We describe a design process, Emergent Design, that draws upon techniques and approaches from the
disciplines of Computer Science and Artificial Intelligence in addition to Architecture. The process
focuses on morphology, emphasizing the emergent and adaptive properties of architectural form and
complex organizations. Emergent Design explicitly uses software tools that allow the exploration of
locally-defined, bottom-up, emergent spatial systems. We describe our Emergent Design software,
inspired by concepts from Artificial Life, that is open-source and written in Java. This software is an
integral part of a curriculum to teach Emergent Design that has original content and pedagogical aspects.
Introduction
Emergent Design describes an approach to architectural design that is characterized by several
fundamental principles:
- In order to comprehend the complexity of a contemporary design scenario, the numerous situational
factors of the scenario must be identified and their inter-relationships well understood, even though
they may not be well defined.
- An effective solution to a complex design scenario is achieved through a non-linear process of
bottom-up experimentation involving independent, related, or progressive investigations into architectural
form and complex organizations. This interactive process incrementally builds a complex solution that
takes account of the numerous complicated, interdependent relationships of the scenario.
- A complex solution derived in such a bottom-up, investigative style is advantageous because it retains
explicability and has the flexibility to be revised in any respect appropriate to a change in, or new
understanding of, the design scenario.
- Computer software is an excellent means of performing bottom-up architectural experimentation
because, despite a simple specification, a decentralized, emergent software simulation can yield complex
behavior, can exploit graphics capability to model organization and pattern, and can be written flexibly so
that alternatives can be quickly examined.
In the spring of 1999 at MIT’s School of Architecture we developed a new design and computation studio
with the goal of introducing graduate students to Emergent Design.
In this paper our intent is to:
- Describe the motivation and concepts of Emergent Design.
- Supply an account of how Emergent Design can be actualized for design projects.
- Outline our curriculum, the software, and initial applications.
The paper starts by further describing the motivation and concepts of Emergent Design.
Essential to our notion of Emergent Design is the integration of effective and powerful software tools
from the domain of Artificial Intelligence into the process of exploring spatial relationships through
consideration of the architectural program and agency of primitive design components. Thus, the
description of Emergent Design is followed by an explanation of the software-related steps of the process.
These steps are intentionally looped; that is, the process of using software for creative discovery and
exploration is not linear.
To demonstrate and instantiate Emergent Design the remainder of the paper calls upon an account of
teaching the Emergent Design curriculum.
We set forth the goals and context of the studio then relate how members of design teams used the
Emergent Design software. We present our findings that the concepts of Emergent Design and the design
of software tools provide the students with guidance towards informed and innovative designs. In
addition, the findings bear out the decisions behind the software tool's design: writing it in Java, providing
flexibility through a general library of classes and methods, and using the World Wide Web (WWW) as a
studio platform. The merits of the software indicate that our goal of making it both a tool in
design and a means of learning the important concepts of Emergent Design has been accomplished at an
initial level.
The paper concludes with a future work section that explains how we intend to extend and further
improve the system. These changes involve adding an evolutionary algorithm component to the search
process of the Emergent Design software and involving students in a project where spatial development
and form growth take place in conjunction with environmental factors. The system would additionally
benefit from an application programming interface that would allow simple design and implementation of an
application's graphical user interface. We also anticipate extending beyond our current focus on spatial
organization to other software systems that focus on form and structure within a three dimensional
shaping environment.
1.0 Emergent Design as a Decentralized Process
Emergent Design brings a synergy of new and existing ideas about design into focus at a time when
Computer Science technology and Artificial Intelligence research can support the objectives of
Architecture.
We make no claim that Emergent Design is entirely new. For example, architects have always sought to
identify the elements of a problem scenario and understand them syncretically. We think Emergent
Design is unique in exploiting and emphasizing the role of software designed for self-organizing spatial
simulations to do this and to explore solutions. Further, Emergent Design emphasizes decentralized
thinking and the study of a system of entities that are imbued with agency and that are capable of forming
macroscopic levels of organization.
In fact, a way of describing Emergent Design is as a decentralized style of thinking about problems. It is a
way of evolving solutions from the bottom-up. It relies on the aid of software simulations for
experimentation. Emergent Design is worthwhile because it produces interesting and innovative design
outcomes that would not be arrived at by other methods. In complex problem scenarios, Emergent Design
is instrumental in discriminating the issues, relations, and consequences of sub-solutions as the scenario is
conceptually sorted out and the solution incrementally developed.
Emergent Design emphasizes appraising and understanding individual behavior (where an architectural
component is endowed with agency) as being influenced by other individuals in a system. The system
view also incorporates recognition of levels (Resnick, 1994; Wilensky & Resnick, 1998) and the insight
derived from understanding how complex, collective, macroscopic phenomena arise from the simple,
local interactions of individuals. Architects continually strive to understand the complexity of a system.
The recognition of levels in the system and the phenomena that give rise to the formation of levels
provide system level insights that are influential towards arriving at an adaptive design. Examples of
such complex adaptive systems can be found in both natural and synthetic environments. These forms
may be urban configurations, or spatial and organizational patterns but all are evolved through generative
probing and grouping in the space of possibilities. This algorithmic model of design recognizes that
complex systems do not follow straight lines of historical narration, but are composed of multiple series
of parallel processes, simultaneous emergences, discontinuities and bifurcations. (Kwinter, 1998)
Emergent Design differs from traditional design approaches which emphasize things in space as
fundamental and time as something that happens to them. In Emergent Design artifacts or designs (e.g.
rooms, hallways, buildings) are viewed as secondary to the processes through which they evolve and
change in time. In this approach the fundamental things in the environment are the processes. Emergent
Design seeks to formulate principles of architecture in this space of processes allowing space and time
(architecture) as we know it to emerge only at a secondary level.
1.1 Architecture and Artificial Life
Numerous concepts in the field of Artificial Life (ALife) (Langton, 1989; Langton et al., 1991) are
advantageously applicable to an architectural design process that emphasizes system-level and constituent
understanding. In ALife every component of a system, including elements of the environment, is
conceptualized as being capable of agency. Thus, a component of the system may or may not act. Acting
is not limited to undergoing a change of internal or external state; it also encompasses the way
in which a component may exert influence on the state of the environment or other components. Agency
between components implies that their interactions create a level of dynamics and organization.
Organizations that dynamically define themselves on one level can themselves exhibit agency and, thus, a
new level of organization can form as a result of the lower level dynamics. Levels are not necessarily
hierarchically organized. They may, in fact, be recognizable by a particular perspective from which the
entire system is viewed.
The ALife community pursues the study of such systems. They are compelling to study because they have
multiple levels of dynamics and organization. Architectural systems are compelling for the same reason.
Architectural systems have numerous levels and each level is interdependent with others.
For example, an office building can have a level in which the components are moveable and semi-fixed
relative to movement patterns and more stable space-defining elements or infrastructures. Different local
organizations (work groups) emerge in response to changing organizational structures and projects.
Contemporary non-hierarchical work environments when studied closely may be effectively
conceptualized as living systems with a myriad of interdependencies between their components and with
numerous levels of interdependent organizations.
As shown by the office building example, in general, architectural systems are very complex. One type of
complexity encountered is that small, simple, “local” decisions, when coupled with other similar
decisions, have large, complex, global effects on the outcome of a design. For example, a choice about
locating a work group solely on one floor has ramifications in the choices of circulation allocation, and
work equipment placement. These choices, in turn, influence later choices such as material selection or
building infrastructure assignment. ALife studies systems in a manner that highlights and elucidates the
consequences of locally defined behavior. ALife models are defined in terms of component agency and
decentralized component interactions. The running of an ALife system consists of playing out the local
interactions and presenting views of the complex global behavior that emerges. Thus, the Emergent
Design version of an ALife system is one in which the environment and components are architectural and
spatial in nature. The “running of the system” consists of designating components, defining their agency
in local terms and then watching the macroscopic outcome of their interaction.
Both ALife and Architecture find it necessary to consider the influence of non-determinism in the
outcome of complicated system behavior. ALife simulations can be used by architects to model non-determinism. The systems can be defined so that certain behavior has only a probability of occurring.
Then different runs of the system will show the result of such non-determinism. In addition, both ALife
and Architecture are very aware of how an outcome is sensitive to details of initial conditions. ALife
simulations can be defined with parameterized initial conditions and run with different parameter values
for the initial conditions. They allow architects to study the impact of initial conditions.
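The seeded-run idea can be sketched in a few lines of Java (the toolbox's language). The names here (runSimulation, the probability parameter) are illustrative assumptions, not toolbox code: the same seed reproduces a run exactly, while different seeds expose the non-determinism.

```java
import java.util.Random;

public class SeededRuns {
    // Count how many of n candidate placements "succeed" when each
    // succeeds only with the given probability -- a stand-in for
    // probabilistic zone behavior in a simulation.
    static int runSimulation(long seed, int n, double probability) {
        Random rng = new Random(seed);
        int placed = 0;
        for (int i = 0; i < n; i++) {
            if (rng.nextDouble() < probability) placed++;
        }
        return placed;
    }

    public static void main(String[] args) {
        // Same seed: identical outcome, so any interesting run can be replayed.
        System.out.println(runSimulation(42L, 100, 0.5) == runSimulation(42L, 100, 0.5));
        // Different seeds: outcomes generally differ across runs.
        System.out.println(runSimulation(1L, 100, 0.5));
        System.out.println(runSimulation(2L, 100, 0.5));
    }
}
```

Sweeping the probability or count parameters while holding the seed fixed is the analogue of studying sensitivity to initial conditions.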
ALife has a powerful capacity to model spatio-temporal phenomena and to represent these conditions via
computer graphics. Visualization conveys actual movement, changes in a modeled environment or, the
influence of spatial proximity. For example, a designer may want to investigate the implication of a
spatial constraint. How do plan morphology and volumetric form interact with distribution of program
requirements, height restrictions, circulation area, and day lighting? Or how will a limited set of forms
interact with a given site boundary? ALife simulations facilitate the interactive investigation of many
possible spatial outcomes. While ALife models abstract the physical into visualization, the computer is
extremely fast and flexible in exploring and modeling a vast design space. These tools can be employed
in parallel with more traditional visualization tools or used directly to produce physical models for
evaluation via CAD/CAM technology.
In summary, key concepts of ALife have relevance to Architecture. In particular, ALife simulations
model:
- agency of components and environment
- emergent levels
- local, simple behaviors giving rise to global, complex organization
- sensitivity to initial conditions
- non-determinism affecting outcome
- abstraction using visualization
We have incorporated an ALife tool designed for architectural investigation into Emergent Design.
1.2 Emergent Design Software Process
Note that Emergent Design is a process that, by high-level description, could theoretically be pursued
without the convenience of software. However, convenience aside, Artificial Life software is what truly
makes Emergent Design powerful. Emergent Design is innovative and advantageous because it
incorporates state-of-the-art software technology plus Artificial Life research concepts into a design
process. Without the aid of software and the speed of simulations that it facilitates, one would not be able
to achieve a fraction of the depth, breadth and richness of emergent designs. The next section of the
paper explains the sub-process within Emergent Design in which software is used.
1.3 Using Emergent Design Software
Step 1). Define the goals of an investigation concerning how spatial elements may be configured within
a bounded area (i.e. a site). Specify initial conditions (e.g. initial quantity of elements, size of site, scale,
conditions of site). Identify the elements and the relationships among them. State how elements influence
each other and under what conditions they interact. Both the site and its elements can exhibit agency.
This identification forms a functional description of the simulation.
Step 2). Using the existing set of Java class methods in the toolbox, specialize the tool with software
that implements the element and site behavior. Run the simulation from initial conditions and investigate
the outcome. Usually a simulation has non-deterministic behavior so outcomes from multiple runs are
assessed. Based on the outcome, return to either step 1 or 2 to improve or refine.
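The two steps above might be sketched as follows. Element, Claimer, and the site array are illustrative assumptions standing in for the toolbox's real classes: step 1 states initial conditions and a local rule, step 2 plays the interactions out and measures a global outcome.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TwoStepSketch {
    interface Element { void act(Random rng, int[] site); }

    // A deliberately trivial element: it tries to claim one random free cell.
    static class Claimer implements Element {
        public void act(Random rng, int[] site) {
            int cell = rng.nextInt(site.length);
            if (site[cell] == 0) site[cell] = 1;   // purely local rule
        }
    }

    static int run(long seed, int cells, int elements, int steps) {
        Random rng = new Random(seed);
        int[] site = new int[cells];               // step 1: initial conditions
        List<Element> pop = new ArrayList<>();
        for (int i = 0; i < elements; i++) pop.add(new Claimer());
        for (int t = 0; t < steps; t++)            // step 2: play out interactions
            for (Element e : pop) e.act(rng, site);
        int occupied = 0;
        for (int c : site) occupied += c;
        return occupied;                           // a global, emergent measure
    }

    public static void main(String[] args) {
        System.out.println(run(7L, 50, 5, 20));    // one run; repeat with new seeds
    }
}
```

Because the behavior is non-deterministic, the step-2 assessment typically gathers run(...) over many seeds before returning to step 1 to refine the initial conditions.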
The process is practical and not complicated. In the first part, goals and initial conditions are defined. It
encourages thinking of elements and site as “active” or imbued with agency. In fact, this style of thinking
is very common. The architect says "this is what happens when a comes into contact with b, or when c
comes close enough to d, or when the corner of any element of class e touches the boundary, or when the
bounded area is full", etc. Unfortunately, we have noted that our students stop considering agency when
they move from thinking to explicitly designing because they cannot actualize behavior or manage to
'play it out' in complex, non-linear circumstances. We have found that using the software reverses this
trend.
The second part is programming the software, then running and evaluating the simulations. Multiple runs
can be gathered and analyzed for general properties or one particular run may be scrutinized because it
shows something unanticipated or interesting. It is possible to find that the initial conditions are too
constraining or, conversely, too unconstrained. Often, in view of the results, the architect revises the
initial conditions to try out different relationships and actions. If the outcomes are satisfactory, they are
now used to take the next step in the larger process of solution definition.
Overall, the process demands that the architect incrementally refine the software. It may demand that the
architect make repeated attempts to capture the dynamics and nature of the scenario. This can be arduous,
but it is ultimately rewarding. Architects do not shy away from detailed time-consuming work such as
the building of physical models. Software specification and design may be equally time-consuming but it
is intellectually challenging and has equally satisfying outcomes. The approach to thinking of design
elements and their site as dynamic interacting agents is also provocative and enlivening. Because global
outcomes cannot be predicted from the initial conditions, there is an element of suspense and eagerness
to observe the outcome of the simulation runs. The hypothetical exploration of this process is engaging
and, despite simple questions and initial conditions, it yields complicated and interesting results. With
Emergent Design, an architect is empowered with a tool to creatively model potential outcomes that are
based on complicated, interdependent conditions.
2.0 Emergent Design Curriculum
The introduction of computation has begun to fundamentally transform the teaching and practice of
architecture. New processes and tools are necessary to interact with the vast amounts of information
found in complex spatial and material systems characteristic of contemporary building programs and
advanced manufacturing processes. Building adaptive computational models of these dynamical systems
requires interactions among the disciplines of Architecture, Computer Science, and Artificial Intelligence.
The Emergent Design curriculum is structured to respond to these challenges with a model of design and
computation as synergetic processes embedded in and interacting with a dynamic environment. It is
intended to also foster a community among students, researchers and faculty in Architecture, Computer
Science, and Artificial Intelligence.
We have introduced the Emergent Design curriculum via a graduate level design studio course. The
course is project-based and emphasizes hands-on experience, collaborative learning and teamwork.
Students work in a wide range of media including computation. Traditionally, computation in design
studios has been limited to the use of closed-source software applications for drafting and rendering.
Instead, in the Emergent Design curriculum computation is introduced during the conceptualization of
projects as a tool for design exploration. Students explore the use of computational tools to develop
complex parameterized models of spatial and material systems. They model and simulate such factors as
patterns of use, development over time, and environmental factors. The students themselves extend the
initially supplied software toolbox. To assist collaboration, the project software is maintained and shared
via the World Wide Web. It is our intent that the toolbox will act as a stimulus for the design of new
tools that may be tested on architectural projects.
2.1 Goals of the Curriculum
The goals of the Emergent Design Curriculum are to educate students in:
- How consideration of emergent spatial organization through investigation of relationships is vital to
high-quality design.
- How the design process can be bottom-up, more creative, and explorative when computational
simulation tools are integrated into its earlier stages.
- That computer programming, like the mastery of computer-based tools, is not difficult. It is simply
another skill that can be learned, and is an essential technical competence required in the repertoire of a
contemporary architect. It empowers an architect creatively and exploits computation (an inexpensive,
readily accessible resource) as a means of exploring a larger number of possibilities than could be
explored by hand.
2.2 Project-based Design
Project-based architectural design remains the focus of the experimental version of the Emergent Design
studio course we have offered at the MIT Department of Architecture. Each semester our studio focuses
on a project drawn from real world design problems. As instructors, we select a thematic project that
enables students to explore a small number of particular applications of bottom-up, self-organizing or
agent-based models that have computational simulation potential. The architectural project forms the
basis for applying and adapting the software toolbox. By focusing on a complex project, students can
instantiate the principles of Emergent Design. They expand their abstract understanding because any
substantial project embodies many of the issues stressed in Emergent Design. These projects generally
involve architectural problems related to the convergence of several activity programs such as new
working environments open to continuous reorganization wherein dynamic systems interact, or to design
problems related to serial systems and pattern structures such as housing and community design. The
potential also exists with this approach to develop projects focusing on the constructive level of static and
dynamic building assemblies and components that are non-serial and lend themselves to applications of
variable manufacturing and rapid prototyping. In each case one objective is to explore how the
combinatory dynamics among simple building blocks can lead to emergent complexity.
2.3 Teaching Concepts of Artificial Intelligence and Artificial Life
Our teaching approach to Emergent Design is augmented with introductions to Artificial Life and
evolutionary computation. In the course of covering the introductory material we encourage students to
propose innovative applications of both techniques that would enable either explorative design or the
modeling of spatial or material forms. This emphasis on exploiting the concepts for architectural
purposes is helpful to the students. It helps them understand what aspects of their ideas will be difficult to
realize (e.g. what is a good fitness function for a genetic algorithm which evolves designs?) and it
engages them by challenging them to connect a set of intuitively interesting and compelling principles to
uses in their specific domain of interest.
Artificial Life is introduced as a growing field in which at least two strong research thrusts exist: ALife
systems that investigate complex systems based in the real world, e.g. ecology, chemistry or economics,
etc., and ALife systems that investigate the principles of complex system phenomena by simulating
bottom-up, local-level interactions and analyzing the resultant global emergent outcomes. We give
examples in the domain of cellular automata (Wolfram, 1984), morphogenesis (reaction-diffusion models
and Lindenmayer systems; Lindenmayer & Prusinkiewicz, 1989; Prusinkiewicz & Hahn, 1989;
Prusinkiewicz, 1991), and agent-based architectures (Bonabeau & Theraulaz, 1991; Bonabeau, 1997).
The students are challenged to consider architectural outcomes as complex, emergent, dynamical systems
that have self-organizing properties, experience growth, or exhibit agent-based behavior.
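As one classroom-scale instance of such a growth model, a Lindenmayer-system rewriter fits in a few lines of Java. The productions used here (A → AB, B → A) are the standard textbook "algae" example, not the studio's actual material; the point is that a purely local rewriting rule yields increasingly complex global structure.

```java
public class LSystem {
    // Apply one generation of rewriting to the whole string.
    static String rewrite(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == 'A') out.append("AB");       // production A -> AB
            else if (c == 'B') out.append("A");   // production B -> A
            else out.append(c);                   // other symbols unchanged
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String s = "A";
        for (int i = 0; i < 4; i++) {
            s = rewrite(s);
            System.out.println(s);
        }
        // A -> AB -> ABA -> ABAAB -> ABAABABA
    }
}
```

Interpreting such strings graphically (e.g. as turtle-drawing commands) is what connects this rewriting machinery to morphogenesis and form.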
2.4 Teaching Java to Architecture Students
These lectures and our introduction to resources on the World Wide Web set the stage for the next step:
teaching our students Java. They are essential in building enthusiasm for the often intimidating challenge of
learning a programming language. The simple nature of complex systems in terms of their specification
(i.e. local behavior with interaction considerations) is emphasized because it implies that simple systems
can be written that exhibit powerful, complex phenomena. This indicates that the tools the students are
expected to master (in terms of understanding how they are written rather than just being "turned on and
used") will be manageable.
The curriculum includes laboratories to teach interactive programming in Java applications. This hands-on
experience with programming is largely missing from traditional approaches to architectural education.
It ensures that students are familiarized with the tool issues and also with the power the tools
and programming offer as design process aids.
Among an array of available programming languages (e.g. Mini-Pascal, C) Java was deliberately chosen
for the following reasons:
- it is object-oriented, so designs are verbalized and realized within the same object-oriented paradigm
- it runs on a variety of platforms, owing to the universality of the Java bytecode interpreter
- many Java source examples are freely distributed on the World Wide Web
- it is considered easy to learn and, certainly, a simple subset can be quickly mastered
3.0 Emergent Design Software Toolbox: High Level View
The software toolbox is a collection of Java classes that can be used to generate complex spatial
organizations that exhibit emergent behavior. It is an open source, continually evolving toolbox for
building Architecture-based ALife applications. From the application user’s perspective, the application
runs in a window in which there are two elements:
1. The display, which graphically represents the current state of the simulation or "run".
2. The Graphical User Interface (GUI), which allows the user to interact with the simulation by altering
display options, changing simulation behaviors, and otherwise modifying the simulation.
The toolbox, as software (i.e. from a programming perspective), provides a general framework that can be
engaged as is, can be supplemented with new classes, or can act as a group of superclasses that will be
specialized. The toolbox software (i.e. classes and methods associated with classes) can be conceptually
divided into three parts: foundation, specialization, and applet. The purpose of abstracting foundation,
specialization and applet software is to enable the implementation of different applications that, despite
their differences, still use a base of common software.
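The three-part division might be sketched as follows. All identifiers here are hypothetical, not the toolbox's actual classes: a foundation superclass supplies mechanics shared by every application, a specialization subclass supplies behavior particular to one simulation, and an "applet" entry point creates and parameterizes the instances.

```java
public class ToolboxLayers {
    // Foundation: behaviour common to every application of the toolbox.
    static abstract class FoundationZone {
        int x, y;
        FoundationZone(int x, int y) { this.x = x; this.y = y; }
        abstract boolean wantsToMove();            // filled in by a specialization
        void step() { if (wantsToMove()) x++; }    // shared mechanics
    }

    // Specialization: behaviour particular to one simulation.
    static class DriftingZone extends FoundationZone {
        int limit;
        DriftingZone(int x, int y, int limit) { super(x, y); this.limit = limit; }
        boolean wantsToMove() { return x < limit; }  // a purely local rule
    }

    // "Applet": creates instances and initializes them for the inquiry at hand.
    public static void main(String[] args) {
        DriftingZone z = new DriftingZone(0, 0, 3);
        for (int t = 0; t < 10; t++) z.step();
        System.out.println(z.x);   // drift halts at the specialized limit: 3
    }
}
```

A different application would reuse FoundationZone unchanged and supply only a new subclass and a differently parameterized entry point.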
The foundation software defines classes that are common to all investigations or applications of the
toolbox. These can be used optionally. Its classes implement the environment of the simulation, which is
termed a site. Each simulation engages the foundation software in more or less the same way: it creates
instances of the classes that are needed (which would be roughly the same for all simulations) and
initializes them with values (and resultant properties) that are specific to the inquiry.
The specialized or behavioral software defines aspects that are particular to specific simulations. It
facilitates the definition of tools such as models of growth, or different kinds of dynamic relationships
between objects in the simulation.
Each software toolbox application is run from its applet. The applet is where any Java language utilities
are imported, where global variables are declared, where instances of foundation and behavioral classes
are created and used, and where the emergent states of the simulation are processed and interpreted. The
applet is unique to the application at hand because it creates instances from the foundation classes that are
initialized in a particular manner to define the application. It also specifically uses only the code from the
specialization software that suits the application definition. The specialization code is tailored to the
application by means of parameterization and calling methods in different ways.
In the course of developing the software, both the foundation and specialized code underwent revision
countless times. The foundation classes now appear to be complete. They are adequate for the
simulations that we have written and show no fundamental limitations. Nonetheless, we expect they will
change over time. They will be updated to be more efficient and to improve modularity and abstraction.
It seems unlikely that more classes will be added because at this point the existing classes require only
refinement rather than additions. On the other hand, we expect the specialized software will constantly
expand. This is because of its nature. Each new desired functionality requires the implementation of
specialized behavior in the form of a class and methods. It should be noted that while the specialized
tools are built for a particular simulation, they can still be used for others as well. For example, the notion
of attractivity (which is one of the specialized tools) is general enough that many different
simulations could characterize at least part of their behavior in terms of it. So, the time spent
implementing new functionality for one simulation should pay off in terms of reuse.
To accommodate the expansion of the toolbox, we have created a web page that contains the complete
specifications for the toolbox Application Programming Interface, as well as all of the source code for it.
Through exposure via the web, we hope that simulations will be built on a wider scale, which will result
in both the expansion and testing of the toolbox.
3.1 Emergent Design Software: Architecture
The Foundation Software: The user interacts with the application’s applet which runs “behind” the
window controlling the display and the GUI. Behind the relatively simple interface of the applet is the
software toolbox comprising foundation and specialization code. The foundation software is
conveniently described in terms of the objects we conceived as integral to any software toolbox
application and the behavior of these objects. In Java, we have implemented the objects in terms of
classes (from which instances can be created) and methods of those classes. For clarity, as we now
describe the foundation objects of an application we will italicize them. (Fig. 1)
In every software toolbox application there is a site. A site represents the two dimensional region of
architectural inquiry. Any polygon can define a site’s extent. A site has no inherent scale - the scale is
determined by the user. A site is defined generally so that it can be conceived of in a variety of ways: for
example, as a large-scale map of states or counties to investigate transcontinental transportation routes,
or, as a dining room to study the layout of furniture.
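A site of this kind might be sketched as below. The SiteSketch class is an illustrative assumption; the polygonal extent itself can be held in a standard java.awt.Polygon, whose contains test answers whether a coordinate lies within the site. Note that the coordinates carry no inherent units.

```java
import java.awt.Polygon;

public class SiteSketch {
    final Polygon extent;   // any polygon can define a site's extent
    SiteSketch(Polygon extent) { this.extent = extent; }
    boolean inside(int x, int y) { return extent.contains(x, y); }

    public static void main(String[] args) {
        // An L-shaped site: the units could be miles or centimetres --
        // scale is assigned by the user, not by the geometry.
        Polygon l = new Polygon(new int[]{0, 10, 10, 5, 5, 0},
                                new int[]{0, 0, 5, 5, 10, 10}, 6);
        SiteSketch site = new SiteSketch(l);
        System.out.println(site.inside(2, 2));   // true: within the L
        System.out.println(site.inside(8, 8));   // false: in the notch
    }
}
```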
The site can be refined in terms of its sub-parts and in terms of the architectural elements that are imposed
on it. We term a sub-part, in general, a piece and an architectural element, in general, a zone.
A zone is an object that is imposed on the site, rather than a component of it. The bulk of an application
consists of the applet creating zones and enabling their interaction among each other (which results in
zone placement, removal or movement) based on different rules, preferences, and relationships. Some
zones are placed on the site upon initialization. Others are placed during the run, according to the
behavior dictated by the site, its pieces or other zones. A zone may move during a run or even be removed
from the site. A zone can represent a table, room, building, farm, city, or whatever else the architect
wishes to represent on the site. A zone can be modified: one can change its color, translate, scale, and
skew it, or change its type (e.g. "farm"). In terms of its display properties, a zone can be transparent
(either partially or fully), and can lie on top of or below other zones.
The site consists of pieces. As displayed, the site is subdivided into an array of uniformly sized rectangles
(much like a checkerboard). The programmer specifies the dimensions of these rectangles in terms of
pixels (the smallest modifiable units on the computer monitor). Thus, the display may consist of a single
large piece, or as many pieces as there are pixels in that area. By choosing the dimensions of a piece, the
level of granularity for the simulation is chosen. Resolution has implementation consequences. The finer
the resolution, the slower the speed of a run (because of display costs). A piece has its own private state. That
is, it can store information about various local properties of the site, as well as information about the zone
superimposed on it.
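The foundation objects described above might be sketched as follows. This is a minimal illustration, not the actual toolbox API: the class and field names (`Site`, `Piece`, `Zone`, `occupant`) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the foundation objects; names are hypothetical.
class Piece {
    // Each piece stores private, local state as named properties.
    final Map<String, Double> properties = new HashMap<>();
    Zone occupant; // zone currently superimposed on this piece, if any
}

class Zone {
    int x, y, width, height; // position and extent, measured in pieces
    String type;             // e.g. "farm", "house", "school"
    Zone(int x, int y, int w, int h, String type) {
        this.x = x; this.y = y; this.width = w; this.height = h; this.type = type;
    }
}

class Site {
    final Piece[][] pieces; // uniform rectangular subdivision of the site
    Site(int cols, int rows) {
        pieces = new Piece[cols][rows];
        for (int i = 0; i < cols; i++)
            for (int j = 0; j < rows; j++)
                pieces[i][j] = new Piece();
    }
    // Superimpose a zone: mark every piece under its footprint.
    void place(Zone z) {
        for (int i = z.x; i < z.x + z.width; i++)
            for (int j = z.y; j < z.y + z.height; j++)
                pieces[i][j].occupant = z;
    }
}
```

The key design point is that a zone is imposed on the site rather than being a component of it: the site only records, piece by piece, which zone currently sits on top.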
Spatial organization occurs on the site via agency of the site and zones. A zone can move itself on the site
according to criteria based on local conditions such as zones nearby, the properties of a piece or
conditions of the site. For example, all zones may be initially randomly placed on the site and then
queried in random order to choose to move if they wish. The decision of a zone to move will depend on
its proximity to other zones and the properties of the piece it sits upon. From a different, but still
generally realizable, perspective, the site can direct the incremental, successive arrangement or placement of zones
on its pieces. This direction is according to a set of behavioral directives that are likely to take into
account emerging, cumulative global conditions (i.e. available free space, current density) as well as local
preferences that reflect architectural objectives in terms of relationships.
We have implemented a particular means of property specifications for a site which we call a
siteFunction. A siteFunction is a general means of representing various qualities or quantities that change
across the site and which may be dependent on time. It could, for example, be used to model
environmental factors such as light, sound or movement patterns. A siteFunction is a function of four
variables [x, y, z (3D space which a piece defines), and t (time)]. One can create all sorts of interesting
relationships between functions and other functions, as well as between zones and siteFunctions or vice
versa. By creating feedback loops (where the output of, say, a function is used as input to another
function, or to a zone, whose output could be in turn given as input to the object from which it received
input), one can very easily set the stage for emergent behavior between human creations (zones) and the
environment (siteFunctions).
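A siteFunction, being a function of four variables, maps naturally onto a functional interface. The sketch below is illustrative (the interface name and the example field are assumptions, not the toolbox's actual code): it models a light level that falls off with distance from a source at the origin and oscillates over a daily cycle.

```java
// Hypothetical functional interface for a siteFunction: a scalar
// field over 3D space and time.
interface SiteFunction {
    double valueAt(double x, double y, double z, double t);
}

class SiteFunctions {
    // Example: light intensity from a source at the origin, modulated
    // by a cosine day/night cycle of the given period.
    static SiteFunction daylight(final double period) {
        return (x, y, z, t) -> {
            double distance = Math.sqrt(x * x + y * y + z * z);
            double daily = 0.5 * (1 + Math.cos(2 * Math.PI * t / period));
            return daily / (1 + distance);
        };
    }
}
```

Feedback loops arise when one siteFunction's output feeds another siteFunction, or a zone's behavior, whose output in turn feeds back into the first.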
The foundation software is intentionally flexible, and can be used to model almost anything of a spatial
nature. By defining a group of zones and siteFunctions, and defining ways for them to interact with each
other locally, one can set up the circumstances that lead to extremely complex, unexpected global
behavior. The toolbox can thus be employed in investigations that strive for aesthetics, that predict
development's effect on the environment (and vice versa), or that have other purposes. The ultimate goal
is that the user be limited only by his or her imagination. One could use ideas from ecology, chemistry,
physics, and other disciplines to formulate the ways in which interaction can take place in the simulation.
The Applet Software: The applet uses two foundation classes: display and site. The display class
implements the graphical display of the site and its updating as the run proceeds and emergent dynamics
occur. Information about particular objects or locations in the simulation can be obtained by clicking the
mouse on the desired area in the display. The perimeter is drawn, and then color is used to indicate the
properties associated with either pieces or zones of the site. Two property drawing modes are
implemented. In "overlay" mode the selected properties (not all need to be displayed at once) are
transparent colors that appear overlapping. In straight "draw" mode, the color of each new property
obliterates the color of the property below it.
The GUI Software: The graphical user interface (GUI) is typically implemented using the classes in the
javax.swing package. Classes in this package allow the programmer to create various assortments of
screen layouts, buttons, text fields, labels, lists, scroll bars, etc. Most simulations have a unique user
interface, since each simulation has different functionality, and probably has different ways in which the
user is allowed to interact with the simulation. The interfaces are all similar in that they use similar
components (e.g. buttons), but they are different in that the components represent and allow modification
of different quantities.
The Specialization Software: The specialized tools are programs written specifically for a certain
simulation, or in some cases, a certain type of simulation. Whereas the foundation classes are used for all
of the simulations, the specialized tools may not necessarily be universally used. Some simulations may
use no specialized tools. The main function of specialized tools is to modularize behaviors and
procedures that could possibly be used in other simulations. With each new simulation, more new tools
will likely be written and the existing tools will be instantiated in novel ways, suited to the specific
simulation. Thus far, two families of specialized tools have been defined and used.
Rubber stamps can be thought of as composite zones. In many simulations, the elemental building block
is not a simple homogeneous area. Rubber stamps allow the user to define larger building blocks in terms
of the most elemental zones. This saves the user the time of having to build up a new zone each time,
and also helps solidify the uniqueness and construction of that particular composite zone.
Attractivity is one way to conceptualize the movement of zones in relation to other zones and the site.
Most simulations involve some sort of movement or repositioning, and in many cases, this can be
expressed quite elegantly in terms of attraction or repulsion (negative attraction). Different types of
simulations often warrant different types of attractivity.
3.2 Case Studies
In the experimental Emergent Design Studio in spring 1999, students proposed new models of housing
and community for the rapidly expanding and increasingly diverse population of California’s Central
Valley. Projects investigated emergence as a strategy for generating complex evolutionary and adaptive
spatial patterns based on new relationships between dwelling, agriculture, and urbanism. Student teams
developed three morphologies: Field, Enclave, and Rhizome. Designs operated at both the community
scale (120 dwellings) of pattern formation and detailed design of a prototype dwelling or basic building
block. A primary objective was to work toward typological diversity and spatial flexibility planned as
combinatorial systems using variable elements.
With respect to the Emergent Design software, each team specified the local spatial relationships that they
desired and worked to author a simulation that satisfied local constraints and behaved according to local
relationships, and by doing so, exhibited coherent emergent global behavior. Emphasis was placed on
procedural knowledge and the dynamics of spatial relationships. The design of these simulations forced
the students to thoroughly examine their specifications and their consequences. No distinction was
made between "means" and "ends": the nature of the dynamics of the processes was inseparable from
the spatial layout of the architectural programs. The students were not trying to create a single static
"best" answer to the problems posed to them. Instead, they aimed to explore whole systems of
morphology by examining many different simulation runs.
Team A: FIELD PROJECT
The field morphology presented the issue of integrating urban housing with farmland. Overall housing
density needed to remain fairly low. The existing houses on the site were widely distributed across it, and
the team wanted to add more residences while preserving this spare agricultural aesthetic. They wanted to
minimally disrupt the already established feel of the site while allowing it to support a larger population.
Thus, the team's goal was to determine how such new housing could be aggregated. They wanted to
create a distributed housing system, one where there was not a clear dense-empty distinction.
Furthermore, they would not be satisfied with simplistic regular patterns of housing. In employing an
agent-based, decentralized, emergent strategy, they conceptualized a dynamic simulation in which each
newly placed house would be endowed with a set of constraints and preferences and the house would
behave by changing its location within the site in order to fulfill its preferences. They wanted to appraise
the non-uniform housing aggregations that can arise from easily expressed, naturally suggested
interaction constraints.
In all simulations, schools (2 x 2) are colored red, farms (varied sizes) are colored shades of green, pre-existing houses (1 x 1) are colored blue, newly placed houses (1 x 1) are colored various other colors, and
roads (varied sizes) are colored shades of gray. The scale is 1 piece : 33 feet. To give some idea of
proportion, the houses are represented graphically by a one piece by one piece square.
Field (1):
In the first simulation designed by the field team 150 new houses are placed down at random on the site.
The area taken up by these new houses is approximately 1% of the total area of the whole site. Once all of
the new houses have been placed, they react with one another according to an attractivity rule that the
field team wrote for the houses in order to express their preferences regarding proximity to each other.
The rule (which applies to a house in both the x and y directions) is:
- if you are between three and five units (pieces) away from another house, move one unit towards it.
- if you are between one and two units away from another house, move one unit away from it.
- if you are under the influence of multiple houses, the "center of mass" of the collection is determined
and the collection is given agency to follow the same attractivity rule.
The simulation was intended to run indefinitely until stopped by the user. Several patterns of behavior
emerged from the application of this attractivity rule:
1) Based on initial conditions, the houses would form small groups, and would travel "in formation" much
like a flock of birds. That is, although each house in the group would travel relative to the site, the houses
would not move relative to each other. This somewhat strange behavior can be attributed to the fact that
the team specified that individual houses would move sequentially. That is, in a group of three houses, house
1 moved first, which in turn affected the movement of house 2 and then house 3 (which would very likely
follow house 1).
2) Sometimes, a group of houses would find a steady state and simply oscillate between two (or more, if
there were more than two houses in the group) orientations.
3) The various groups of houses would sometimes cross paths when travelling across the site. When this
happened, the groups would sometimes merge, and sometimes take houses from each other. These
collisions would also often alter the trajectories of the groups involved.
4) Groups tended to get larger as the simulation went on. The most probable cause of this was that once a
group formed, it (usually) did not spontaneously break apart, since the rule dictates that houses would stay
within two or three units of each other. Groups sometimes (but not often) broke apart when they
encountered the edge of the site, or interacted with another group. Given this behavior, the simulation
could not help but form larger and larger groups. This was not necessarily the goal of the investigation.
5) Groups sometimes became "anchored" to existing (not newly placed) housing. This was likely because
newly placed houses were attracted to all types of houses and, since existing houses are stationary, the newly
placed houses within the influence of existing houses would not move out of their range.
The aim of the attractivity rule was to produce housing patterns that preserved a distributed pattern. At the
same time the patterns should form micro-communities of small groups of houses. While the houses in a
micro-community were expected to be socially grouped, ample space between them was desired. After
examining a large number of runs of this simulation, it was apparent that the rule achieved the desired
results.
Field (2):
The team's second simulation, although started after Field(1), was a parallel investigation into how new
housing could be distributed in the aforementioned pre-existing farmland site. It explored expressing the
proximity preferences with the mechanism of feedback (using a behavior's output as its next input). As a
simple experiment, the site was tiled with 144 squares, each of dimension 10 x 10 pieces, and each
designated as either a strong attractor, weak attractor, weak repellor, or strong repellor. These
designations were assigned randomly with equiprobability. Houses occupying approximately 2% of the
site were then randomly placed on top of these patches. At each time step, each house would move
according to the strength of the attractor and repellor patches it was close to. Attractor patches made
houses go to the center of the patch at varying speeds (depending on the intensity of the patch), while
repellor patches made houses go radially outward. In addition, each patch would newly update what type
of attractor or repellor it was. This update was the crux of the feedback in the system. The new patch
designation was determined by the patch's most recent state and by the ratio of the overall house density
across the site to the density of houses on the specific patch.
Two instances of feedback were investigated. In one case, if a patch had a density that fell some threshold
below average, it became less repulsive or more attractive by one designation (that is, strong repellors
became weak repellors, weak repellors became weak attractors, etc. Note that nothing would happen to
strong attractors in this case). If a patch had a density that was some threshold above average, then the
opposite would happen: patches would attract less and repel more.
This case demonstrates negative feedback - this is feedback that damps (or lessens) the behavior of the
simulation. Such simulations can usually run forever, with only minimal periodicity (when existent, the
periods are long). A relatively uniform density is maintained across all of the patches, due to the aforesaid
feedback mechanism. In this case, extremes are usually successfully avoided. Few patches have very
small or large densities, and those that do usually take measures to change this. Also, few patches are
strong repellors or attractors; they are usually of the weak kind, since their densities usually do not vary
by that much.
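The negative-feedback update just described can be sketched as a one-step shift along an ordered scale of designations. The enum and method names are illustrative, and the density comparison against a fixed threshold is an assumption about how "some threshold below average" was implemented.

```java
// Designations ordered from most repulsive to most attractive.
enum Patch { STRONG_REPELLOR, WEAK_REPELLOR, WEAK_ATTRACTOR, STRONG_ATTRACTOR }

class Feedback {
    // Negative feedback: an underfull patch becomes one step more
    // attractive; an overfull patch becomes one step more repulsive.
    static Patch negativeUpdate(Patch current, double patchDensity,
                                double siteDensity, double threshold) {
        Patch[] scale = Patch.values();
        int i = current.ordinal();
        if (patchDensity < siteDensity - threshold)
            i = Math.min(i + 1, scale.length - 1); // underfull: attract more
        else if (patchDensity > siteDensity + threshold)
            i = Math.max(i - 1, 0);                // overfull: repel more
        return scale[i];
    }
}
```

Positive feedback would simply reverse the two shifts, which is exactly why it drives patches to the extremes of the scale.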
In the other case of feedback, the patch's properties were intensified by the feedback (i.e. positively). If a
particular patch had a high density of houses on it, it would become more strongly attractive, which
would tend to draw more houses to/near the center of it which would in turn make it more attractive.
Patches with low densities would become increasingly more repulsive. This case favored extremes.
Patches fell into one of two states indefinitely. Either they became strong attractors that captured all of the
houses in their vicinity, or strong repellors that were empty.
In the field team's case, the negative feedback simulation proved far more useful than the positive
feedback simulation. Negative feedback facilitated the notion of distributed housing, while positive
feedback created large empty expanses of farmland interspersed with densely packed regions of housing,
contrary to the distributed aesthetic. Although the negative feedback simulation created a distributed
housing condition, that condition lacked structure; houses did not form coherent groups.
The notion of feedback could have been explored much more extensively. This simulation was more of an
exploration into general concepts than a specific investigation. One could vary many of the parameters in
the simulation (e.g. the number/size of the patches, how the patches relate to existing structures on the
site, how many types of patches there should be, whether there should be a neutral patch that is neither an
attractor nor a repellor, introducing indeterminism into how patches change states or how houses move,
setting the initial conditions less equitably, etc.). The notion of feedback could be explored from a
viewpoint more central to housing, instead of patches of land by having each house serve as an attractor
or repellor, depending on how densely populated its neighborhood is. Nonetheless, with the design and
goals the team pursued, the results were satisfactory.
Field (3):
Upon seeing multiple instantiations of the "desired result" in the Field(1) and Field(2) investigations, the
team decided to refine their goals. The results of the first and second investigations created emergent
aggregations that, while non-uniform, were too nondescript. Because the aggregations were so amorphous and
distributed, they actually disrupted the initial site conditions more than a more structured aggregation would have.
This outcome was very much unanticipated; it was realized only after exploring many runs of the
simulations. It was decided that the notion of distributed housing should be combined with some greater
notion of form and directionality.
Thus, in Field(3), the team chose to restrict the placement of houses to a set of designated farms. After
experimenting with a scheme where the designated farms were chosen at random, the team decided the
randomness introduced undesirable isolation. That is, they wanted the designated farms to either touch or
only be separated by a road. Furthermore, they wanted the non-developable area carved out by the
non-designated farms to be somewhat path-like (as opposed to node-like, or central), in order to architecturally
recognize a crop rotation scheme. This was accomplished by choosing a farm at random to be non-developable
(in contrast to designated as eligible for housing), then having that farm randomly choose one
of its neighbors to be non-developable as well. Control would then be passed to the newly chosen
neighbor farm which would repeat the process. In the case where a non-developable farm was completely
surrounded by other non-developable farms or borders, it passed control back to the farm that chose it.
Control was recursively passed backwards until one of these non-developable farms had a neighbor that
had not been chosen yet. The first farm would be colored the lightest shade of green, with each successive
farm being a darker color of green. The color progression was intended to be suggestive of a possible crop
rotation scheme. Once the area threshold has been reached for non-developable area (which is somewhere
over 50% for the field team), this part of the simulation ends.
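The selection process described above is a randomized depth-first walk with backtracking. The sketch below illustrates it over an abstract grid of equal-sized farms; the grid representation, class name, and threshold handling are assumptions made for the example, not the field team's actual code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Random;

// Randomized depth-first walk: each non-developable farm randomly
// chooses a not-yet-chosen neighbor; when boxed in, control passes
// back along the path. Stops once the area threshold is met.
class CropRotationWalk {
    static boolean[][] select(int cols, int rows, double threshold, long seed) {
        boolean[][] chosen = new boolean[cols][rows];
        Random rng = new Random(seed);
        Deque<int[]> path = new ArrayDeque<>();
        int[] cur = {rng.nextInt(cols), rng.nextInt(rows)};
        chosen[cur[0]][cur[1]] = true;
        int count = 1, target = (int) Math.ceil(threshold * cols * rows);
        path.push(cur);
        while (count < target && !path.isEmpty()) {
            cur = path.peek();
            List<int[]> free = new ArrayList<>();
            int[][] nbrs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
            for (int[] d : nbrs) {
                int nx = cur[0] + d[0], ny = cur[1] + d[1];
                if (nx >= 0 && nx < cols && ny >= 0 && ny < rows && !chosen[nx][ny])
                    free.add(new int[]{nx, ny});
            }
            if (free.isEmpty()) { path.pop(); continue; } // boxed in: backtrack
            int[] next = free.get(rng.nextInt(free.size()));
            chosen[next[0]][next[1]] = true;
            count++;
            path.push(next);
        }
        return chosen;
    }
}
```

Because each new farm is chosen adjacent to one already on the path, the selected region is always connected, which is what produces the path-like (rather than node-like) carving the team wanted.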
In the remaining developable area (colored black, for clarity), houses are placed randomly, ten at a time.
(Fig. 2) After they are placed down, an attractivity rule is run, the houses' movement is determined, and
they move accordingly. In this particular application of the attractivity rule, houses are attracted to
schools (which are stationary) and other houses (both stationary existing ones and movable newly placed
ones). When newly placed houses have moved, they become stationary. This rule was instated so that the
simulation did not achieve a predictable result of all newly placed houses being as close as possible to
each other and the schools. Instead, the field team wanted attraction to help achieve something more
subtle. They wanted houses to tend to gather together in loose communities, and tend to be located near
schools, but not always. Thus, each newly placed house was attracted to other houses within a five piece
circular radius (unlike the square radius of their first simulation), and also attracted to the school that was
closest to it (other schools would have no influence over it). The attraction is inversely proportional to
distance, although any type of attraction could be used. (Fig. 3)
Attraction independent of distance and attraction proportional to distance were also tried, but attraction
inversely proportional to distance yielded the best results. With this sort of attraction, zones form very
strong local bonds, and are usually not affected by distant zones. This sort of behavior (strong local
interaction, weak global interaction) is often a key component in bottom-up simulations that exhibit
emergent behavior.
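The three attraction profiles the team compared might be sketched as follows; the class and gain constant k are illustrative.

```java
// Attraction magnitude as a function of distance d (in pieces);
// k is a gain constant. Names are illustrative.
class Attraction {
    static double constant(double k, double d) { return k; }     // distance-independent
    static double linear(double k, double d)   { return k * d; } // grows with distance
    static double inverse(double k, double d)  { return k / d; } // strong locally, weak globally
}
```

Only the inverse profile decays with distance, which is why it alone yields the strong-local, weak-global interaction structure favorable to emergence.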
This simulation yielded the most satisfactory results to date. The housing arrangement was relatively
unobtrusive and not formless. Still, the housing collections are not predictable regular patterns, but subtle
arrangements, subject to the behavior instilled in each of the houses. (Fig. 4) It was also recognized that
the experiment might be extended by using the negative feedback mechanisms explored in the second
simulation. Instead of making houses stationary after they have moved once, one might elicit more
complex behavior from the simulation by allowing newly placed houses to move indefinitely, with a new
rule to handle the inevitable dense clustering of houses around each other and schools. Unfortunately,
time did not permit this extension to the investigation. The team felt the result they acquired adequately
set them up to proceed with other aspects of the project design. (Fig. 5, 6)
Team B: ENCLAVE PROJECT
In contrast to the distributed system described above, the goal of the enclave team’s investigation was to
explore methods for working with a clearly bounded site and studying specific housing patterns based
upon a basic building block. The team decided to fix a large number of variables pertaining to the site.
This, they determined, would allow them to study specific housing patterns with more precision and less
distraction. After much experimentation, they decided to restrict their domain of investigation to the study
of grid-like housing patterns that would be contained completely within the site boundaries. This rather
strict global restriction helped them establish a firm context, which they could choose to reinforce or
create a foil to.
The goal of the enclave team was to generate grids of houses with meaningful reference to the site's
borders, each other, and the community space between them. Although the form and arrangement of
houses could vary quite dramatically throughout the site, some key adjacency relations were to be
emphasized throughout the site. The enclave team's concept was to have the site's houses self-organize so
they could study the quality of the emergent patterns. Self-organization was accomplished, first, by
specifying constraints and tendencies that the site must satisfy for house placement. Then the site was
given probabilistic behavior (i.e. a non-deterministic algorithm) to find a satisfying outcome by adding or
removing houses from specific locations on the grid.
In the team's simulations a bifurcated house is used as the atomic housing unit. The prototypical house is
composed of a long larger section (the high-density residential part), attached to a square smaller section
(the more open recreation/agriculture part). All units are oriented parallel to one another in the site. In
terms of the color coding, the roads are initially blue, the houses are different shades of gray, and
houses placed or removed due to constraints are other colors. The large section of the bifurcated house is
five by three pieces, and the small section is three by three pieces. The road is three pieces wide. The
scale is 1 piece: 8 feet.
Enclave (1):
In the last of an evolving series of simulations designed by the enclave team, houses are sequentially
placed horizontally, from top to bottom along the site's grid. (Fig. 7) They are placed from right to left
(east to west), a placement order that acknowledges the established town east of the enclave site. All
houses on the top row of the site must have their smaller part oriented downward. This establishes another
form of directionality for the site, and ensures that these houses' open areas are oriented towards the rest
of the community, forming an envelope of sorts along the top border. The remaining placement
orientations are determined randomly.
Instead of dictating a specific number of houses to be placed in the site, the team simply chose a density
preference. Each particular lot on the site has an approximately 70% chance of having a house built on it.
The houses on the site obey several adjacency constraints. To engender the enclave team's idea of
community, they wanted to ensure no house was isolated from all other houses and no group of houses
became too dense. These constraints were satisfied in the following ways: 1) If, after a house had been
placed and its horizontally neighboring lots considered for placement, that house did not have another
house horizontally adjacent to it, a house would be inserted to one side or the other to ensure that all
houses had at least one horizontal neighbor. 2) If four houses were placed down in a row, with no gaps,
and a fifth house was placed down next to the four-house group, that fifth house would be removed, in
order to allow for circulation around the various houses. 3) In some cases, the satisfaction of one
constraint would violate another (this was unanticipated, and only noticed after the first simulation was
built and run). Specifically, sometimes when a house was added next to another house with no horizontal
adjacencies, this would create a five- or six-house-wide horizontal adjacency group. In this case, the
enclave team decided to remove the middle house (in the case of five) or one of the middle houses (in the
case of six) to satisfy constraint 2 without violating constraint 1 again.
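Constraint 2, the run-length cap, can be sketched as a single pass over one row of lots. This is an illustrative simplification (the class name and the left-to-right scan direction are assumptions); the team's actual rule was applied incrementally during placement.

```java
// Sketch of the enclave team's run-length constraint: no horizontal
// run of houses longer than four. A fifth house extending a
// four-house run is removed, leaving a circulation gap.
// Input is one row of lots; true = house.
class EnclaveRow {
    static boolean[] capRuns(boolean[] lots) {
        boolean[] out = lots.clone();
        int run = 0;
        for (int i = 0; i < out.length; i++) {
            if (out[i]) {
                run++;
                if (run > 4) { out[i] = false; run = 0; } // remove the fifth house
            } else run = 0;
        }
        return out;
    }
}
```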
The team's design also called for a high level of interaction between the user and the simulation. They
wanted the user to be able to designate where houses would go if they wished (still subject to the
aforesaid constraints). They designed a system in which the user could at any time choose what should
happen to the current empty lot (whether it should remain empty, have an upward oriented residence
placed in it, or have a downward oriented residence placed in it). This system gives the user the ability to
manually design portions of the site, while letting the simulation design other parts. This technique allows
the user to narrow the search space of possible arrangements further by dictating the contents of any
number of lots, and only allowing the simulation control over lots that the user passed by. (Fig. 8)
The final results represented the culmination of a variety of experiments with placement rules and site
specifications. Most outcomes reflected the team's general goals. The team selected only a few of the
self-organized outcomes to use in subsequent steps of their design process. The group profited from the speed
of the computational experiment, which generated many outcomes from which to choose. (Fig. 9, 10)
Team C: RHIZOME PROJECT
The rhizome team sought to design a high density, multi-programmatic building within an existing urban
area. This objective raised many issues. Key among them were: 1) How should the volume of the building
be apportioned among programs? 2) How should a program be distributed within the building? 3) How
should different programs be placed in relation to each other within the building? 4) How could the
multi-programmatic building seem to develop and expand into its functionality?
The rhizome team found that an agent-based concept (where each program exhibited behavior to satisfy
its preferences) seemed to best express how the desired high density, multi-programmatic building could
evolve. They required an algorithm that would incrementally fill unoccupied space in a building
efficiently and according to the principles of program they deemed important. In their simulations, single
dwellings are cyan, family dwellings are blue, offices are purple, and gathering and public spaces are red.
Rhizome (1):
Given the complexity of the relationships that the rhizome team was trying to capture, it was difficult for
them to propose a design for their simulation. After much thought, it was decided that they would codify
desired spatial relations in the form of a preference table (where each program had an affinity rating for
every other program, including itself). Instead of starting with pre-defined volumes and rearranging them
in space to help satisfy their spatial preferences, they began with an empty volume and filled it program by
program, allowing successive program volumes to choose their neighbors according to their preferences.
The rhizome team also specified dimensional constraints. First, they specified a bounding volume in
which all of the programs must reside. They then specified volume ranges for each of the programs; that
is, they dictated how many cubic feet could be dedicated to single dwellings, etc. They also determined
volume ranges for instances of those programs (e.g. the height of a single dwelling should be between 8'
and 10', the width should be between...). The idea behind providing ranges instead of strict numbers was
to enforce general pragmatic constraints, but to carefully not restrict the creativity of the process by
enforcing hard numbers (which would be contrived in any case). All programs were to be rectangular 3D
volumes. (Fig. 11)
The simulation starts at the bottom floor in a randomly chosen place that is adjacent to the border of the
bounding volume. The site was an empty lot surrounded by other high-density buildings. They wanted
their building to be a response of sorts to its surroundings, which is why they chose to build it "from the
outside in." A program is picked according to some preference (in this simulation it is random, but it
could just as easily be set, or chosen according to some distribution). A volume that satisfies the
applicable constraints is chosen, and the program is built. This new zone is now responsible for choosing
the next program to be placed. It chooses according to its affinity preferences. This simulation uses a
rank-based system (i.e. always choose the first choice first), but other systems have been explored as well
(e.g. a percentage based system in which the chance of the first choice being chosen is dependent upon
how strong a choice it is, relative to the other choices). This new program is assigned a permissible
volume and placed adjacent to the zone that chose it. The process continues until all of the global
minimum volume constraints have been reached.
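The rank-based selection step can be sketched as a lookup in the preference table that skips any program whose maximum-volume budget is exhausted. The method name, table shape, and program names below are illustrative assumptions, not the rhizome team's actual code.

```java
import java.util.Map;

// Rank-based program selection: each program lists the others (and
// itself) in descending affinity order; the first candidate whose
// maximum-volume budget is not yet exhausted is chosen.
class PreferenceTable {
    static String nextProgram(String chooser,
                              Map<String, String[]> affinity,
                              Map<String, Double> placedVolume,
                              Map<String, Double> maxVolume) {
        for (String candidate : affinity.get(chooser)) {
            if (placedVolume.getOrDefault(candidate, 0.0) < maxVolume.get(candidate))
                return candidate; // first choice still under budget
        }
        return null; // every preferred program has met its maximum
    }
}
```

A percentage-based variant would replace the linear scan with a weighted random draw over the same affinity ranking.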
When a volume has been chosen for a new program, an adjacency to the zone that chose it is picked at
random. The new zone can attach one of its corners to any of the choosing zone's corners as well as at
intervals 1/3 of the length of a side of the choosing zone. Some zones allow for stacking; that is, the new
zone may be placed directly above the choosing zone. The 1/3 side length granularity was chosen for two
reasons: 1) Although the simulation did not place circulation spaces, gaps between different zones
suggested these spaces (which would be placed by the user). To enhance communication among residents,
the rhizome team sought to articulate more unconventional, large circulation spaces. By allowing these
1/3 side length offsets, they increased the likelihood of wider, more social circulation spaces. 2) By
imposing this gradation, the user can more clearly see the relationships between adjacent zones. If
gradations were allowed to vary at the pixel level, structures would be far harder to evaluate visually.
The 1/3 side length gradation ensured that all of the gradations would be clearly qualitatively different
from each other.
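The count of attachment points implied by this gradation can be verified with a short sketch (the coordinate representation is our own, not the toolbox's): four points per side of a rectangular zone, with the four corners shared between adjacent sides, gives 4 × 4 − 4 = 12.

```java
import java.util.*;

public class AttachmentPoints {
    // Enumerate the attachment points of a w-by-d rectangular zone: its four
    // corners plus points at 1/3-length intervals along each side. Duplicated
    // corners collapse in the set, leaving 4 sides x 4 points - 4 corners = 12.
    static Set<List<Double>> points(double w, double d) {
        Set<List<Double>> pts = new HashSet<>();
        for (int i = 0; i <= 3; i++) {
            double f = i / 3.0;
            pts.add(List.of(f * w, 0.0));  // bottom edge
            pts.add(List.of(f * w, d));    // top edge
            pts.add(List.of(0.0, f * d));  // left edge
            pts.add(List.of(w, f * d));    // right edge
        }
        return pts;
    }

    public static void main(String[] args) {
        System.out.println(points(9.0, 6.0).size()); // 12, whatever the size
    }
}
```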
In many cases, the randomly chosen volume would not fit in the space chosen for it because it would
overlap with some other already placed volume. In this case, the volume tries every possible adjacency
with the choosing volume until it finds one at which it fits. The 1/3 side length gradation ensures there are
only 12 attachment points per zone, no matter what the volume's size is. This makes the check fast and
relatively easy. If the volume cannot fit at any of the adjacency points, it shrinks its dimensions by a
prespecified decrement value and tries again. If the volume has shrunk to the minimum size
allowed by its constraints and it still cannot fit, then the choosing volume gives up choosing control to a
neighboring volume.
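The fit-or-shrink behavior might be sketched as follows, using a 2D plan and a hypothetical overlap test for brevity (the actual simulation works with 3D volumes and all 12 attachment points; this sketch tries only the four offsets along one edge).

```java
import java.util.*;

public class FitOrShrink {
    // Axis-aligned rectangle (2D plan for brevity; the simulation uses 3D volumes).
    record Rect(double x, double y, double w, double h) {
        boolean overlaps(Rect o) {
            return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
        }
    }

    // Try the new zone at each attachment offset along the chooser's right edge;
    // if no offset works, shrink by `decrement` and retry down to `minSize`.
    static Rect place(Rect chooser, List<Rect> placed,
                      double w, double h, double minSize, double decrement) {
        while (w >= minSize && h >= minSize) {
            for (int i = 0; i <= 3; i++) {
                Rect cand = new Rect(chooser.x() + chooser.w(),
                                     chooser.y() + i / 3.0 * chooser.h(), w, h);
                if (placed.stream().noneMatch(cand::overlaps)) return cand;
            }
            w -= decrement;  // nothing fit: shrink and try again
            h -= decrement;
        }
        return null; // give up; the chooser passes control to a neighboring zone
    }

    public static void main(String[] args) {
        Rect chooser = new Rect(0, 0, 3, 3);
        List<Rect> placed = List.of(chooser, new Rect(3, 0, 2, 2));
        System.out.println(place(chooser, placed, 2, 2, 1, 0.5));
    }
}
```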
In some cases, a program's first choice for adjacency has already met its maximum volume constraint.
In this case, the program works through its affinity list until it finds a program that has not met its
maximum volume constraint.
At present, the simulation has no hard structural constraints. That is, the finished structure may have
zones on the third floor that do not have zones beneath them. This problem is avoided in several ways. 1)
If the void underneath an unsupported zone is small enough, and not isolated from other circulation
areas, it could be interpreted as circulation area. Alternatively, these
leftover voids could be interpreted as building maintenance or storage areas. 2) If one makes the
minimum volume constraints rather large, then the simulation will have no choice but to completely fill
the bounding volume in order to satisfy these constraints. 3) Horizontal adjacency is almost always
heavily favored over vertical adjacency for architectural reasons (a same-floor relationship is stronger
than a different-floor relationship). This tends to cause the whole of the first floor to be filled before
the second floor is constructed. (Fig. 12, 13)
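A soft version of such a support check could be sketched on an occupancy grid; the encoding below is a hypothetical simplification, since the toolbox stores zones rather than grid cells.

```java
public class SupportCheck {
    // occupied[floor][x]: a 2D section through the building for brevity.
    // A cell on floor f counts as unsupported when the cell beneath it is empty.
    static int unsupportedCells(boolean[][] occupied) {
        int count = 0;
        for (int f = 1; f < occupied.length; f++)
            for (int x = 0; x < occupied[f].length; x++)
                if (occupied[f][x] && !occupied[f - 1][x]) count++;
        return count;
    }

    public static void main(String[] args) {
        boolean[][] building = {
                {true, true, false},  // ground floor
                {true, false, true},  // upper floor: rightmost cell floats
        };
        System.out.println(unsupportedCells(building)); // 1
    }
}
```

A simulation could either reject placements that raise this count, or treat the count as a penalty to be minimized.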
In the future, however, the simulation should implement harder structural constraints, since structural
integrity is imperative to any proposed building. These could take the form of disallowing construction on
a floor unless the floor below it has no empty space, or of forcing any unsupported structure to place a
zone large enough to support it beneath it. The simulation could also be extended by increasing the
number of different programs.
3.3 Appraisal of Emergent Design Software
Having observed three teams of students engage the software, we found the toolbox to be
highly flexible: students used it in different ways. First, on a simulation level the three teams
engaged the tool for investigations at different scales. The field team used the software to model housing
aggregation at the site level, the enclave team studied the possible patterns given an atomic housing unit,
and the rhizome team investigated the programmatic layout of a single building.
Second, each team found a unique way of integrating the tool with the rest of their design process. The field
team, which started using the software first, viewed design/conceptual space as an open system that was
constantly being modified and added to. They came up with many ideas for different simulations to study
different aspects of housing aggregation. Some of the simulations evolved more or less independently of
each other, while others were deeper inquiries into subjects that previous simulations had only touched
upon. The simulations were modified very little by the field team. Instead of trying to tweak parameters
to achieve a desired result, they preferred to try completely new approaches. Many of their simulation
ideas were not implemented for lack of time. The field team thus used computation in a parallel way rather
than a serial one: there was no refinement, so to speak, but rather complete reformulation of
the design.
The enclave team was the next team to employ the software. They chose to use the tool on one specific
part of their design. At the start, they had a general idea of the behaviors they wished to model, but
not a full specification. After implementing what they had, it became clear that they had not properly
accounted for all of the sorts of interactions that could take place. Little by little, the enclave team fleshed
out a full specification of all of the behaviors. Once they finished implementing their specification, they
took the results of their simulation (not a particular spatial arrangement of houses, but rather a set of
spatial arrangements that satisfied the constraints of their simulation) and incorporated them into the rest
of their design. Enclave's use of the tool was far more serial than field's. They had a very narrow domain
of investigation, and subjected their tool to many stages of refinement and specialization. The rhizome
team was the last team to engage the software. They had thought extensively about the preferred
relationships between the various programmatic elements in their design, and encoded them in a
"preference table". After collaboration with computer scientists, they arrived at an algorithm that would
try to maximize the satisfaction of these preferences. The rhizome team's interaction with the toolbox was
the cleanest of the three teams. They had the most concrete idea of what they wanted and how to get it
before implementation started. This resulted in few changes to the software once it was
finished. The complexity and richness of the simulation's results reflected the rhizome team's forethought
in designing the simulation.
3.4 Future Work on the Emergent Design Software
While the toolbox is now sufficiently complete to be used in another studio or to support a course for
architects on programming in Java, there remain fruitful ways to extend it. The toolbox as yet has no
means of discriminating between different simulation runs according to criteria imposed by a user. This
suggests that an evaluation component would be welcome. A natural choice would be to implement a genetic
algorithm (Holland, 1975; Goldberg, 1989) that would perform search and optimization in the space of
simulation outcomes. We have, in fact, enhanced the rhizome application with a genetic algorithm since
the spring studio course ended. The fitness function of the genetic algorithm measures an outcome’s
success at fulfilling the preference matrix.
In general, evaluation components are contentious, particularly at the stage of design in which the
toolbox is situated, i.e. well before quantitative criteria are clear. Our strategy is to equip the genetic
algorithm in the toolbox with a user-controlled fitness function that can be altered on the fly. It is also
desirable that the user be able to intervene directly between generations of the genetic algorithm’s search and
choose certain population members to be copied without selection into the next generation.
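Assuming a preference matrix and an adjacency-pair encoding of an outcome (both hypothetical simplifications of the toolbox's preference table), the fitness function and the elitist copy step might look like the following sketch.

```java
import java.util.*;

public class PreferenceFitness {
    // preference[i][j]: how strongly program i prefers adjacency to program j.
    // An outcome is encoded, hypothetically, as the list of adjacent pairs it
    // realizes; fitness sums the preference strength of every realized pair.
    static double fitness(double[][] preference, int[][] adjacentPairs) {
        double score = 0;
        for (int[] pair : adjacentPairs)
            score += preference[pair[0]][pair[1]] + preference[pair[1]][pair[0]];
        return score;
    }

    // Elitism: copy user-chosen members unchanged into the next generation,
    // bypassing selection, then append the bred offspring.
    static List<int[][]> withElites(List<int[][]> offspring, List<int[][]> elites) {
        List<int[][]> next = new ArrayList<>(elites);
        next.addAll(offspring);
        return next;
    }

    public static void main(String[] args) {
        double[][] pref = {{0, 2}, {1, 0}};  // program 0 strongly prefers 1
        int[][] outcome = {{0, 1}};          // this outcome realizes one pair
        System.out.println(fitness(pref, outcome)); // 2 + 1 = 3.0
    }
}
```

A user-controlled fitness function would simply re-read the preference matrix between generations, so edits take effect on the fly.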
Another aspect of the software that would benefit from enhancement is its facility for building
applications easily. At present, we rely on explaining the toolbox parts (foundation, specialized, applet) to
the students and covering the classes and methods they will need to build their own application. The
toolbox would become much more accessible if application building were to be provided through a
graphical user interface that generated the key parts of the application code. Then, users would simply
have to code the behavior specific to the new application. This enhancement is not simple and will
require an experienced developer and significant development resources.
We have also implemented a separate software tool that investigates emergent surfaces. The
surfaces grow according to a grammar in 3-dimensional space. The bounding volume for surface growth
acts as one constraint on growth, while attractors and repellors within it also influence the growth. This
technique is at its core a 3-dimensional L-system (Lindenmayer & Prusinkiewicz, 1989). We have
enhanced the L-system by modifying the way in which direction is referenced and by adding filters that
surface the areas between branches as they are grown. At present the tool, named MoSS (Testa & O’Reilly,
1999), is implemented through the application programmer interface of Alias|WaveFront. We would like
to implement a growth system in the toolbox using the lessons learned from MoSS.
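The rewriting core of such an L-system can be sketched in a few lines; the rules below are illustrative only and do not reproduce the MoSS grammar, its turtle interpretation, or the Alias|WaveFront integration.

```java
import java.util.*;

public class LSystem {
    // Apply the rewriting rules `steps` times; symbols without a rule are copied.
    static String rewrite(String axiom, Map<Character, String> rules, int steps) {
        String s = axiom;
        for (int i = 0; i < steps; i++) {
            StringBuilder next = new StringBuilder();
            for (char c : s.toCharArray())
                next.append(rules.getOrDefault(c, String.valueOf(c)));
            s = next.toString();
        }
        return s;
    }

    public static void main(String[] args) {
        // F: grow forward; [ and ]: push/pop growth state; +: change direction
        Map<Character, String> rules = Map.of('F', "F[+F]F");
        System.out.println(rewrite("F", rules, 2)); // F[+F]F[+F[+F]F]F[+F]F
    }
}
```

A geometric interpreter would then walk the resulting string, with attractors, repellors, and the bounding volume modulating each growth step.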
4.0 Summary and Conclusions
In summary we have described Emergent Design, a new design process for Architecture. Emergent
Design stresses:
 the investigation of a design problem’s elements and their inter-relationships
 a process of solution composition that works bottom-up and is guided by considering the design elements as a collection of interacting agents that give rise to emergent global coherence
 the exploitation of computation in the form of Artificial Life inspired software tools that can explore possible solutions in terms of dynamics, evolution, and the interaction of a collection of processes.
Central to the definition of Emergent Design is the exploitation of computation and computer software to
explore design possibilities as dynamic agent-based simulations. We have introduced a curriculum
centered on Emergent Design. The curriculum is project and studio based. It stresses interdisciplinary
collaboration and a team-oriented focus on a broad yet concrete design project. It allows students to learn
about and experience diverse roles within a design team. The course emphasizes design as process over
design as a product. This approach is achieved by having the students engage architecture from a
complex and contemporary perspective, as a set of dynamic and interacting material processes, and by
having them learn the value of tool design. Emergent Design, because of its emphasis on design process
and tools, has the potential to change the way design in general is taught on a national and international
level. One of our intentions is to show that traditional, rigidly defined roles such as "architect" and
"programmer" are not as effective and flexible as "designer" roles, which are personalized in terms of
different types of information, demands, or technical skills. These definitions of specialized designer roles
will lead to more effective design processes and improved design outcomes.
We have described a Java-based, open-source software toolbox that supports the principles of Emergent
Design. A course offered in Spring 1999 allowed us to show, through case studies, how the software both
supported Emergent Design investigations and taught students the working principles of Emergent
Design. The case studies can be considered the first, but not last, successful examples of the Emergent
Design toolbox’s value.
To conclude more broadly, Emergent Design has already begun to prove itself as a powerful medium of
communication for developing and diffusing transdisciplinary concepts. Computational approaches have
long been appreciated in physics and in the last twenty years have played an ever-increasing role in
chemistry and biology. In our opinion, they are just coming into their own in Architecture. Organization
of architectural systems has reached an unparalleled level of complexity and detail. Modeling of
architectural systems is evolving into an important adjunct of experimental design work. It is clear that
architectural design is increasingly complex and has many similarities to living systems and
the feedback loops of biological signaling pathways. Just as in biology, the tools of computation are
essential to understanding such systems. The revolution in computer technology enables complex simulations that
were impossible to implement even a decade ago. Effective use of this technology requires substantial
understanding of complex systems throughout all stages of the simulation process (Kollman & Levin,
1996). The relationship between simulation, mathematics and design ties architecture to more
universal theories of dynamical systems. This larger theoretical framework and the techniques of
investigation developed in Emergent Design provide a common framework not only for students in architecture,
engineering, and management but also for researchers and students in disciplines of fundamental importance
to the future of architecture, including mathematics, computer science, artificial intelligence, and the life
sciences. Emergent Design provides an interdisciplinary platform and a new model of design that has the
potential to radically transform design practice and design education in the next decade.
References
Bonabeau, E.W. (1997). "From Classical Models of Morphogenesis to Agent-Based Models of Pattern
Formation". Artificial Life 3: 199-211.
Bonabeau, E.W. and G. Theraulaz (1991). "Why Do We Need Artificial Life?" In Artificial Life: An
Overview, C.G. Langton, ed. MIT Press.
Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading,
MA: Addison-Wesley.
Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: The University of
Michigan Press.
Kollman, P., and Levin, S., eds. (1996). “Modeling of Biological Systems, A Workshop at the National
Science Foundation March 14 and 15, 1996”. Washington: National Science Foundation.
Kwinter, S. (1998). “The Genealogy of Models: The Hammer and the Song”. ANY, No. 23, New York:
ANY Corporation: 57-62.
Langton, C.G. ed. (1989). Artificial Life. Santa Fe Institute Studies in the Sciences of Complexity, Proc.
Vol. VI. Reading, MA: Addison-Wesley.
Langton, C.G., Taylor, C., Farmer, J.D., and S. Rasmussen, eds. (1991) Artificial Life II. Santa Fe
Institute Studies in the Sciences of Complexity, Proc. Vol. 10. Reading, MA: Addison-Wesley.
Lindenmayer, A. and P. Prusinkiewicz (1989). "Developmental Models of Multicellular Organisms: A
Computer Graphics Perspective". In Artificial Life. Santa Fe Institute Studies in the Sciences of
Complexity, Proc. Vol. VI. Reading, MA: Addison-Wesley.
Prusinkiewicz, P. and J. Hanan (1989). Lindenmayer Systems, Fractals, and Plants. Springer-Verlag
Lecture Notes in Biomathematics, No. 79.
Resnick, M. (1994). "Learning About Life". Artificial Life Journal, Vol. 1, No. 1-2, pp. 229-241.
Testa, P. and O’Reilly, U. (1999). “MoSS: Morphogenic Surface Structures”. Greenwich 2000
Conference Proceedings. London: Interscience Communications (in press).
Wilensky, U. and Resnick, M. (1998). "Thinking in Levels: A Dynamic Systems Perspective to Making
Sense of the World". Journal of Science Education and Technology. Vol. 8, No. 2.
Wolfram, S. (1984). "Computer Software in Science and Mathematics". Scientific American, 251(3): 188-203.