Putting some (artificial) life into models of musical creativity
PETER M. TODD
Center for Adaptive Behavior and Cognition
Max Planck Institute for Human Development
Berlin
Creating music is a social activity. My composition teachers impressed on me the
feeling that, if you cut a tree down in the forest and there's no-one else there to hear it, then
it’s not (fully) music; but if you have an audience listening and responding to the tree-felling,
then it can be a symphony. If we want to build artificial systems that can help us to create
music—or, even more, that can attempt to create music on their own—we should strive to
include the social element in those systems. The artificial intelligence approach to musical
creativity has often been a solitary affair, constructing lone monolithic systems that come up
with music by themselves. Instead, can we build a more socially-motivated group of
interacting artificial agents, who then create music in this social context? The answer is
yes—but to do so, we need to move away from the standard conception of artificial
intelligence, and enter the world of artificial life.
The study of artificial life (or alife) aims at uncovering the principles of living systems
in general—not just as they are manifested here on Earth—including the ways that organisms
adapt to and behave in their physical and social environments. To explore such questions,
alife researchers build simulations of organisms or agents “living” in artificial environments
that may contain resources like food and water, hazards like predators or poisons, and other
agents that provide opportunities for fighting, mating, or other types of interactions. These
models are often simplified down to just those features that are essential to answer some
question of interest—for instance, if researchers wanted to study how signalling can reduce
conflict, agents might just have the abilities to generate and perceive signals, to fight and
move away, and to guard territories, but not to eat or reproduce. This agent-based modeling
approach from artificial life provides a rich framework within which to build systems of
socially interacting individuals. The question now is, what components are needed in these
models to explore the social creation of music?
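
To make the flavor of such stripped-down models concrete, here is a minimal sketch in Python of the signalling scenario just described, with every class name, threshold, and noise level invented for illustration: agents can signal, perceive signals, and either fight or retreat, and nothing else.

```python
import random

class Agent:
    """A minimal territorial agent that can only signal, perceive
    signals, and decide whether to fight or retreat (hypothetical)."""

    def __init__(self, strength):
        self.strength = strength

    def signal(self):
        # The signal is a noisy cue of the agent's true strength.
        return self.strength + random.gauss(0, 0.1)

    def respond_to(self, rival_signal):
        # Retreat if the rival sounds clearly stronger; otherwise fight.
        return "retreat" if rival_signal > self.strength + 0.2 else "fight"

def encounter(a, b):
    """One territorial contest: exchange signals, fight only if
    neither agent backs down."""
    if a.respond_to(b.signal()) == "retreat":
        return "retreat"
    if b.respond_to(a.signal()) == "retreat":
        return "retreat"
    return "fight"

agents = [Agent(random.random()) for _ in range(10)]
outcomes = [encounter(*random.sample(agents, 2)) for _ in range(100)]
print(outcomes.count("fight"), "fights in 100 encounters")
```

Running many such encounters while varying the signal noise shows directly how honest signalling can reduce the number of costly fights, which is the kind of question these pared-down models are built to answer.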
There are at least three ways we can go about adapting alife models of interacting
agents to the task of music composition. First, we can construct models of artificial agents
going about their business in their simulated world—say moving around, looking for food,
avoiding bumping into rocks and each other—and as they behave, we convert some aspect of
their behavior into sound and listen to them. These agents are not musical in and of
themselves; rather, we are applying sonification to their behavior patterns to see (or hear)
what emerges. Their social interactions will affect the music we hear, but the music being
produced will not affect their social interactions, nor anything else about their lives; instead,
the music is a side-effect of whatever the agents are doing.
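
A minimal sketch of this sonification idea, with an invented position-to-pitch mapping and no connection to any particular published system, might look like this:

```python
import random

GRID = 24         # width of a one-dimensional world (assumed)
BASE_PITCH = 48   # MIDI note assigned to position 0 (assumed)

def step(position):
    """Random walk: move one cell left or right, staying on the grid."""
    return min(GRID - 1, max(0, position + random.choice((-1, 1))))

def sonify(positions):
    """Read each agent's position as a pitch: one chord per time step."""
    return [BASE_PITCH + p for p in positions]

agents = [random.randrange(GRID) for _ in range(4)]
for t in range(8):
    agents = [step(p) for p in agents]
    print(f"t={t}: chord {sonify(agents)}")  # send these to a synthesizer
```

The agents know nothing about the sound they make; changing the mapping changes the music without changing their behavior, which is exactly the one-way relationship described above.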
A second, more directly musical approach is to let each individual produce its own
music—its own song, for instance—as it goes about its existence, and to use this music to
determine the survival or reproduction of each agent. Then over time the songs present in the
population can evolve as more successful songs lead to greater survival and reproduction of
the individuals singing those songs, and hence to more copies of versions of those songs in the
next generation. This artificial evolutionary process can lead to more complex or interesting
pieces of music if allowed to go on long enough. In models of this type, music production is
intrinsic to each individual, rather than merely being a consequence of its normal behavior as
in the previous approach, and the music an individual produces has material consequences for
its own life in turn, so that in some sense the music “matters” to the agents.
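
The skeleton of such an evolutionary system might look like the sketch below, where the song representation, the mutation operator, and the placeholder fitness function are all assumptions made for illustration:

```python
import random

SONG_LEN, POP_SIZE = 8, 20
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of C major (assumed)

def random_song():
    return [random.choice(SCALE) for _ in range(SONG_LEN)]

def fitness(song):
    # Placeholder critic: reward smooth, stepwise melodic motion.
    return -sum(abs(a - b) for a, b in zip(song, song[1:]))

def mutate(song):
    song = song[:]
    song[random.randrange(SONG_LEN)] = random.choice(SCALE)
    return song

population = [random_song() for _ in range(POP_SIZE)]
for generation in range(50):
    # Fitter songs leave more (mutated) descendants in the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = parents + [mutate(random.choice(parents)) for _ in parents]

print("best evolved song:", max(population, key=fitness))
```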
However, this is not yet really social creation of music, because the music produced by
an individual is not heard and reacted to by other individuals in the population, but instead is
evaluated by some external top-down critic. This critic can be an artificially-designed judge,
such as an expert system looking for particular melodic or harmonic developments. Or it can
be a human user, listening to songs one at a time or to the whole “orchestra” composed of the
population at once, and rewarding those individuals who produce more pleasing songs (or
orchestral parts) with more offspring. So, although a population of individuals is creating
music here, each individual still (as in the sonification case) remains blissfully unaware of
what the others are singing, and the truly social element remains lacking from the musical
process.
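
In implementation terms, the two kinds of critic are simply interchangeable fitness functions plugged into an evolutionary loop like the one sketched above; the versions below are hypothetical stand-ins, not any published system's code:

```python
def human_fitness(song):
    # Human-in-the-loop critic: musically informed, but slow and tiring.
    print("song:", song)
    return float(input("rate this song from 0 to 10: "))

def rule_based_fitness(song, target=(60, 62, 64, 65, 67, 65, 64, 62)):
    # Fixed artificial critic: here, closeness to an arbitrary target
    # phrase stands in for an expert system's melodic or harmonic rules.
    return -sum(abs(a - b) for a, b in zip(song, target))
```

Either function could replace fitness() in the loop above; the trade-off discussed later (tiresome human feedback versus musically weak fixed critics) lives entirely in this one choice.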
The third approach to using alife models for music composition finally gets at actual
social interaction on the basis of the music created by individuals. In this case, agents produce
musical signals that are heard and reacted to by other agents, influencing for instance the
songs that they themselves sing, or their proclivity to mate, or their vigilance in defending
their territory. Consequently, the music created in this system affects the behavior of the
agents living in this system, giving it a social role. This role is not necessarily the one that this
music would have in the human social world—that is, the agents are creating music that is
meaningful and effective for their own world, but perhaps not for ours. However, because this
system creates music through a social process richer than that of the two less-social
approaches above, its creative products may prove more musically interesting to us as well.
While we will of course not be able to prove which of these three approaches is most
useful for creative music composition—and in the end, it will come down to creative use of
any of these systems by a human composer—we can look at examples of each to get a sense
of what they are capable of producing. There are relatively few examples of the first category,
sonification of the behavior of simulated agents. Toshio Iwai (1992) created a system called
Music Insects that incorporates a small set of insect-like creatures moving over a two-dimensional landscape onto which a user can place patches of different colors. When an insect
crosses a patch of a particular color, it plays a particular associated note. Thus, once an
“environment” of color-note patches has been set up, the movements of the insects are
translated into sound. By appropriate placement of patches and choice of behavioral
parameters of the insects (e.g., their speed and timbre), different musical performances can be
created.
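
A rough sketch of the idea, not Iwai's actual implementation and with an invented color-to-note mapping, might run as follows:

```python
import random

COLOR_TO_NOTE = {"red": 60, "green": 64, "blue": 67}  # assumed mapping

# The user "paints" the world: some cells get a colored patch, some stay empty.
grid = {(x, y): random.choice([None, "red", "green", "blue"])
        for x in range(8) for y in range(8)}

x, y = 0, 0
for _ in range(16):
    # The insect wanders; the painted patches determine what sounds.
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x, y = (x + dx) % 8, (y + dy) % 8
    color = grid[(x, y)]
    if color is not None:
        print(f"insect at ({x},{y}) plays note {COLOR_TO_NOTE[color]}")
```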
In a related but more abstract vein, Miranda (2001), Bilotta and Pantano (2001), and
others have explored “musification” of the dynamic spatial patterns created by cellular
automata. In a cellular automaton, cells (or locations) in a grid (e.g., a two-dimensional
environment) can have different states (e.g., the “on” state could be interpreted as “this cell
contains an agent”), and the states of cells at one point in time affect the states of nearby cells
at the next point in time (e.g., an “on” cell at time t can make a neighboring cell turn “on” at
time t+1). As different cells in a two-dimensional field are turned on by the states of
neighboring cells according to particular production rules, the overall activity pattern of the
cells in this “world” can be converted to sound by further musification rules. Because cellular
automata are commonly used to study the creation of complexity and dynamic patterns, their
behavior can produce interesting musical patterns as well when sonified.
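
As an illustrative sketch (the choice of elementary rule 110 and the pitch mapping are arbitrary assumptions), a one-dimensional cellular automaton can be musified by sounding each generation of cells as a chord:

```python
RULE, WIDTH, STEPS = 110, 16, 8
rule_bits = [(RULE >> i) & 1 for i in range(8)]  # lookup table for the rule

def next_row(row):
    # Each cell's next state depends on itself and its two neighbors.
    return [rule_bits[(row[(i - 1) % WIDTH] << 2) |
                      (row[i] << 1) |
                      row[(i + 1) % WIDTH]]
            for i in range(WIDTH)]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # seed the world with a single "on" cell
for t in range(STEPS):
    chord = [48 + i for i, cell in enumerate(row) if cell]  # assumed mapping
    print(f"t={t}: {chord}")
    row = next_row(row)
```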
Considerably more examples can be found for the second, evolutionary, approach to
using alife models in music composition (for a review, see Todd & Werner, 1999). Several
composers and computer scientists have made systems in which a population of musical
agents has been reduced to its bare bones, or rather genes: each individual is simply a musical
phrase or passage, mapped more or less directly from the individual’s genotype (genetic
representation). These genotypes are in turn used in an artificial evolutionary system (e.g., a
genetic algorithm) that reproduces modified (mutated and shuffled) versions of the musical
passages in the population's next generation, according to how “fit” each particular individual
is. Fitness can be determined by a human listener, as in Biles’s (1994) GenJam system for
evolving jazz solos (with higher fitness being assigned to solos that sound better), or by an
artificial critic, as in Spector and Alpern’s (1995) use of a hybrid rule-based and neural
network critic to assess evolving jazz responses (giving higher fitness to responses that more
closely match learned examples or rules). When human critics are used, these evolutionary
systems can produce pleasing and sometimes surprising music, but usually after many
tiresome generations of feedback. Fixed artificial critics take the human out of the loop, but
have had little musical success so far.
What would happen if we unfix the critics and make them other agents in the
artificial world? This is the central idea of the third, still mostly unexplored,
approach—individuals become both producers and receivers of music. Todd and Werner
(1999) created a population of males that sing brief note sequences and females that listen to
the males and choose a mate from among them based on how well the male songs match their
personal preferences. Offspring were then created with a combination of the traits of their
parents, and over time both songs and preferences coevolved to explore regions of “melody
space” without any human intervention. In Berry’s Gakki-mon Planet (2001), animated
creatures that “walk, eat, mate, play music, die and evolve” populate a graphically-rendered
world. Here again, each individual's music is used to determine with whom it will mate, based
on sound similarity. Human users can also intervene by grabbing creatures and bringing them
together to increase the chance that they will mate and produce new but musically related
offspring. Finally, Miranda (in press) has explored the consequences of a society of agents
interacting in “communication games”, attempting to imitate the musical output of one
another. Over time, the society builds up a repertoire of common musical (or vocal) phrases
through their interactions, creating a sort of effective language which, when extended, could
provide the basis for composition.
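
A bare-bones coevolutionary sketch in the spirit of Todd and Werner's model (though not their actual implementation; the representations, the matching score, and the mutation scheme are all assumptions) might look like this:

```python
import random

SONG_LEN, POP = 6, 10
SCALE = list(range(60, 72))  # a chromatic octave (assumed)

def mutate(seq):
    seq = seq[:]
    seq[random.randrange(SONG_LEN)] = random.choice(SCALE)
    return seq

def match(song, pref):
    # Higher score means the song lies closer to the preferred melody.
    return -sum(abs(s - p) for s, p in zip(song, pref))

males = [[random.choice(SCALE) for _ in range(SONG_LEN)] for _ in range(POP)]
females = [[random.choice(SCALE) for _ in range(SONG_LEN)] for _ in range(POP)]

for generation in range(100):
    next_males, next_females = [], []
    for pref in females:
        mate = max(males, key=lambda song: match(song, pref))
        # Sons inherit the father's song and daughters the mother's
        # preference, both with mutation, so songs and tastes coevolve.
        next_males.append(mutate(mate))
        next_females.append(mutate(pref))
    males, females = next_males, next_females

print("an evolved song:", males[0])
```

Here the critics are themselves evolving members of the population, which is exactly what distinguishes this third approach from the externally judged systems above.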
To date, most of the systems incorporating artificial life methods into music
composition have been exploratory, testing how useful these ideas may be in the creative
process; complete compositions based on these techniques have been rare. Some of these
techniques are well-developed enough that we should see their use by composers increasing,
but truly social alife models of the third approach remain to be studied in depth. This third
category holds the promise not only of providing interesting new creative methods for
composers, but also of giving us insights into the nature of music creation itself as a social
process.
Address for correspondence:
PETER M. TODD
Center for Adaptive Behavior and Cognition
Max Planck Institute for Human Development
Berlin, Germany
Email: [email protected]
WWW: http://www-abc.mpib-berlin.mpg.de/users/ptodd
References
Berry, R. (2001). Unfinished symphonies: Sounds of 3 1/2 worlds. In E. Bilotta, E.R.
Miranda, P. Pantano, and P.M. Todd (Eds.), ALMMA 2001: Proceedings of the workshop on
artificial life models for musical applications (pp. 51-64). Cosenza, Italy: Editoriale Bios.
Biles, J.A. (1994). GenJam: A genetic algorithm for generating jazz solos. In Proceedings of
the 1994 International Computer Music Conference (pp. 131-137). San Francisco:
International Computer Music Association.
Bilotta, E., and Pantano, P. (2001). Artificial life music tells of complexity. In E. Bilotta, E.R.
Miranda, P. Pantano, and P.M. Todd (Eds.), ALMMA 2001: Proceedings of the workshop on
artificial life models for musical applications (pp. 17-28). Cosenza, Italy: Editoriale Bios.
Iwai, T. (1992). Music insects. Installation at the Exploratorium, San Francisco, USA.
http://www.iamas.ac.jp/%7Eiwai/artworks/music_insects.html (Commercially available as
SimTunes in the SimMania package from Maxis.)
Miranda, E.R. (in press). Emergent sound repertoires in virtual societies. Computer Music
Journal.
Miranda, E.R. (2001). Evolving cellular automata music: From sound synthesis to
composition. In E. Bilotta, E.R. Miranda, P. Pantano, and P.M. Todd (Eds.), ALMMA 2001:
Proceedings of the workshop on artificial life models for musical applications (pp. 87-98).
Cosenza, Italy: Editoriale Bios.
Spector, L., and Alpern, A. (1995). Induction and recapitulation of deep musical structure. In
Working Notes of the IJCAI-95 Workshop on Artificial Intelligence and Music (pp. 41-48).
Todd, P.M., and Werner, G.M. (1999). Frankensteinian methods for evolutionary music
composition. In N. Griffith and P.M. Todd (Eds.), Musical networks: Parallel distributed
perception and performance (pp. 313-339). Cambridge, MA: MIT Press/Bradford Books.