Conference Paper
Presented at the 26th Annual Conference of the Society for Electro-Acoustic Music in the United States
January 20–22, 2011
University of Miami Frost School of Music
Miami, Florida

Papers presented at SEAMUS 2011 have been blindly peer reviewed by members of the paper selection committee on the basis of a submitted abstract. The paper presented here is reproduced directly from the author's or authors' manuscript without editing or revision by the conference committee.
Emotional Communication in Computer-Generated Music: Experimenting with Affective Algorithms

Alison Mattek
Bregman Music and Audio Research Studio, Dartmouth College, Hanover, NH, USA
[email protected]
ABSTRACT

Emotional expression has been a goal of many composers from antiquity to present day. However, the introduction of computers into the composition process has caused a shift in aesthetics, and the goals of music-making have deviated from the traditional ideas of emotional communication, especially in the electro-acoustic tradition. As a result, an emotional gap exists between musical ideas produced via computer algorithms and musical ideas produced via traditional methods (i.e. the human hand). This paper proposes algorithmic models that can begin to narrow this gap by exploiting established relationships between musical features and perceived emotion.
1. INTRODUCTION

The art of music can be examined through two overarching perspectives: psychology and mathematics. Music, as a form of emotional expression and therapy, is deeply rooted in the field of psychology. However, music is also an extraordinary physical phenomenon that results from the organization of numbers (frequencies) and possesses many underlying mathematical properties. Composers and music theorists alike have reflected on both of these perspectives.

This study examines the aesthetic tendencies of algorithmic computer music and of traditionally composed Western music. The Western music tradition is deeply rooted in emotional expression and affective response. Conversely, the development of computer music promotes an aesthetic approach that emphasizes the mathematical properties of music, mainly because computers are well suited to producing and analyzing numbers but poorly suited to subjective ideas such as ineffable emotion.

This gap between traditional music and computer music is currently being narrowed. Empirical studies that link specific musical features to corresponding affective responses have been applied to music performance algorithms, and these algorithms have successfully produced affective responses in listeners. This study proposes a theory for expressing emotion in computer-assisted algorithmic composition.

2. BACKGROUND

In order to formulate a theory of how to generate emotional music with computers, we must look at the historical relationship between music and emotion. We will also analyze how this relationship changed once computers were introduced into the composition process.

2.1. Music and Emotion

Discussion of the relationship between music and emotions has existed since the philosophical treatises of antiquity.
In Emotion and Meaning in Music, Leonard A. Meyer tells us that "from Plato down to the most recent discussions of aesthetics and the meaning of music, philosophers and critics have, with few exceptions, affirmed their belief in the ability of music to evoke emotional responses in listeners" (p. 6) [1]. A clear example of the influence of perceived emotion on the intent of Western composers can be found in the Baroque tradition of the Doctrine of Affections. The Baroque composer and theorist Nicola Vicentino wrote in his treatise Ancient Music Adapted to Modern Practice that "the composer's sole obligation is to animate the words, and, with harmony, to represent their passions—now harsh, now sweet, now cheerful, now sad—in accordance with their subject matter" [2]. During the Romantic period, Wagner exploited the connection between music and emotion when composing his leitmotivs. He "often used chromatic themes or unusual harmonic progressions to evoke conditions of pain, such as love and death" (p. 477) [3].
This overview gives a few examples of how composers depended on the relationship between musical features and perceived emotions in order to communicate with their audience. Even though styles and ideologies changed dramatically throughout the course of Western music, certain principles regarding the relationship between musical features and affective response remained consistent. When analyzing Western compositions, we find that simple, diatonic, and major melodies and harmonies are associated with positive affect, while complex, chromatic, and minor melodies are associated with negative affect. This rule is not universal, given that perception varies from individual to individual. However, it provides a convincing basis for how humans communicate emotional meaning through the art of music.

2.2. The Aesthetics of Algorithmic Composition

The introduction of algorithms and computers into the composition process formalized an aesthetic approach to music that did not aim to evoke any specific emotions in the listener. Although algorithmic techniques have been used in composition for centuries [4], what distinguishes many of the algorithmic composers of the late twentieth century from their predecessors is their attempt to remove the human element from the composition process. The results of this approach are both noteworthy and interesting, but the music written by computers is isolated from music written by humans, mainly because it lacks any emotional component. Algorithmic techniques include rule-based systems, Markov chains, L-systems, and other mathematical models, all of which focus on the numerical aspects of music but ignore the psychological consequences of these number patterns.

2.3. Affective Performance Algorithms

By understanding how the numerical aspects of music affect our perceptions and emotional responses, it is possible to systematically evoke emotional responses by creating certain patterns. This has been shown in the music performance algorithms created by Livingstone et al. [5] The Computational Music Emotion Rule System (CMERS) was the first system to possess real-time music emotion modification capability. CMERS is essentially a system that reads in a MIDI score and outputs a performance of that score. The system modifies certain features of the score before the performance occurs; most modifications are then done via real-time filters. Each filter pertains to a specific rule and implements that rule according to the emotion to be conveyed.
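Such rule-based processing can be pictured as a chain of filters applied to a stream of note events. The following minimal Python sketch illustrates that filter-chain idea only; the quadrant names, rule factors, and event format are assumptions made for this sketch, not the published CMERS rules.

```python
# Sketch of a CMERS-style rule-filter chain. The factors and quadrant
# names below are illustrative assumptions, not the published rules.

def scale_tempo(events, factor):
    """Scale onset times: factor < 1.0 compresses time, i.e. a faster tempo."""
    return [{**e, "onset": e["onset"] * factor} for e in events]

def scale_velocity(events, factor):
    """Scale MIDI velocity (loudness), clamped to the valid 1..127 range."""
    return [{**e, "velocity": max(1, min(127, round(e["velocity"] * factor)))}
            for e in events]

# Each quadrant of the 2-dimensional emotional space maps to a filter chain.
RULES = {
    "happy":  [lambda ev: scale_tempo(ev, 0.8), lambda ev: scale_velocity(ev, 1.1)],
    "sad":    [lambda ev: scale_tempo(ev, 1.3), lambda ev: scale_velocity(ev, 0.8)],
    "angry":  [lambda ev: scale_tempo(ev, 0.7), lambda ev: scale_velocity(ev, 1.3)],
    "tender": [lambda ev: scale_tempo(ev, 1.2), lambda ev: scale_velocity(ev, 0.9)],
}

def apply_emotion(events, quadrant):
    """Run a score (a list of note-event dicts) through one quadrant's filters."""
    for rule in RULES[quadrant]:
        events = rule(events)
    return events

score = [{"pitch": 60, "onset": 0.0, "velocity": 80},
         {"pitch": 64, "onset": 0.5, "velocity": 80}]
angry = apply_emotion(score, "angry")   # faster onsets, louder velocities
```

Each filter returns new event dicts rather than mutating the input, so the same score can be rendered into several target emotions.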
Studies that designed algorithmic musical performance systems, such as the Computational Music Emotion Rule System (CMERS), have drawn specific connections between musical features and emotional perception. These connections are not only valid in the performance domain, but also in the domain of composition. The rules that have been set in place for music performance by algorithms can in turn be used for music composition by algorithms.

3. HYPOTHESIS

This study maps emotional perception on a two-dimensional axis of valence and arousal, as was done in the CMERS study. CMERS utilizes a 2-dimensional emotional space to categorize emotions (see Figure 1). This space generates four quadrants, which can be loosely labeled Angry, Happy, Tender, and Sad. CMERS could successfully alter the perceived emotion of a work regardless of its initial emotion, causing significant changes in both valence and arousal.

Figure 1 – Two Dimensional Emotional Space

The study revolving around CMERS suggested a hypothesis that distinguishes features that determine changes in valence from features that determine changes in arousal. Namely, arousal is strongly influenced by tempo and loudness, and valence is strongly influenced by mode and harmonic complexity. This study will focus on the manipulation of two specific features: tempo and harmonic complexity. However, harmonic complexity is closely tied to mode, so these two features are discussed together.

The relationship between tempo and arousal is a sensible conclusion. Fast tempos are associated with energy. In states of high energy, individuals experience fast thoughts and high heart rates, and tend to move more [6]. This relationship between musical feature and physical experience may be one of the reasons why tempo has such a significant effect on the level of perceived arousal.

The association between harmonic complexity and valence is well described in Leonard Meyer's book Emotion and Meaning in Music. Specifically, Meyer spells out the association between the minor mode and a negative affective response. Meyer makes two significant points regarding this relationship: (1) the two most stable tones of the scale, the tonic and the dominant, have additional "leading" tones in a minor key. The proximity of these active tones
to the stable tones "makes the delay in the arrival of a substantive tone particularly intensely felt" [1]; (2) the minor mode possesses a greater repertory of tones than other modes, specifically the major mode. This means that there is a lesser probability of any one tone being reached, which therefore causes the minor mode to be more ambiguous. This is true from both a melodic and a harmonic standpoint.

The minor mode can be associated with harmonic complexity because it contains a larger repertory of tones, and consequently a larger repertory of chords. Therefore, the minor mode is harmonically more complex than the major. In this sense, harmonic complexity and mode cannot be considered separately from one another.

Based on the above speculations, a generative computer algorithm can theoretically manipulate the repertory of tones to affect the perceived valence, and manipulate the tempo to affect the perceived arousal, of a generated composition.

4. IMPLEMENTATION

The excerpts for this study were generated in the athenaCL environment. The athenaCL system is a composition tool that was written in Python by Christopher Ariza [7]. The final sequences were output to MIDI files and converted to wave files in Logic Pro 9. The instrument used to perform the MIDI files was a general MIDI piano.

4.1. Algorithmic Excerpts

In this study, four different pitch spaces were created to represent four general positions on the valence axis of the two-dimensional emotional space: low valence, medium-low valence, medium-high valence, and high valence. The pitch space corresponding to high valence, or positive affect, contained the smallest repertory of pitches. Conversely, the pitch space corresponding to low valence, or negative affect, contained the largest repertory of pitches. Table 1 shows the four pitch spaces.

These pitches were organized by various rhythms and tempos according to where the composition could be identified on the arousal axis. Four different rhythm spaces were created to represent four general positions along the arousal axis of the two-dimensional emotional space: high arousal, medium-high arousal, medium-low arousal, and low arousal. The variables adjusted in each rhythm space were related to tempo and rhythm. Tempo was fastest for high arousal and slowest for low arousal. High arousal also contained faster rhythms, such as eighth notes; low arousal contained slower rhythms, such as whole notes. Table 2 shows each rhythm space, its corresponding arousal, and its tempo and rhythmic characteristics.

All combinations of pitch spaces and rhythm spaces resulted in a total of sixteen excerpts.

4.2. Listening Tests

Twenty-two individuals volunteered to take a listening test for this study. All subjects had experienced formal music training, and twenty were enrolled as university music students. Prior to the listening tests, subjects were given a handout that described the 2-dimensional emotional space and the difference between perceived and felt emotion [8].
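The generation procedure of Section 4.1, four pitch spaces crossed with four rhythm spaces, can be sketched in Python as follows. Since Tables 1 and 2 are not reproduced here, the pitch sets, tempi, and durations below are hypothetical stand-ins, not the values used in the study.

```python
import itertools
import random

# Hypothetical stand-ins for Tables 1 and 2.
PITCH_SPACES = {              # valence level -> MIDI pitch repertory
    "high":        [60, 62, 64, 67, 69],            # smallest repertory
    "medium-high": [60, 62, 64, 65, 67, 69, 71],
    "medium-low":  [60, 62, 63, 65, 67, 68, 70, 71],
    "low":         list(range(60, 72)),             # largest repertory
}
RHYTHM_SPACES = {             # arousal level -> (tempo in BPM, duration in beats)
    "high":        (160, 0.5),   # fast tempo, eighth notes
    "medium-high": (120, 1.0),
    "medium-low":  (90,  2.0),
    "low":         (60,  4.0),   # slow tempo, whole notes
}

def generate_excerpt(valence, arousal, n_events=16, seed=0):
    """Randomly draw pitches from one pitch space; the rhythm space fixes
    the tempo and note durations."""
    rng = random.Random(seed)
    tempo, dur = RHYTHM_SPACES[arousal]
    pitches = [rng.choice(PITCH_SPACES[valence]) for _ in range(n_events)]
    return {"tempo": tempo, "durations": [dur] * n_events, "pitches": pitches}

# All pitch-space x rhythm-space combinations: 4 x 4 = 16 excerpts.
excerpts = {(v, a): generate_excerpt(v, a)
            for v, a in itertools.product(PITCH_SPACES, RHYTHM_SPACES)}
```

In the study itself this role was played by athenaCL; the sketch only shows how the sixteen excerpts are enumerated from the two spaces.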
5. RESULTS
The listening-test graphical user interface was implemented in MATLAB. Figure 2 shows a screenshot of the interface. The blue axes represent arousal and valence, and the number on the right shows how many excerpts the subject has listened to. All subjects heard the excerpts played in a random order.

Figure 2 – Listening Test Interface

The subjects listened to each of the sixteen excerpts generated by the previously described algorithms. After hearing each excerpt once, the subject selected a point with the mouse on the 2-dimensional emotional space that corresponded with the emotion the subject felt the excerpt was trying to convey.

The results of the listening test showed that, on average, listeners considered excerpts with higher harmonic complexity to have a more negative perceived emotion and excerpts with lower harmonic complexity to have a more positive perceived emotion. Additionally, excerpts with faster tempos and rhythms were perceived to have a higher emotional arousal, and excerpts with slower tempos and rhythms were perceived to have a lower emotional arousal. This is interesting, because the excerpts were composed by an entity devoid of emotion.

Figure 3 shows a graph with the average response for each of the excerpts (in black) and the expected result (in red) connected with a line.

Figure 3 – Average and Expected Results
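The comparison in Figure 3 amounts to averaging each excerpt's subject responses in the 2-dimensional space and measuring the distance to the expected position. A minimal sketch, using hypothetical ratings rather than the study's data:

```python
# Sketch of the Figure 3 computation: average one excerpt's
# (valence, arousal) ratings over subjects and compare to the target.
# The ratings below are hypothetical, not the study's data.

def average_response(clicks):
    """Mean (valence, arousal) over one excerpt's subject clicks."""
    n = len(clicks)
    return (sum(v for v, _ in clicks) / n, sum(a for _, a in clicks) / n)

def error(avg, expected):
    """Euclidean distance between the average and the expected position."""
    return ((avg[0] - expected[0]) ** 2 + (avg[1] - expected[1]) ** 2) ** 0.5

clicks = [(0.8, 0.6), (0.6, 0.4), (0.7, 0.5)]   # hypothetical subject ratings
avg = average_response(clicks)                   # roughly (0.7, 0.5)
err = error(avg, (1.0, 0.5))                     # distance to the expected point
```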
6. DISCUSSION
The results of the listening test show a correlation between tempo and arousal and between harmonic complexity and valence. These two variables alone were able to create a significant difference in perceived emotion. It is important to look at the results of each excerpt relative to one another. In every case, excerpts with a faster tempo were perceived as having a higher arousal. In every case except one, excerpts that were more harmonically complex were perceived as having a lower valence. The one exception is excerpt 6, which was rated as having a lower perceived valence than expected. There are two possible reasons for this: (1) even though the harmonies were not complex, most of them were minor in quality. All of the excerpts consisted of randomly generated chords, and by chance, even though this excerpt was in C major, more minor harmonies were generated than major harmonies. Because mode affects valence as well as harmonic complexity, a perceived minor mode would cause a lower perceived valence. (2) According to Leonard Meyer, slow tempo can sometimes affect valence [1]. Because works in minor keys are technically harder in beginners' literature, beginner musicians associate minor modes, and thus negative valence, with slow tempos.

One can also see from the results that it was easier for the algorithms to convey emotions in the upper right quadrant. Perhaps this is because positive emotions are often expressed with simple harmonies, which can be easily emulated by a computer algorithm. Conversely, negative emotions are more complex to us, and in our minds perhaps require more detail and expression than this particular algorithm had to offer in order to be convincing.

This study suggests that, by the manipulation of two specific musical features, tempo and harmonic complexity, algorithmically generated music can successfully convey emotion on the 2-dimensional emotional space. This concept provides composers with a useful tool if they choose to use their music to communicate emotions. Composers of computer-assisted algorithmic music can utilize many different complex algorithms to structure their pieces, yet still adjust the affective response by manipulating these two parameters.

Emotional communication is by no means the only purpose of composition. This study does not propose that composers should adhere to traditional means of communicating emotion in music. It merely suggests a tool for composers who wish to use computer-generated material but would still like to convey emotion in their music.

Although tempo and harmonic complexity have strong effects on arousal and valence, respectively, the concept of affective response to music extends beyond the scope of these two features. Arguably, every musical feature contributes to the listener's affective response, and every listener's affective response is slightly different from the next.
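As a sketch of how these two control parameters might be exposed to a composer, the following Python function maps a target (valence, arousal) point to a tempo and a pitch repertory. The linear ranges and pitch sets are illustrative assumptions, not values from the study.

```python
# Sketch: expose the paper's two parameters, tempo (arousal) and
# harmonic complexity via repertory size (valence), as one mapping
# from a target point in the 2-dimensional emotional space.

CHROMATIC = list(range(60, 72))            # full repertory: complex, low valence
DIATONIC = [60, 62, 64, 65, 67, 69, 71]    # C-major subset: simple, high valence

def affective_parameters(valence, arousal):
    """Map valence and arousal, each in [-1, 1], to a tempo in BPM and a
    pitch repertory whose size shrinks as valence rises."""
    tempo = 110 + 50 * arousal             # 60 BPM (calm) .. 160 BPM (excited)
    # Higher valence -> fewer available pitches -> lower harmonic complexity.
    span = len(CHROMATIC) - len(DIATONIC)
    size = round(len(CHROMATIC) - (valence + 1) / 2 * span)
    repertory = CHROMATIC[:size] if size > len(DIATONIC) else DIATONIC[:size]
    return tempo, repertory

tempo, pitches = affective_parameters(valence=1.0, arousal=1.0)  # "happy" corner
```

Any generative algorithm can then draw its pitches from `repertory` at `tempo`, leaving the rest of its structure untouched.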
7. ACKNOWLEDGEMENTS
Thanks to Colby Leider, my undergraduate advisor, who helped me with this research. Thanks also to everyone at the University of Miami who participated in the listening tests.

8. REFERENCES

[1] Meyer, Leonard A. Emotion and Meaning in Music. University of Chicago Press, Chicago, IL: 1956.

[2] Vicentino, Nicola. Ancient Music Adapted to Modern Practice, trans. Maria Rika Maniates, ed. Claude V. Palisca. Yale University Press, New Haven, CT: 1989.

[3] Bonds, Mark Evan. A History of Music in Western Culture. Prentice Hall, Upper Saddle River, NJ: 2006.

[4] Burns, Kristine H. The History and Development of Algorithms in Music Composition. Diss. Ball State University: 1994.

[5] Livingstone, S.R., Ralf Muhlberger, Andrew R. Brown, and William F. Thompson. 2010. "Changing Musical Emotion: A Computational Rule System for Modifying Score and Performance." Computer Music Journal 34:1, pp. 41-64.

[6] Gillis, Rod. Understanding Psychology. 2005.

[7] Ariza, Christopher. An Open Design for Computer-Aided Algorithmic Music Composition: AthenaCL. Diss. New York University, 2005. Print.

[8] Gabrielsson, A. 2002. "Perceived Emotion and Felt Emotion: Same or Different?" Musicae Scientiae (Special Issue 2001-2002): pp. 123-148.