Interactive Advertising Jingles:
Using Music Generation for Sound Branding*
Gilbert Beyer, Max Meier
LMU Munich
Programming and Software Engineering
Oettingenstr. 67, 80538 Munich, Germany
[email protected]
[email protected]
Abstract. Music in advertising is an intensively used social engineering technique and is commonplace to today's customers. From an advertising perspective, the benefit of music is that it is a strong vehicle for conveying a memorable message to the target group. Mainly used as jingles in radio and television commercials or as background components in shopping environments, sounds have also been put into interactive contexts on the internet and in digital outdoor media. Yet, there have been only very limited attempts to control music within the interactive experience. This paper describes an approach that lets advertising jingles and sounds be controlled by user interaction, utilizing a novel technique of music generation based on soft constraints. The paper starts with a vision of the future development of sound branding. Following that, we present our concept of interactive advertising jingles and describe the design and techniques of our prototype system for music generation. We close with the implications of our present results for further work.
Keywords: music generation, sound branding, sonic mnemonics, city
soundscapes, advertising displays, algorithmic composition, soft constraints.
1  A Vision of Future Sound Branding
The use of music within advertising media began in the 20th century with jingles in radio and television commercials, as well as music playing statically in the background at the point-of-sale or at sales events. With the emergence of the internet, sounds used in the branding context became attached to interactive elements. Sounds in such dynamic and interactive media appear either as a sound logo (a short auditory message which is the acoustic counterpart to a visual logo, and is thus often presented at the beginning or the end of a commercial), as an advertising jingle (a tune that is often played along with lyrics to convey an advertising slogan) or as background music [15]. In the case of interactive media and product design, acoustic signals or so-called sound objects are connected to certain events like a mouse click on a graphical element or to activities like the closing of a car door [3].

* This work has been partially sponsored by the EC project REFLECT, IST-2007-215893.
In recent years, a trend towards interactive advertising media can be identified. Casual games on the web constitute one example, while in the out-of-home domain interactive play, i.e. invitations to less structured activities, most often also implies creative or participatory elements [1]. Such media often focus on manipulating visual objects that are usually constituent parts of the brand identity. Acoustic events do not appear at all or play only a secondary role, supplementing the visual interaction or playing statically in the background. Nevertheless, the identity of many brands is defined by both a visual and an acoustic appearance. On the other hand, beyond the advertising context, interactive music systems have become increasingly popular: with social music games like Guitar Hero, popular songs can be re-played together, and easy-to-use musical interfaces like Yamaha's Tenori-On give everyone the possibility of musical expression, even without any musical knowledge.
We expect that these trends will converge in the future, producing new advertising media that will enable customers not only to play with, but also to manipulate and shape brand melodies by means of interactive control mechanisms. In general, this may include sound logos, jingles and background songs alike. The vehicles for such applications can be interactive displays in shopping malls and outdoor environments, the internet, mobile devices or gaming consoles. In this work, we present an approach that brings together interactive advertising and interactive music systems, with a strong focus on integrating a given acoustic brand identity in a suitable way.
2  Related Work
In recent years, many concepts and products making use of music in interactive advertising systems have emerged. In the following, we present related work focusing on sound branding in general, the combination of interaction and music composition, and technical solutions for music generation and for dealing with dynamically changing preferences.
Of special interest to our work are theoretical works on sound branding and interactive sound branding. There exist many articles on specific sound branding issues in classical and digital media, but they do not cover the field of user-controllable brand music. No work so far has focused on how to control the brand music itself within the interactive experience, although the same is often done with visual elements of the brand identity. For a general survey on the topic of sound branding we refer to [3] and [15]. A good overview of algorithmic composition is provided by [4] and [13]. Examples of interactive music composition and generation systems are Electroplankton [8] and Cyber Composer [7]. To our knowledge, there is currently no work describing the combination of music generation and interactive advertisements. Among the great many possibilities for interacting with music, the interactive system Light Tracer, which invites users to creative activities in physical space while transforming these interactions into music, is most relevant to the interaction concept of our system [11].
Our approach to generating music is based on a reasoning technique called soft constraints, which allows dealing with soft and concurrent problems in an easy way. Bistarelli et al. [2] introduced a very general and abstract theory of soft constraints based on semirings. Building on this work, monoidal soft constraints were introduced in [6], a soft-constraint formalism particularly well suited to multi-criteria optimization problems with dynamically changing user preferences. Soft constraints have successfully been applied to problems such as optimizing software-defined radios [18] or orchestrating services [17]. We introduced a soft-constraint-based system for music therapy in [5], giving us a basic proof of concept for composing music with this technique.
3  Interactive Advertising Jingles
The functions of music in advertising are manifold. Sounds are used to gain or hold the attention of the listener [9,10], to influence the mood of consumers, to structure the time of an ad, or to persuade consumers by using rhetorical elements like rhythm, repetition, narrative, identification or location [14]. The benefits are more effective information reception and memorization, an enhanced user experience through multisensory branding [15], and the fact that the acoustic sensory channel is harder for the audience to ignore. But brand melodies are also subject to specific requirements. For example, the characteristics of an effective sound logo can be listed as distinctiveness, memorability, flexibility, conciseness and brand fit [15]. These are strong constraints an interactive advertisement has to conform to.
To achieve this, we make use of a novel technique that allows generating music in real-time with respect to certain preferences that express 'how the music should sound'. Several preferences are derived from user interaction, for example 'high and fast notes'. These are combined with additional preferences expressing that the resulting melodies should comply with a brand's distinct acoustic identity. Since a certain amount of control over the music is assigned to the user, it is inherently impossible to play a given melody exactly, note by note. Nevertheless, it is possible to generate melodies which are similar to it by using note pitches as well as tonal and rhythmic patterns appearing in the brand's distinct melody. In this way, melodies can be generated that account for both interactivity and brand recognition.
4  Prototype for Interactive Advertising Jingles
In order to test our approach, we designed a sample scenario in the out-of-home domain and set up a prototype consisting of a sensor framework that collects information about the user, a wall of luminous plasma displays showing a graphical application adapting to the users' movements and the resulting music, and a software framework that realizes the music generation. The scenario and the components of our prototype are described in the following:
4.1  Sample Scenario
As an example scenario, we developed an interactive advertising installation for an imaginary soft drink using an 'underwater theme' (see Figure 1). Our application consists of a display and an audio system installed at a preferably quiet public place. When no one is standing in front of the display, only a simple background (a seafloor) can be seen and no music can be heard. As soon as someone enters the interaction zone, his silhouette appears on the screen like an abstract mirror image, visualized by small water bubbles ascending from the person's shape.
Fig. 1. Prototype for a Music Generating Advertisement
Depending on the passer-by's movements, notes are played and visualized with bigger colored bubbles such that each note corresponds to one bubble. In this way, an initial implicit interaction is performed, with visual and acoustic feedback to the user's movements. When someone is attracted and starts to interact explicitly with the system, he can recognize the connection between his movements and the notes he hears: when the movements become faster, the notes also play faster, while not moving at all leads to silence. Moving the upper parts of the body (e.g. the arms) plays higher notes and, vice versa, moving lower parts (e.g. the legs) leads to lower notes. This is also reflected in the visualization on the display: the note bubbles ascend from a position corresponding to their note pitch. The resulting melodies do not only fit the person's movements; they are furthermore generated in such a way that they comply with the company's brand melody. After a certain time, additional background music is played and accompanies the user's melodies. The background music's notes are also visualized by colored bubbles ascending from a bottle sticking in the sand. In this way, the product becomes involved in an unobtrusive, yet very relevant way.
4.2  Interaction and Visualization
We consider a vision-based sensing framework to be the most convenient technology for our advertising scenario, as it allows collecting information about the passer-by's movements and gestures and, not least, supports implicit interaction. For our first experiments, the sensing is currently realized using marker-based techniques (Touchless SDK [16]). We attach two colored markers to a user's hands to analyze his movements and derive two parameters: the total amount of movement (controlling the rate of played notes) and the average position of the markers (controlling pitch); a minimal sketch of this mapping is given below. This concept can easily be extended to more markers. Our long-term goal is to use markerless body tracking involving the detection of individual body parts.
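The sketch derives the two parameters from the tracked marker positions of two consecutive frames. The Marker type, the coordinate conventions and all names are our own illustrative assumptions; they are neither part of the Touchless SDK nor of our prototype's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical marker position in screen coordinates (Y grows downwards).
struct Marker
{
    public float X, Y;
}

static class InteractionAnalysis
{
    // Derives the two interaction parameters from the marker positions
    // of the current and the previous frame.
    public static (float movement, float pitchPosition) Derive(
        IList<Marker> current, IList<Marker> previous, float screenHeight)
    {
        // Total amount of movement: summed displacement of all markers
        // between two frames (controls the rate of played notes).
        float movement = 0f;
        for (int i = 0; i < current.Count; i++)
        {
            float dx = current[i].X - previous[i].X;
            float dy = current[i].Y - previous[i].Y;
            movement += (float)Math.Sqrt(dx * dx + dy * dy);
        }

        // Average vertical marker position, inverted so that 1.0 means
        // 'top of the screen' (controls the pitch of the generated notes).
        float pitchPosition = 1f - current.Average(m => m.Y) / screenHeight;

        return (movement, pitchPosition);
    }
}
```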
The visualization of the passers-by and the generated music is realized using a so-called particle system. Widely used in computer graphics, particle systems are capable of simulating 'fuzzy' objects with a large number of individual particles (e.g. fire, smoke or water). In our application, we make use of the Mercury Particle Engine [12], which is directly based on Microsoft's game development framework XNA and allows developing applications for e.g. Windows or the Xbox 360 gaming console.

We decided to use a particle system for visualizing persons because we wanted to create a very abstract and fuzzy representation. The user should feel as if he is controlling his portrait rather than playing another character as in a computer game. In our sample scenario, we use water bubbles for visualizing a person's shape, but many other alternatives based on particles are imaginable. Not only natural phenomena can be simulated; it is of course also possible to create unrealistic 'freaked-out' effects.
Fig. 2. Abstract Visualization of a Person and Note Visualization
Every particle emerges from a so-called emitter, which can be a single point as well as a complex geometric object. In our application, we use a dynamically changing polygon based on a passer-by's contour as the emitter for the small water bubbles. The bigger colored water bubbles representing the playing notes are emitted from a single point which is moved according to the passer-by's horizontal position and the note's pitch, such that high pitches emerge from higher positions and vice versa. Once a particle has been emitted, several of its parameters are modified during its lifetime: the water bubbles become bigger and fade out over time, and they are furthermore accelerated upwards in order to let them ascend to the water surface.
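The following sketch illustrates these lifetime modifications with a hypothetical bubble particle. The Mercury Particle Engine realizes such behaviour through its own configurable modifier types, which the sketch does not attempt to reproduce; all constants are arbitrary illustrative values.

```csharp
using System;

// Hypothetical bubble particle showing the modifications described above.
class BubbleParticle
{
    public float X, Y;              // position in screen coordinates (Y grows downwards)
    public float VerticalVelocity;  // negative values move the bubble upwards
    public float Scale = 1f;        // render size factor
    public float Opacity = 1f;      // 1 = fully visible, 0 = faded out

    // Called once per frame with the elapsed time dt in seconds.
    public void Update(float dt)
    {
        VerticalVelocity -= 40f * dt;                  // accelerate upwards
        Y += VerticalVelocity * dt;                    // ascend to the water surface
        Scale += 0.5f * dt;                            // become bigger over time
        Opacity = Math.Max(0f, Opacity - 0.2f * dt);   // fade out over time
    }
}
```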
4.3  Music Generation
For generating music in our system, we use a novel approach to algorithmic composition based on soft constraints [5]. With this technique, music can be generated interactively in real-time by defining certain 'preferences' which express 'how the music should sound'. A typical preference for a single instrument is, for example, 'fast notes with a high pitch'. Besides preferences for single instruments, it is also possible to coordinate multiple instruments with additional preferences. These global preferences typically involve harmonic or rhythmic relations between several instruments, e.g. 'play together in harmonic intervals and in a similar rhythm'. All preferences can also be generated dynamically while playing, for example based on user interaction: in this way, music can be composed interactively in real-time by continually defining preferences which reflect 'how well the music matches the user interaction'.
In our application, we derive two parameters from a user's movements: the total amount of movement (corresponding to the rate of played notes) and the average vertical position of all movements (corresponding to pitch). Based on these two parameters, preferences are generated reflecting the desired speed and pitch. For example, when the user is moving fast in the upper areas of his body (e.g. mostly with his arms), the music should also be fast and have a rather high pitch. Vice versa, when the user is moving slowly and rather in the lower areas (e.g. with his legs), the music should be slow with a low pitch. In our application, the music should fit the user interaction on the one hand, but we also want it to fit a given sound brand on the other hand. This is realized with an additional preference reflecting 'how well the music matches a jingle's distinctive melody'. This preference is generated based on a timed transition model representing the jingle's note pitches and rhythmic patterns as well as the transitions between notes (e.g. 'C is often followed by E or another C'). To sum up, we have preferences based on user interaction as well as preferences reflecting the similarity to a jingle, and in most cases these preferences will be concurrent with each other. Soft constraints are very appropriate for dealing with such problems and allow accommodating several concurrent preferences in an easy yet expressive way.
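The following minimal sketch shows the idea of such a model, reduced to a first-order transition table over note pitches; the rhythmic and timing aspects of the actual timed transition model are omitted, and all names are illustrative.

```csharp
using System.Collections.Generic;

// First-order transition model over a jingle's note pitches.
class JingleTransitionModel
{
    // transitions[a][b] = how often pitch a is directly followed by pitch b.
    private readonly Dictionary<int, Dictionary<int, int>> transitions =
        new Dictionary<int, Dictionary<int, int>>();

    public JingleTransitionModel(IEnumerable<int> jinglePitches)
    {
        int? previous = null;
        foreach (int pitch in jinglePitches)
        {
            if (previous.HasValue)
            {
                if (!transitions.TryGetValue(previous.Value, out var row))
                    transitions[previous.Value] = row = new Dictionary<int, int>();
                row.TryGetValue(pitch, out int count);
                row[pitch] = count + 1;
            }
            previous = pitch;
        }
    }

    // Soft preference: how well does 'candidate' continue 'lastPlayed' with
    // respect to the transitions observed in the jingle? Returns 0..1.
    public double Rate(int lastPlayed, int candidate)
    {
        if (!transitions.TryGetValue(lastPlayed, out var row)) return 0.0;
        int total = 0, matching = 0;
        foreach (var entry in row)
        {
            total += entry.Value;
            if (entry.Key == candidate) matching = entry.Value;
        }
        return total == 0 ? 0.0 : (double)matching / total;
    }
}
```

A model built from a jingle in which C is often followed by E or another C would, for instance, rate these two continuations highest and all other pitches close to zero.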
Once the preferences have been stated, a soft constraint solver can be employed to compute the best possible notes with respect to all preferences: the notes should fit the user interaction as well as a given distinct brand melody. Furthermore, it is also possible to coordinate several instruments among each other with additional global preferences (see Figure 3). In our case, we define a global constraint which maximizes the amount of musical harmony between the interactive instrument and the background music. We use a soft constraint solver which was originally prototyped in Maude [6] and which we later reimplemented in an improved version in C#, making it possible to use it in a soft real-time environment.
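As a much-simplified conceptual stand-in for this selection step, the sketch below rates every candidate pitch with each preference, combines the ratings by summation (the actual solver uses the more expressive monoidal soft-constraint combination from [6]), and picks the best note. All weights, the consonance test and the example values are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A soft constraint, simplified to a rating function over candidate pitches.
delegate double Preference(int candidatePitch);

static class NoteSelector
{
    // Returns the candidate pitch with the best combined rating.
    public static int BestNote(IEnumerable<int> candidates, params Preference[] prefs) =>
        candidates.OrderByDescending(pitch => prefs.Sum(p => p(pitch))).First();

    static void Main()
    {
        int desiredPitch = 67;   // derived from the user's movements (illustrative)
        int lastPlayed = 64;     // previously played interactive note
        int backgroundNote = 60; // currently sounding background note

        int note = BestNote(
            Enumerable.Range(48, 25),                        // candidate MIDI pitches
            p => 1.0 - Math.Abs(p - desiredPitch) / 24.0,    // fit the user interaction
            p => p == lastPlayed || p == lastPlayed + 3      // toy stand-in for the
                 ? 1.0 : 0.0,                                // jingle preference
            p => (((p - backgroundNote) % 12) + 12) % 12     // harmony with background:
                 is 0 or 3 or 4 or 7 ? 1.0 : 0.0);           // prefer consonant intervals
        Console.WriteLine($"best note: {note}");
    }
}
```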
Fig. 3. Concurrent Preferences
With our approach, it is possible to generate music which strongly resembles a given melody but also fits the body movements as described above. For more detailed information about composing music with soft constraints, we refer to [5]; we are also about to publish a general approach for interactively composing music similar to given melodies.
5  Prospects and Future Work
In this paper, we have presented an approach for future sound branding applications in which advertising jingles and sounds are controlled by the interaction. To make sure that the resulting music complies with both the requirements of interactivity and brand recognition, we made use of a novel technique of music generation based on soft constraints. First tests of our prototype on large advertising displays showed that composing music with soft constraint solving works quite well, giving the user the impression of control over the music while producing quite recognizable sound. The next step is to enhance the sensor framework of the prototype with markerless body tracking and, following that, to conduct user studies on different types of interactive advertising music (sound logos, jingles, background songs) regarding their recognition value. A further goal is to investigate whether the functions of music, as described for classical forms of music in advertising, also exist in interactive advertisements.
References
1. Adamowsky, N.: Homo Ludens – Wahle enterprise: Zur Verbindung von Spiel, Technik und den Künsten. In: Poser, S., Zachmann, K. (eds.), Peter Lang Verlag, Frankfurt am Main (2003)
2. Bistarelli, S., Montanari, U., Rossi, F.: Semiring-based constraint satisfaction and optimization. Journal of the ACM, vol. 44(2), pp. 201–236 (1997)
3. Bronner, K., Hirt, R.: Audio Branding – Entwicklung, Anwendung, Wirkung akustischer Identitäten in Werbung, Medien und Gesellschaft. Nomos Verlagsg., Baden-Baden (2009)
4. Essl, K.: Algorithmic composition. In: Collins, N., d'Escrivan, J. (eds.) Cambridge Companion to Electronic Music. Cambridge University Press, Cambridge (2007)
5. Hölzl, M., Denker, G., Meier, M., Wirsing, M.: Constraint-Muse: A Soft-Constraint Based System for Music Therapy. In: Proc. of the Third International Conference on Algebra and Coalgebra in Computer Science (CALCO'09), pp. 423–432. Springer, Udine (2009)
6. Hölzl, M., Meier, M., Wirsing, M.: Which soft constraints do you prefer? In: Proc. of the Workshop on Rewriting Logic and its Applications (WRLA 2008), Budapest (2008)
7. Ip, H., Law, K., Kwong, B.: Cyber Composer: Hand Gesture-Driven Intelligent Music Composition and Generation. In: Proc. of the 11th International Multimedia Modelling Conference (MMM'05), pp. 46–52, Melbourne (2005)
8. Iwai, T., Indies Zero and Nintendo: Electroplankton. Music game for the Nintendo DS (2005)
9. Kellaris, J., Cox, A., Cox, D.: The Effect of Background Music on Ad Processing: A Contingency Explanation. Journal of Marketing, vol. 57(4), pp. 114–125 (1993)
10. Kroeber-Riel, W., Esch, F.-R.: Strategie und Technik der Werbung. Verhaltenswissenschaftliche Ansätze. Diller, H., Köhler, R. (eds.), vol. 6, Kohlhammer (2004)
11. Light Tracer project website (2010), http://lighttracer.darcy.co.nz/
12. Mercury Particle Engine website (2010), http://mpe.codeplex.com/
13. Nierhaus, G.: Algorithmic Composition – Paradigms of Automated Music Generation. Springer, Heidelberg (2008)
14. Scott, L.: Understanding Jingles and Needledrop: A Rhetorical Approach to Music in Advertising. Journal of Consumer Research: An Interdisciplinary Quarterly, vol. 17(2), pp. 223–236 (1990)
15. Steiner, P.: Sound Branding – Grundlagen der akustischen Markenführung. Gabler, Wiesbaden (2009)
16. Touchless SDK website (2010), http://www.codeplex.com/touchless
17. Wirsing, M., Clark, A., Gilmore, S., Hölzl, M., Knapp, A., Koch, N., Schroeder, A.: Semantic-Based Development of Service-Oriented Systems. In: Najm, E., Pradat-Peyre, J.-F., Donzeau-Gouge, V.V. (eds.) FORTE 2006. LNCS, vol. 4229, pp. 24–45. Springer, Heidelberg (2006)
18. Wirsing, M., Denker, G., Talcott, C., Poggio, A., Briesemeister, L.: A Rewriting Logic Framework for Soft Constraints. In: Proc. of the Workshop on Rewriting Logic and its Applications (WRLA 2006), Vienna (2006)