MAT 200B, Winter 2010
Brainwave Music
History, Overview, Artistic and Clinical Aspects
Ivana Anđelković
University of California, Santa Barbara
1. Introduction
The term brainwave describes the electrical activity measured along the scalp, produced by the
firing of neurons in the brain. Sonification of this activity has been attempted since the beginning of the
20th century in both scientific and artistic pursuits. This paper reviews the history of brainwave music,
offers a rough categorization and description of methods for generating such music, and surveys its
application in clinical environments.
2. History of brainwave music
Following initial experiments on animals in the 19th century, human brainwaves were first
recorded and measured by the German psychiatrist Hans Berger in 1924, who named these electrical
measurements the “electroencephalogram” (EEG). Berger’s instrument for detecting neural activity was
a rather crude Edelmann string galvanometer of the kind used to record electrocardiograms. His research
on both patients with skull imperfections or trepanations and patients with intact skulls continued over
the next five years, and the results were first published in 1929 in the article entitled “On the
Electroencephalogram of Man”. However, this outstanding scientific contribution did not gain publicity
until E.D. Adrian and B.H.C. Matthews verified Berger’s results in 1934. Furthermore, Adrian and
Matthews immediately recognized the benefits of sonifying electrical brain signals and were the first
ones to successfully conduct such an experiment [2]: ”It is sometimes an advantage to be able to hear
as well as see the rhythm¹; although its frequency is only 10 a second it can be made audible by using
a horn loud speaker with the diaphragm set very close to the pole pieces and a condenser in parallel
to cut out high frequencies.”
It was not until 1965 that brain waves were used in artistic pursuits, when Alvin Lucier, an
American composer, presented “Music for Solo Performer” at the Rose Art Museum (Waltham,
Massachusetts) with encouragement from John Cage and technical assistance from physicist Edmond
Dewan. The piece was realized by amplifying alpha waves and using resulting signals to control and
play percussion instruments. Lucier, however, did not continue to use EEG in his compositions.
The advent of voltage-controlled devices, and in particular the popularization of the Moog synthesizer in the
1960s provided an easier way to control and modify sound through biophysical interfaces. Richard
Teitelbaum, one of the most notable composers of that period who used EEG controlled synthesizers,
performed “Spacecraft” for the first time in 1967 with his electronic music group Musica Elettronica
Viva (MEV). In this piece, various biological signals were used as sound control sources.
In 1970, David Rosenboom’s brainwave-controlled interactive performance “Ecology of the
Skin” took place at Automation House, New York. Pursuing research into cybernetic biofeedback artistic
systems, Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto,
which served as a workplace for prominent artists such as John Cage, David Behrman, LaMonte Young,
and Marian Zazeela. Whereas Teitelbaum’s and Lucier’s works relied on idiosyncratic EEG controlled
music instruments, Rosenboom extended the idea into the musical syntax by addressing meaningful
feature extraction from EEG data [3]. To date, he has conducted extensive research on the
psychological and neurological foundations of aesthetic experience, computer music software, and brain-computer interfaces.
During the 1970s, Manford Eaton was building electronic circuits to experiment with biological
signals at Orcus Research in Kansas City. In France, the scientist Roger Lafosse and one of the pioneers
of musique concrète, Pierre Henry, built a device called Corticalart and used it in a series of
performances. The device, similar to an EEG, transmitted brain waves to seven generators of electronic
sounds, which Pierre Henry manually manipulated to create musical improvisations.

¹ The term Berger rhythm refers to the alpha brain waves, which fall within a frequency range of 8 Hz
to 13 Hz.
A significant milestone occurred during the 1970s, when UCLA computer scientist Jacques Vidal
coined the expression brain-computer interface (BCI) for his research in biocybernetics and human-computer
interaction. Nevertheless, after the initial burst of creative activity, the field of brainwave art
lay dormant until the 1990s, when sufficiently powerful computers allowed further experiments.
In the 1990s, scientists Benjamin Knapp and Hugh Lusted built BioMuse – a system that receives
and processes signals from main sources of bodily electrical activity: muscles, eye movement, heart
and brainwaves. Although the technology was primarily intended to solve real-world problems, the new
media artist Atau Tanaka was commissioned in 1992 to use it in music composition and performance.
3. Approaches to translating neural activity into music
According to their frequency, brain waves are classified into five main categories:
Type   | Frequency      | Normally occurs in
-------|----------------|----------------------------------------------------
Delta  | up to 4 Hz     | deep sleep, babies
Theta  | 4 Hz – 8 Hz    | young children, drowsiness, hypnosis
Alpha  | 8 Hz – 12 Hz   | relaxed, alert state of consciousness, eyes closed
Beta   | 12 Hz – 30 Hz  | active, busy or anxious thinking
Gamma  | 30 Hz – 80 Hz  | higher cognitive activity, motor functions
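The classification above translates directly into code. The following is a minimal sketch in Python, with band boundaries taken from the table; exact cutoffs vary slightly between sources:

```python
def classify_band(freq_hz: float) -> str:
    """Return the conventional EEG band name for a frequency in Hz."""
    bands = [
        ("delta", 0.0, 4.0),
        ("theta", 4.0, 8.0),
        ("alpha", 8.0, 12.0),
        ("beta", 12.0, 30.0),
        ("gamma", 30.0, 80.0),
    ]
    for name, lo, hi in bands:
        if lo <= freq_hz < hi:  # lower bound inclusive, upper exclusive
            return name
    return "out of range"
```

For example, a dominant frequency of 10 Hz falls in the alpha band, consistent with the Berger rhythm described earlier.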
3.1. Music generating methods
In clinical usage, examining raw EEG data can be sufficient to reveal brain malfunctions.
When these raw waves are made audible by the simplest method, used in the earliest experiments of the
20th century, namely letting the waves vibrate the membrane of a loudspeaker, nothing but filtered
noise can be perceived. Therefore, to be utilized in brainwave music and BCI research, signals
have to undergo sophisticated quantitative analysis. Commonly used techniques include power spectrum
analysis, spectral centroid, Hjorth parameters, event-related potentials (ERP) and correlation, to name but a few.
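Of the techniques listed above, power spectrum analysis is the most common. A minimal sketch using NumPy follows; the sampling rate and the synthetic test signal are illustrative assumptions, not values from any of the cited studies:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Estimate signal power within the [lo, hi] Hz band from a
    simple FFT-based power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # frequency of each FFT bin
    spectrum = np.abs(np.fft.rfft(signal)) ** 2       # power at each bin
    mask = (freqs >= lo) & (freqs <= hi)              # bins inside the band
    return float(spectrum[mask].sum())

# Synthetic test: a pure 10 Hz "alpha" sine sampled at 256 Hz for 2 s.
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t)

alpha_power = band_power(eeg, fs, 8.0, 12.0)
theta_power = band_power(eeg, fs, 4.0, 8.0)
```

For this synthetic signal the alpha band dominates, which is exactly the kind of feature a brainwave-music system would extract before mapping it to sound.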
Once EEG signals are processed, various methods for generating EEG-dependent music can be
applied. These can be classified into two categories:
1 – Parameter mapping approach
This approach entails the use of mathematical and statistical techniques to translate
physiological electrical signals into sounds characterized by pitch, duration, timbre and intensity. To do
so, it is important to understand both the nature and meaning of the signals, i.e. the underlying neural
processes, and the nature of the music that is to be produced. One extremely difficult and as yet
unsolved problem, synthesizing a melody simply by thinking of one, would require an implementation of
parameter mapping. Much research in the field of cognitive science is needed in order to unambiguously
interpret biological signals. Consequently, there is currently no set of formal rules for this approach,
and the decisions made in the process depend on the desired outcome.
A simple example of the parameter mapping approach is Alvin Lucier’s “Music for Solo
Performer”, where strong alpha waves were translated into increased sound intensity and temporal
density. More recently, a group of Chinese scientists proposed sonification rules based on the
scale-free² phenomenon embodied in both neural networks and sound [5]. According to these rules, the
period of the EEG waveform is mapped to the duration of a note, the average power to the music’s
intensity, and the amplitude to the pitch.
² The scale-free phenomenon occurs in networks whose degree distribution follows a power law,
describing a relationship between two quantities, the frequency and the size of an event. The
relationship has a power-law distribution when the frequency of the event decreases at a greater rate
than the size increases.
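The period/power/amplitude mapping from [5] described above can be sketched as follows. The scaling constants and the MIDI-style output ranges are illustrative assumptions, not the values used in the original study:

```python
import numpy as np

def map_wave_to_note(segment: np.ndarray, fs: float) -> dict:
    """Map one cycle of an EEG waveform to note parameters following the
    rules above: period -> duration, average power -> intensity,
    amplitude -> pitch. All scaling constants are illustrative."""
    period = len(segment) / fs                   # length of the cycle in seconds
    power = float(np.mean(segment ** 2))         # average power of the cycle
    amplitude = float(np.max(np.abs(segment)))   # peak amplitude
    return {
        "duration": period,                       # note duration in seconds
        "velocity": min(127, int(power * 1000)),  # MIDI-style intensity, capped at 127
        "pitch": 60 + int(amplitude * 12) % 24,   # folded into two octaves above middle C
    }
```

Fed a stream of successive EEG cycles, a mapper like this produces a note list that a synthesizer can render directly.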
2 – Event triggering approach
In this approach, EEG data is continuously scanned for changes in the characteristic features of
a signal which, when detected, act as event triggers in the music generation process. Among the
characteristics commonly employed are local voltage maxima and minima and the time and voltage
differences between them. This approach allows the detection of singular events such as spikes on the
one hand and repetitive activity on the other, the latter being appealing for translation into musical
form since it can be perceived as a rhythmic musical pattern. In practice, the event triggering method
is generally used to control brain-computer music instruments and is sometimes combined with
parameter mapping to enrich the acoustical content.
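The scanning step described above, detecting local voltage maxima as event triggers, can be sketched as follows; the threshold value is an illustrative assumption:

```python
def detect_triggers(samples, threshold):
    """Scan a sampled signal for local voltage maxima above `threshold`;
    each detected index would act as an event trigger (e.g. play a note)."""
    triggers = []
    for i in range(1, len(samples) - 1):
        # A local maximum: strictly above its left neighbor,
        # at least as high as its right neighbor, and over threshold.
        if samples[i] > threshold and samples[i - 1] < samples[i] >= samples[i + 1]:
            triggers.append(i)
    return triggers
```

On the toy sequence [0, 1, 0, 2, 0, 0.5, 0] with a threshold of 0.8, only the two large peaks fire; the small 0.5 peak is ignored, which is how spurious low-amplitude activity is filtered out.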
An example of the method is a study by Mick Grierson [6] (Goldsmiths College, UK) in which a
series of notes was displayed on a computer screen while the subject focused on a single note at a
time. When that particular note appears on the screen, the subject’s brain reacts and a change in the
EEG signal is detected, which triggers the note to be played.
In the Brain Controlled Piano System by Eduardo Miranda [4], the most prominent frequency
and the complexity of the signal extracted from processed EEG data served as event triggering
information. The former activated generative rules for algorithmic music composition, while the latter
controlled the tempo of the music.
3.2. Biofeedback paradigm
One variation in the methodology used to musically express neural activity is the reliance on
biofeedback. Biofeedback is a process in which bodily functions are measured and the information is
conveyed to the subject, raising their awareness and offering the possibility of conscious control over
those same bodily functions.
A biofeedback signal generated as a response to an event can be positive, if it amplifies the input
signal, or negative, if it dampens it. In clinical applications, the goal of biofeedback is often to calm
the subject’s activity; negative feedback is therefore desirable, while positive feedback can potentially
lead to highly dangerous situations. In contrast, from the musical perspective, unstable activity with an
unpredictable outcome is often preferred over a calming one, since it introduces greater dynamics into
the musical narrative.
Although in musical practice performers often do respond to event-triggered feedback, the
goal-oriented connotations of biofeedback are lost. Instead, such practice simply serves as a basis for
controlling a sound source. However, Rosenboom’s work on the subject implies the potential for wider
and more effective use of biofeedback in music, by viewing it as a dynamic system that allows an
organism to evolve rather than a static system that leads to equilibrium [8].
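The contrast between negative and positive feedback can be illustrated with a toy iteration. The linear gain model below is a deliberate simplification for exposition, not a model of actual neural dynamics:

```python
def feedback_step(activity, gain):
    """One step of a feedback loop: the measured activity is fed back
    scaled by `gain`. A negative gain dampens activity toward equilibrium;
    a positive gain amplifies it."""
    return activity + gain * activity

# Starting from the same activity level, negative feedback decays
# toward zero while positive feedback grows without bound.
a_neg = a_pos = 1.0
for _ in range(10):
    a_neg = feedback_step(a_neg, -0.5)  # dampened each step
    a_pos = feedback_step(a_pos, +0.5)  # amplified each step
```

After ten steps the dampened activity is near zero (the clinical goal) while the amplified activity has grown more than fifty-fold, the kind of runaway dynamics a musical system might court but a therapeutic one must avoid.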
4. Biofeedback music therapy
The power of music to heal has been recognized for centuries across various civilizations, and music
therapy is known to improve the psychological and physiological health of the individual. However,
scientific research on the neurological basis of music therapy is a nascent field with great growth
potential, considering the advancements in the development of instruments such as fMRI, used for the
evaluation of brain activity. Research to date shows that five factors contribute to the effect of music
therapy: the modulation of attention, emotion, cognition, behavior and communication [7].
Furthermore, biofeedback therapy, in which the patient learns to control his or her brain activity in
response to real-time feedback, reportedly achieves positive results in the treatment of anxiety,
attention deficit disorder, epilepsy, autism and more. Commonly, feedback information is conveyed to
the patient as a combination of visual and auditory displays. Influencing brain rhythms via solely
auditory feedback has been explored in only a few cases, but a study by Hinterberger and Baier [9]
suggests it is possible.
A relatively new approach that combines traditional music therapy with auditory biofeedback is
Brain Music Treatment (BMT) [11], developed in the 1990s at the Moscow Medical Academy. A group of
neurophysiologists, clinicians and mathematicians led by Dr. Ya. I. Levin developed an algorithm for
translating brain waves into music, which experimentally provided optimal therapeutic results. The
specific underlying mechanisms of such therapy are yet to be discovered, but positive initial results
have been reported in patients with insomnia, whose sleep patterns were improved by reducing anxiety.
The method involves converting aspects of a person’s EEG activity into music files recorded on a CD,
which the patient then plays regularly over a treatment lasting several months.
5. Conclusion
It is common to observe natural phenomena scientifically via visual perception. However, the
auditory system is more sensitive to temporal changes, and multidimensional datasets can be perceived
simultaneously when presented both visually and audibly. As an illustration, a notable contribution to
the art of scientific listening was the invention of the stethoscope in 1816, an instrument still
extensively used as a diagnostic tool in medicine.
Moreover, a large number of clinical studies have shown striking evidence that auditory rhythm
and music can be effectively harnessed for specific therapeutic purposes. Considering the
demonstrated effectiveness of both traditional music therapy and the relatively new biofeedback
therapy, a combination of the two approaches could also yield positive results. Yet there is still very
little evidence regarding the effectiveness of brain music therapy, and much of the research needed in
the field of music cognition depends on the development of sophisticated instruments for examining
neural activity.
References
1. Andrew Brouse (2004), A Young Person's Guide to Brainwave Music: Forty Years of Audio from the
Human EEG
2. E. D. Adrian, B. H. C. Matthews (1934), The Berger Rhythm: Potential Changes from the Occipital
Lobes in Man
3. Simon Emmerson (2007), Living Electronic Music
4. Eduardo Miranda, Andrew Brouse (2005), Toward Direct Brain-Computer Musical Interfaces
5. Dan Wu, Chao-Yi Li, De-Zhong Yao (2009), Scale-Free Music of the Brain
6. Mick Grierson (2008), Composing With Brainwaves: Minimal Trial P300 Recognition as an Indication
of Subjective Preference for the Control of a Musical Instrument
7. Stefan Koelsch (2009), A Neuroscientific Perspective on Music Therapy
8. David Rosenboom (1997), Extended Musical Interface with the Human Nervous System
9. Thilo Hinterberger, Gerold Baier (2005), Parametric Orchestral Sonification of EEG in Real Time
10. Gerold Baier, Thomas Hermann, Sonification: Listen to Brain Activity
11. Galina Mindlin, James R. Evans (2008), "Brain Music Treatment", Chapter 9 of Introduction to
Quantitative EEG and Neurofeedback