NeuroImage 20 (2003) 1944 –1954
www.elsevier.com/locate/ynimg
Functional segregation of the temporal lobes into highly differentiated
subsystems for auditory perception: an auditory rapid
event-related fMRI-task
Karsten Specht* and Jürgen Reul
fMRI Section, Department of Neuroradiology, Medical Center Bonn, 53119 Bonn, Germany
Received 30 January 2003; revised 25 July 2003; accepted 28 July 2003
Abstract
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different
classes. To address this issue, we performed an attentive listening event-related fMRI study, where subjects were required to concentrate
during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the
observed differences between the stimulus types were interpreted as an automatic effect, unaffected by attention. We used three
types of stimuli: tones, sounds of animals and instruments, and words. We found in all cases bilateral activations of the primary and
secondary auditory cortex. The strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest
activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within
the left temporal sulcus, we were able to distinguish between different subsystems, with activation extending from posterior to anterior for speech and speechlike information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the
temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization for verbal stimuli to the left and
sounds to the right was already detectable when short stimuli were used.
© 2003 Elsevier Inc. All rights reserved.
Keywords: Auditory perception; Laterality; Brain mapping; Speech processing; Superior temporal sulcus

* Corresponding author. Institute of Medicine, Research Center Jülich, 52425 Jülich, Germany. Fax: +49-0-1212-5-369-31-621.
E-mail address: [email protected] (K. Specht).
1053-8119/$ – see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/j.neuroimage.2003.07.034

Introduction

In recent years, auditory perception has been investigated by numerous authors using verbal as well as nonverbal presentations (Belin et al., 1998, 2000, 2002; Binder et al., 1995, 1997, 2000; Celsis et al., 1999; Engelien et al., 1995; Hall et al., 2000; Hugdahl et al., 1999, 2000; Hugdahl, 2000; Jancke et al., 1999, 2002; Mazoyer et al., 1993; Scott et al., 2000; Tzourio et al., 1997; Wise et al., 2001; Zatorre et al., 1992, 2002; Zatorre and Belin, 2001). All studies consistently demonstrated the flow of cognitive signal processing from the primary auditory cortex into different parts of the temporal lobes, on either the left or the right hemisphere,
depending on the specified task and spectral characteristics
of the trials used. In most of these studies the subjects were
asked to perform tasks, such as detecting target cues or
modulating their attention. The number of activated cortical
areas and their significances and extensions were correlated
with the design of the tasks. In addition to this, various
studies showed that the level of significance depended also
on the attentional effort (Hall et al., 2000; Hugdahl et al.,
2000; Jancke et al., 1999; Tzourio et al., 1997). The level of
lateralization was also the focus of some imaging studies.
Engelien et al. (1995) and Tzourio et al. (1997) reported a
lateralization to the right when the subjects had to attend to
nonverbal stimuli. In agreement with that, Binder et al.
(1995) showed in a study, using tone, phonetic, and semantic decisions, that the language areas were strongly lateralized on the left, whereas the tone-decision task activated the
right auditory cortex to a higher extent. In agreement with
other language studies (Frost et al., 1999; Kent, 1998; Price
et al., 1999; Specht et al., 2003; Wise et al., 2001), Binder
claimed four left-sided, distinct cortical language areas: the
temporal lobe, comprising the superior temporal sulcus
(STS) and middle and inferior temporal gyrus; the prefrontal region, including Broca’s area in the inferior frontal
gyrus; the angular gyrus; and, last, the posterior cingulate
gyrus and precuneus. The clear lateralization of language
processing is also supported by analyses of brain morphologies (Binder et al., 1996; Hugdahl et al., 1998; Jancke et
al., 1994; Jancke and Steinmetz, 1993), focusing specifically on the asymmetry of the planum temporale, which is
generally larger in the left temporal lobe, especially in
right-handed subjects. This cortical area is part of the secondary auditory cortex and is located at the posterior end of
the superior temporal gyrus. It is assumed that this area
plays a major role in analyzing speech sounds.
Only a few functional studies have investigated lateralization processes using passive listening, i.e., unattended auditory perception while performing a task unrelated to the sound stimulation, or attentive listening, i.e., attentive perception of the sound stimulation without a task. Tervaniemi and colleagues (Tervaniemi et al., 1999, 2000) investigated the automated lateralization of sound processing using phonemes
and chords while the subjects performed a visual task. They
found an automated cortical response to the chords within
the right superior temporal gyrus (STG), whereas the phonemes activated the left superior and middle temporal gyrus
(MTG) to a higher extent. These results are in line with a
study by Hugdahl and coworkers (Hugdahl et al., 1999),
where CV-syllables and sounds of musical instruments were
used. Zatorre as well as Belin (Belin et al., 1998; Zatorre and Belin, 2001) investigated spectral and temporal processing in the auditory cortex in attentive studies. They observed a higher temporal resolution, relevant for speech perception, in the left anterior STG and a higher spectral resolution in the right anterior STG and STS.
Recent functional studies (Belin et al., 2000, 2002;
Jancke et al., 2002; Mummery et al., 1999; Scott et al.,
2000; Wise et al., 2001), investigating the auditory perception of speech in more detail, claim that the middle part of the left superior temporal sulcus (STS) is responsible
for the perception of speech or speechlike sounds. Wise (Wise et al., 2001) as well as Mummery (Mummery et al., 1999) describe, in particular, left-lateralized increased activity in the posterior STS in the context of word perception.
Scott (Scott et al., 2000) claimed that the anterior STS is
responsible for intelligible speech, whereas the mid-STS also responds when phonetic information, such as a pseudoword, is presented. Based on this, they proposed a left anterior
temporal pathway for speech comprehension (Scott et al.,
2000). Studies of the auditory system in primates also show
similar findings, claiming a ventral stream in the lateral belt
of STG for identifying species-specific vocalizations (Rauschecker, 1998; Rauschecker and Tian, 2000). In conclusion,
the left superior temporal sulcus seems to play a major role
in the analysis of speech and speechlike auditory signals. It
may be assumed that the STS contains neurons optimized
for speech perception, with respect to temporal auditory
characteristics and the presence of phonetic cues in posterior and mid-STS and the linguistic content in anterior STS.
Because most of the auditory-based functional imaging
studies in humans had a blocked structure of the paradigm,
the evoked responses to short auditory stimuli, collected with reasonably good temporal resolution, are still an open issue. Furthermore, in a typical block design with several ON/OFF conditions, the OFF conditions are often silent
resting conditions. These awake resting states can probably influence the differences between the tasks by having their own, consistent pattern of activation. Therefore, a well-defined baseline is still a matter of debate (see Gusnard et al., 2001, for an overview of this topic and further references). Binder and coworkers (Binder et al., 1999) demonstrated, for example, the existence of ongoing conceptual
processing during conscious resting states, which could only be interrupted by explicit task performance, something that is difficult to achieve when studying attentive listening. In addition to
this, it is obvious that in blocked studies with passive or
attentive listening tasks, the level of activation could potentially be confounded by different levels of attention, because
the subjects are free to attend to one type of stimuli more
than to others. Studies using an attentional modulation paradigm demonstrate the wide range over which subjects can modulate their activations in the auditory system just
by changing their attentional effort (Hall et al., 2000; Hugdahl et al., 2000; Jancke et al., 1999; Pugh et al., 1996;
Tzourio et al., 1997).
To circumvent those problems and to achieve good temporal resolution, we explored the blood oxygen level-dependent (BOLD) response, evoked by the perception of tones,
sounds, or words, by using a rapid event-related design,
where the subjects could not determine to which class the
next stimulus would belong. With such a design, which is substantially different from those used in other auditory functional imaging studies (Binder et al., 2000; Celsis et al., 1999; Jancke et al., 2002; Mummery et al., 1999; Scott et al., 2000; Wise et al., 2001), the activations are less confounded by attentional interactions, because the trials are presented randomly and rapidly, and attention can be held constant across stimulus presentations.
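The randomization logic behind such a rapid event-related design can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual stimulus software; it borrows the trial counts (43 per condition plus 42 null events) and timing (TR 2.5 s, minimum spacing of 1.5 scans) reported later under Material and methods, and the function name is hypothetical.

```python
import random

def make_sequence(n_per_condition=43, n_null=42, seed=1):
    """Pseudorandomize tone/sound/word trials with interspersed null
    events so the subject cannot predict the class of the next stimulus."""
    trials = ["tone", "sound", "word"] * n_per_condition + ["null"] * n_null
    rng = random.Random(seed)
    rng.shuffle(trials)
    return trials

seq = make_sequence()
print(len(seq), seq.count("null"))  # → 171 42

# Minimum stimulus onset asynchrony: 1.5 scans at TR = 2.5 s
print(1.5 * 2.5)  # → 3.75
```

Because every permutation is equally likely, the class of the upcoming trial carries no predictive information, which is the property the design relies on to keep attention constant across stimulus types.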
Based on the different aspects of auditory perception and
language processing, we designed a study where pure tones,
sounds from animals and musical instruments (Hugdahl et
al., 1999), and words were used. We expected that the pure
tones would activate mainly the superior temporal gyrus in
both hemispheres. The sounds, having a more complex
spectral sound characteristic, would activate the primary
and secondary auditory cortex to a higher extent, and a
lateralization to the right hemisphere was also expected
(Engelien et al., 1995; Hugdahl et al., 1999). Finally, the
words would additionally activate the language-related areas, i.e., parts of the left temporal lobe and the left inferior frontal gyrus (Broca's, Wernicke's, and adjacent areas).
Material and methods

The 12 healthy subjects (10 males and 2 females, mean age = 32) were right-handed, as determined by consistent right-hand preferences for all items of a standard handedness inventory (Annett, 1970). The study was conducted in accordance with the Declaration of Helsinki and the subjects gave informed consent according to the institutional guidelines. The subjects were instructed to listen attentively to the aurally presented stimuli. To investigate the lateralization effects, we used three types of auditory stimuli: first, pure tones with a frequency range of 400–1600 Hz (in the following referred to as the tone condition); second, sounds of animals and instruments (e.g., cow and piano; in the following referred to as the sound condition); and, third, words with one or two syllables (abstract and concrete, mixed occurrence frequency, no semantic relations; in the following referred to as the word or speech condition). The words were pronounced by a professional male speaker. The stimuli were adjusted according to loudness and presentation length (mean duration = 2 s). The order of stimuli was pseudorandomized and arranged as a single-session, event-related paradigm containing 43 events of each type. The functional imaging session lasted approximately 11 min. According to the rules of stochastic designs (Friston et al., 1999), we also included 42 additional null events. On average, the distance between two stimuli of the same type was about 15 s; the shortest interval between two stimuli of any type was 3.75 s (1.5 scans). The stimuli were delivered using MR-compatible headphones.

Data acquisition

Functional MR images were acquired using a 1.5-T Siemens MRI system (Siemens Symphony, Erlangen). The subjects' heads were restrained with additional padding between the headphone and the head coil. The slices for the functional imaging were positioned with reference to a high-resolution anatomical image of the entire brain, obtained by using a strongly T1-weighted gradient echo pulse sequence (MPR; magnetization-prepared, rapid-acquisition gradient echo). The parameters for the anatomical sequence were as follows: TR 11.08 ms, TE 4.3 ms, 15° flip angle, one excitation per phase encoding step, FOV 230 mm, 200 × 256 matrix, 128 sagittal slices with 1.48 mm single slice thickness. For functional imaging, 256 images were acquired, each containing 24 axial slices oriented in the anterior–posterior commissure (AC–PC) plane, covering most of the brain and always including the whole temporal and frontal lobes. The parameters of the functional sequence were as follows: gradient echo EPI, TR 2.5 s, TE 50 ms, 90° flip angle, FOV 220 × 220 mm², 64 × 64 matrix. This resulted in a voxel size of 3.44 × 3.44 × 4.4 mm³ with an ascending slice order, including a 0.4-mm gap between slices.

Preprocessing and statistical analysis

Image analysis was performed on an Intel Pentium 3 PC (Intel Corporation) running under Windows 2000 (Microsoft Corporation), using SPM99 (Friston et al., 1995, 1996, 2000; http://www.fil.ion.ucl.ac.uk/spm) based on MATLAB v5.3 (Mathworks Inc.). To reach maximum signal equilibrium, the first three images of each session were excluded from the subsequent analysis. To prevent artifacts due to the acquisition time of a single EPI volume, we performed a slice-timing procedure prior to the movement correction. Here, the temporal delays between the acquisitions of different slices of the same volume were corrected by using the 12th slice of the EPI volumes as reference. To correct for head movements during the imaging session, the functional images were realigned to the first image of a session, and an averaged image across the session was calculated afterwards. The anatomical 3D scan was coregistered with the averaged EPI image. Both the functional and anatomical scans were normalized into a stereotactic reference space, defined by a template from the Montreal Neurological Institute (MNI), using linear and nonlinear transformations. The anatomical images were resampled to a cubic voxel size of 1.5 mm and the functional scans to a cubic voxel size of 4 mm. The normalized functional scans were spatially smoothed with a Gaussian kernel of 8 mm FWHM to accommodate intersubject variation in brain anatomy and to increase the signal-to-noise ratio in the images.

An SPM99 group analysis was performed to detect areas of significant changes in brain activity in and between the three experimental task conditions, as specified by a fixed-effects group model. A set of windowed Fourier basis functions was used, containing eight sine and cosine functions with different frequencies and a window length of 20 s. This set of basis functions makes it possible to detect BOLD responses without specific assumptions about the shape of the expected BOLD signal. To test for significant effects, we calculated F statistics on a voxel-by-voxel basis. All reported areas exceeded a significance threshold of P < 0.05, corrected for multiple comparisons, and comprised at least five significant voxels in the main contrasts and three voxels in the difference contrasts.

We further investigated the fitted time courses in a volume of interest (VOI) analysis. We extracted the time course from anatomically shaped regions in the temporal lobe (Fig. 1), covering the transverse temporal gyrus (TTG), planum temporale (PT), superior temporal gyrus (STG), superior temporal sulcus (STS), and middle temporal gyrus (MTG). The regions STG and STS were additionally subdivided into anterior, middle, and posterior parts. The definitions of the VOIs were based on an averaged anatomical image containing all normalized anatomical scans of the investigated subjects. From each region, and for each trial and each subject separately, we extracted the detected signal changes and performed statistical comparisons (paired t tests) between the hemispheres and the conditions. Only voxels with a positive signal change were considered in this VOI analysis.

Fig. 1. Projection of the left-hemispheric volumes of interest (VOI) onto a lateral surface of a single subject. Displayed are the superior temporal gyrus (STG), superior temporal sulcus (STS), middle temporal gyrus (MTG), transverse temporal gyrus (TTG), and planum temporale (PT). Most parts of the VOIs for STS, TTG, and PT lie below the lateral surface.

Fig. 2. Significant activation foci for the three main F contrasts obtained in the fixed-effects group analysis, thresholded at P corrected = 0.05.

Fig. 3. Significant activation foci for the sound/word comparison using an F contrast, projected onto the lateral surface, together with the fitted BOLD responses at different locations. The analysis is thresholded at P corrected = 0.05. The red curves represent the BOLD response during attentive perception of pure tones, the blue curves the response during sound perception, and the green curves the response during word perception.

Fig. 4. Significant activation foci for the sound/tone comparison using an F statistic. The analysis is thresholded at P corrected = 0.05 and projected onto the lateral surface.
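The windowed Fourier basis set used in the statistical analysis can be sketched as follows. This is a minimal illustration assuming eight sine/cosine harmonic pairs over the 20-s window, not necessarily SPM99's exact implementation; the function name and sampling step are choices made for the example.

```python
import numpy as np

def fourier_basis(window_s=20.0, n_harmonics=8, dt=0.1):
    """Build a windowed Fourier basis: sine and cosine regressors at
    harmonics of the analysis window, so a BOLD response can be fitted
    without assuming a canonical response shape."""
    t = np.arange(0.0, window_s, dt)
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / window_s))
        cols.append(np.cos(2 * np.pi * k * t / window_s))
    return t, np.column_stack(cols)

t, X = fourier_basis()
print(X.shape)  # → (200, 16)
```

A response sampled over the window can then be fitted by least squares (e.g., `np.linalg.lstsq(X, y, rcond=None)`), and an F test on the joint contribution of all basis regressors detects a response of any shape, which is the point of using this basis rather than a fixed hemodynamic response function.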
Results
We found significant BOLD responses within the primary and secondary auditory cortices (BA 41/42) of both hemispheres to all types of stimuli. The responses to tones as well as to the sound stimuli were stronger within the right auditory cortex, with an increasing cluster size and significance level from tones to sounds. In addition to the activations found during tone perception, the sound trials showed significant activation areas in the superior temporal sulcus, the supplementary motor area (SMA, BA 6), bilaterally in the precentral gyrus (BA 6), and the right lingual gyrus (BA 18).
The intensity of the detected BOLD response during the
attentive perception of words was comparable in both hemispheres, but this condition involved more regions in the left
hemisphere. In this condition, the more extended activations
were seen in the posterior part of the superior temporal
gyrus, the superior temporal sulcus, the middle temporal
gyrus, and the inferior frontal gyrus (Fig. 2). Again, significant activations were present in the premotor system, comprising the SMA and cingulate gyrus of the left hemisphere as well as the precentral gyrus (BA 6) bilaterally (Table 1). Contrasting the sound and word perception by
using an F statistic, we found six areas of significant (P = 0.05, corrected) differences. Whereas the primary and secondary auditory cortex of the right hemisphere showed slightly less activity during speech perception, two areas at the junction of the left superior temporal sulcus and middle temporal gyrus, as well as the inferior frontal gyrus (BA 47), were more active during the speech trials than during the sound trials (Fig. 3). Last, we found a significant difference within the brainstem, ventral to the right inferior colliculus (Table 2).
The results of this most interesting contrast were further refined by exclusively masking the sound contrast with the word contrast (P = 0.05) and vice versa. The sound perception, exclusively masked with the word perception, showed at a corrected P level of 0.05 a significant cluster in the right primary and secondary auditory cortex, which was not activated during the word perception. On the other hand, the word trials significantly activated the posterior and middle parts of the left superior temporal sulcus (STS) and the inferior frontal gyrus (IFG), adjacent to Broca's area, which were not significant during sound perception.
Similar to the comparison between the sound and the word trials, the main differences between tone and sound perception were also found in the superior temporal sulcus of both hemispheres. In contrast to the sound/word comparison, however, only the posterior part of the STS in the left hemisphere showed a significant increase in activity during sound perception relative to tones. On the other hand, significant differences between the two nonverbal trial types were detected in nearly the whole superior temporal sulcus of the right hemisphere, which responded more strongly to sounds than to tones. In addition to these differences in the temporal lobe, we also found increased activity within the SMA during the sound trials (Fig. 4).
Finally, the comparison between the word and the tone
trials uncovered significant differences bilaterally in the
whole superior temporal sulcus and right middle temporal
gyrus as well as in the left inferior, middle, and medial
frontal gyrus.
These results were confirmed by volume of interest (VOI) analyses, comprising regional analyses of the transverse temporal gyrus (TTG), planum temporale (PT), superior temporal gyrus (STG), superior temporal sulcus (STS), and middle temporal gyrus (MTG). First, we compared, for each region and trial type, the detected signals in the left hemisphere with those from the corresponding area of the right side. Significant asymmetries toward the left were found for the word trials in the transverse temporal gyrus (P < 0.004) and planum temporale (P < 0.002). A lateralization to the right was detected for the sound condition in the middle temporal gyrus (P < 0.043). We further calculated, for each VOI, the comparisons between the three conditions. Except for the test for higher signals during the tone rather than the sound condition, we found significant differences in all comparisons (see Table 3). The sound condition led to significantly higher signal changes bilaterally in the primary and secondary auditory cortex (TTG and PT) in contrast to words. The tone trials also showed higher signal changes in the right primary auditory cortex than the word trials. The test for stronger BOLD signals during the sound rather than the tone condition showed significant differences within the left primary auditory cortex (TTG, PT, and STG) and the right STG, STS, and MTG. The word condition led to higher signals bilaterally within the STG, STS, and MTG compared to tones, and in the STS and bilateral MTG in comparison with sounds. The subdivision of the VOIs of the STS and STG into anterior, middle, and posterior portions underlines the differentiated pattern of activations (see Table 4).
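The hemispheric comparisons reported above (paired t tests on VOI signal changes across the 12 subjects) can be sketched as follows. The signal values below are invented placeholders for illustration, not data from the study.

```python
import numpy as np

def paired_t(left, right):
    """Paired t statistic across subjects for one VOI; positive values
    indicate larger signal change in the left-hemisphere region."""
    d = np.asarray(left, dtype=float) - np.asarray(right, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1  # t value and degrees of freedom

# Illustrative percent signal changes for 12 subjects in one VOI
left  = [0.8, 0.9, 1.1, 0.7, 1.0, 0.9, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0]
right = [0.6, 0.8, 0.9, 0.7, 0.8, 0.7, 1.0, 0.7, 0.9, 0.9, 0.8, 0.8]
t, df = paired_t(left, right)
print(df, t > 0)  # → 11 True
```

Pairing by subject removes between-subject variability in overall signal amplitude, so the test is sensitive only to the within-subject left/right difference, which is what the lateralization question asks.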
Comparing the signal changes detected for the word trials to those of the other trials, the anterior and posterior portions of the left STG and left STS showed significantly higher activation. The middle part of the STG also showed significantly increased activity during the sound trials compared to the pure tone trials. In the right hemisphere, only the comparisons between the sounds and the tones showed significant differences in the whole STS.

Table 1
Significant activation foci for the three main F contrasts (tones, sounds, words) obtained in the fixed-effects group analysis
[Columns: cluster size, cluster-level Pcorrected, F value, coordinates (x, y, z), hemisphere, anatomical structure, and Brodmann area. Activated structures include the transverse temporal gyrus (TTG)/Heschl's gyrus, superior temporal gyrus (STG) and planum temporale, superior temporal sulcus (STS), middle temporal gyrus (MTG), medial frontal gyrus/SMA, precentral and middle frontal gyri, cingulate gyrus, lingual and occipital gyri, fusiform gyrus, precuneus, and brainstem.]
Note. Pcorrected = 0.05; only clusters with at least five voxels are reported. The table shows at most three local maxima per cluster, with cluster size, F value, coordinates, and anatomical location, including Brodmann areas. The primary maximum of each cluster is set in bold.
Discussion
This event-related fMRI study highlights different, separable processes during auditory perception of short verbal
and nonverbal auditory signals and allows a functional segregation of temporal lobe structures. The main aim of the study was to explore whether there are already differences
in the hemodynamic responses in the auditory system and
the adjacent areas in the temporal lobe under the condition
of short stimuli without forced attention. It is well known that the temporal lobe is predominantly involved in the
analysis of auditory signals. A functional segregation has
already been discussed in the literature, but those studies
were mainly based on PET or blocked functional MRI
studies. Therefore, we were interested in exploring whether
this functional differentiation is already present in the case
of short single trials. In order to address this question, we
presented three different types of auditory stimuli, pure
tones, sounds of animals and instruments, and spoken German words. The crucial point in studying auditory perception is the design of the study, which should limit the
possible interaction between stimulus material and level of
attention to a minimum, because the subjects are free to
attend to one type of stimuli more than to others, which
could bias the results. Therefore, the stimuli were rapidly
presented according to a stochastic, event-related paradigm,
where the order of stimuli was randomized and not predictable for the subjects. Attentional interactions were thus suppressed, and auditory perception could be examined.
Table 2
Significant differences between the three types of stimuli (sounds vs tones, words vs sounds, words vs tones), investigated with F statistics within the fixed-effects group analysis
[Columns: cluster size, cluster-level Pcorrected, F value, coordinates (x, y, z), hemisphere, anatomical structure, and Brodmann area. The differences were located mainly in the superior temporal sulcus (STS), superior temporal gyrus (STG), and middle temporal gyrus (MTG), with additional foci in the inferior, middle, and medial frontal gyri (including the SMA), superior frontal gyrus, brainstem, and midbrain.]
Note. Pcorrected = 0.05; only clusters with at least three voxels are reported. The table shows at most three local maxima per cluster, with cluster size, F value, coordinates, and anatomical location, including Brodmann areas. The primary maximum of each cluster is set in bold.
However, all three types of auditory signals produced the expected activations in the auditory system, and the level of activation varied with the complexity of the stimuli. Whereas pure tones led to focal bilateral activations within the primary and secondary auditory cortex, sound perception produced a pattern of activation that, owing to the sounds' complex spectral characteristics, also involved posterior parts of the right superior temporal gyrus (STG, BA 22) and the superior temporal sulcus. Finally, the perception of intelligible words led to an additional increase of activity within the left hemisphere. The involvement of Broca's area and adjacent areas (BA 44, 45, 47), as well as the whole superior and transverse temporal gyri (BA 22, 41, 42) and the middle temporal gyrus (BA 21), indicates that the perception of familiar words already engages language-related processes, such as access to semantic and lexical knowledge (BA 22, 47) and speech production (BA 44/45) (Binder et al., 1997; Mazoyer et al., 1993; Price et al., 1996).
To explore functional segregation, we examined the different levels of activation evoked by the different types of auditory stimuli. First of all, we found the expected lateralization between the stimulus classes, in good agreement with previous studies using tones and environmental sounds (Binder et al., 1995; Celsis et al., 1999; Engelien et al., 1995; Tzourio et al., 1997). Comparing sound perception with word perception revealed stronger activation for sounds in the right primary auditory cortex as well as the right planum temporale. In contrast, words led to higher activations within two distinct regions of the left superior temporal sulcus, the middle temporal gyrus (BA 21), and the inferior frontal gyrus (BA 44). This finding was confirmed by our VOI analyses, in which all parts of the left STS showed significantly higher activations during the word trials than during the sound or tone trials (Table 4).
A comparison of the fitted responses within the middle and posterior portions of the STS (Fig. 3) confirms the hypotheses of Scott et al. (2000) and Belin et al. (2000) of distinct subsystems within the temporal lobe. This functional segregation is detectable even with rapid stimulation using short trials. Compared with the tone trials, the posterior portion of the left STS responded significantly to both the sounds and the words, but significantly more strongly to the words. In contrast, the middle part of the left superior temporal sulcus showed significant differences only between the speech and sound tasks, not between the sound and tone tasks. The anterior part of the STS was not significantly activated in the voxelwise statistics but showed a significantly increased BOLD response during the word trials compared with the sound trials in our VOI analysis.
These results uncovered a functional segregation in the
K. Specht, J. Reul / NeuroImage 20 (2003) 1944–1954
Table 3
Significant differences of BOLD signals in a volume of interest analysis
of different temporal lobe structures
Columns: TTG, PT, STG, STS, MTG.
Rows, per hemisphere: Sounds > tones; Sounds > words; Words > tones; Words > sounds; Tones > sounds; Tones > words.
[Cell-to-column alignment not recoverable from the extracted text. Significant P values, left hemisphere: 0.029, 0.044, 0.023, 0.025, 0.025, <0.001, 0.001, 0.047, <0.001, 0.002. Right hemisphere: 0.015, 0.012, 0.016, 0.021, 0.001, <0.001, <0.001, 0.037, 0.008, 0.015.]
Note. P = 0.05 in a paired t test between conditions. Abbreviations: TTG, transverse temporal gyrus; PT, planum temporale; STG, superior temporal gyrus; STS, superior temporal sulcus; MTG, middle temporal gyrus.
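The VOI comparisons in Tables 3 and 4 are paired t tests across subjects. A minimal sketch of the computation on synthetic data (the subject count, VOI values, and effect size below are invented for illustration, not taken from the study):

```python
import math
import random

random.seed(0)
n = 15  # hypothetical number of subjects

# Synthetic per-subject mean BOLD signal change (%) in a left-STS VOI,
# with word trials simulated to respond more strongly than sound trials.
words = [random.gauss(0.8, 0.2) for _ in range(n)]
sounds = [w - random.gauss(0.3, 0.05) for w in words]

# Paired t test = one-sample t test on the within-subject differences.
diffs = [w - s for w, s in zip(words, sounds)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_val = mean_d / (sd_d / math.sqrt(n))

T_CRIT = 1.761  # critical t(df = 14) for one-sided alpha = 0.05
print(f"t = {t_val:.2f}; words > sounds significant: {t_val > T_CRIT}")
```

With real data, `scipy.stats.ttest_rel` performs the same test and returns an exact P value; pairing by subject is what removes between-subject variability from the comparison.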
temporal lobe: the posterior STS responded both to speech and to complex auditory features, whereas the middle and anterior STS responded only to speech. Recent studies (Belin et al., 2000, 2002; Binder et al., 2000; Jancke et al., 2002; Scott et al., 2000; Wise et al., 2001; Zatorre and Belin, 2001) support this view of activation extending from posterior to anterior (Scott et al., 2000) and from dorsal to ventral (Binder et al., 1995, 1997, 2000) for auditory processing in the left temporal lobe. We therefore suggest that the left posterior STS is predominantly sensitive to complex auditory-temporal features present in speech as well as nonspeech stimuli, whereas the mid-STS responds mainly to the acoustic shape of human voices, i.e., phonetic cues, as carried by real words, pseudowords, reversed speech, or phonemes. Finally, the anterior STS is responsible for the processing of intelligible speech. Furthermore, data from the auditory cortical systems of primates provide evidence for a ventral "what" stream in the temporal lobe (Rauschecker, 1998; Rauschecker and Tian, 2000).
Moreover, the pattern of significant differences within the left STS between the speech and sound trials was very similar to that found in the comparison between sounds and tones in the right hemisphere (see Figs. 3 and 4). Here, the perception of environmental, nameable sounds seems to produce a similar posterior-to-anterior extension of activations in the right temporal lobe. One may assume that the right STG and STS play the same crucial role in the analysis of nonspeech sounds as the left STS does in speech perception (Binder et al., 1995, 1997, 2000; Engelien et al., 1995; Tervaniemi et al., 1999, 2000; Tzourio et al., 1997; Zatorre and Belin, 2001). An additional finding was that even the perception of tones led to a higher BOLD signal in the right primary auditory cortex than the perception of words (see Table 3 and Fig. 3).
This rightward tendency for nonverbal stimuli at the level of the primary and secondary auditory cortex is the counterpart of the significant leftward asymmetry in the planum temporale for the word trials. In accordance with the literature (see Binder et al., 2000; Fiez et al., 1996; Hugdahl, 2000; Jancke et al., 2002; Specht et al., 2003; Tzourio et al., 1998, for further references), this demonstrates that the left PT is more sensitive than its right-hemisphere counterpart to the presence of phonetic cues. However, the data also show that the BOLD signal detected during the perception of the sounds was still larger. This underlines that the PT is not specialized for phonetic analysis per se but is involved in different types of acoustic analyses, especially in the presence of rapidly changing cues (Jancke et al., 2002). Furthermore, recent studies demonstrate that the left PT is involved in attentive and attentionally modulated listening, irrespective of the stimuli used (Binder et al., 1996; Jancke et al., 2003).
Combining the results from the PT with those from the STS, one can hypothesize a stream of increasing sensitivity to the presence of speech, running from the PT over the posterior and mid-STS to the anterior STS. The significant leftward asymmetry of the PT confirms the high, though not exclusive, sensitivity of this area to phonetic cues. In contrast, the left STS, especially its middle and anterior parts, responded more strongly to words than to sounds. At this point, one has to distinguish between sensitivity and selectivity, because the PT and most parts of the STG and STS responded bilaterally in all trials. The crucial point is that the left STS showed a higher sensitivity to words, whereas the right STS was more sensitive to nonverbal cues.
Table 4
Significant differences of BOLD signals in a volume of interest analysis
of the superior temporal gyrus (STG) and sulcus (STS), subdivided into
anterior, middle, and posterior parts
Columns: STG and STS, each subdivided into anterior, middle, and posterior.
Rows, per hemisphere: Sounds > tones; Sounds > words; Words > tones; Words > sounds; Tones > sounds; Tones > words.
[Cell alignment only partially recoverable from the extracted text. Left hemisphere, significant P values: 0.019, 0.035, 0.005, 0.013, 0.002, 0.002, 0.046, 0.008, <0.001, 0.003, <0.001, 0.018. Right hemisphere: Sounds > tones: 0.004, 0.010, 0.037; Words > tones: <0.001, <0.001, 0.049; further values 0.004, 0.009, 0.037, <0.001, <0.001, 0.001 not assignable to rows.]
Note. P = 0.05 in a paired t test between conditions.
Finally, there were other important differences between the tasks. In addition to the temporal lobes, the comparison of the word trials as well as the sound trials with the tone task showed significant differences within the premotor system, comprising the SMA (BA 6). These activations are known from studies using expressive as well as receptive tasks (Binder et al., 1997). In this context, it is remarkable that this system was activated even though no response to these short stimuli was required. Beyond other studies using similar listening tasks with speech and nonspeech trials (Binder et al., 2000; Mummery et al., 1999; Scott et al., 2000), the comparisons between the word and tone trials as well as between the word and sound trials (see Table 2 and Fig. 3) also uncovered left frontal activations in BA 47, an area usually associated with semantic and lexical tasks (Binder et al., 1997; Mazoyer et al., 1993). The presence of this activity could be explained by the experimental design: an event-related design can also capture temporally short activations in the initial phase of speech perception, which is not possible in studies using a blocked design. This activation therefore demonstrates the automatic processing of speech, even when no task was required. Because only BA 47 was activated, we interpret this as automatic access to semantic and lexical information related to the word being heard.
Conclusion
To summarize, this study demonstrates that the perception of auditory stimuli presented in a rapid, randomized event-related design leads to a highly differentiated pattern of activation. By using stimuli of different classes, we were able to uncover a posterior–anterior stream of speech processing in the left temporal lobe, comprising the planum temporale and the superior temporal sulcus. This posterior–anterior stream is characterized predominantly by an increasing sensitivity to speech or, more generally, to the presence of phonetic cues. Furthermore, the corresponding structures of the right temporal lobe seem to play a comparably crucial role in the processing of nonphonetic trials.
Acknowledgments
We thank Professor Kenneth Hugdahl from the University of Bergen, Norway, for very useful discussions about auditory perception and for his comments on this article, and Ralph Schnitker from the University of Technology, RWTH-Aachen, for lending us some of his auditory stimuli.
References
Annett, M., 1970. A classification of hand preference by association analysis. Br. J. Psychol. 61, 303–321.
Belin, P., Zatorre, R.J., Ahad, P., 2002. Human temporal-lobe response to
vocal sounds. Brain Res. Cogn Brain Res. 13, 17–26.
Belin, P., Zatorre, R.J., Lafaille, P., Ahad, P., Pike, B., 2000. Voice-selective areas in human auditory cortex. Nature 403, 309–312.
Belin, P., Zilbovicius, M., Crozier, S., Thivard, L., Fontaine, A., Masure,
M.C., Samson, Y., 1998. Lateralization of speech and auditory temporal processing. J. Cogn. Neurosci. 10, 536 –540.
Binder, J.R., Frost, J.A., Hammeke, T.A., Bellgowan, P.S., Rao, S.M., Cox,
R.W., 1999. Conceptual processing during the conscious resting state.
A functional MRI study. J. Cogn. Neurosci. 11, 80 –95.
Binder, J.R., Frost, J.A., Hammeke, T.A., Bellgowan, P.S., Springer, J.A.,
Kaufman, J.N., Possing, E.T., 2000. Human temporal lobe activation
by speech and nonspeech sounds. Cereb. Cortex 10, 512–528.
Binder, J.R., Frost, J.A., Hammeke, T.A., Cox, R.W., Rao, S.M., Prieto, T.,
1997. Human brain language areas identified by functional magnetic
resonance imaging. J. Neurosci. 17, 353–362.
Binder, J.R., Frost, J.A., Hammeke, T.A., Rao, S.M., Cox, R.W., 1996.
Function of the left planum temporale in auditory and linguistic processing. Brain 119 (4), 1239 –1247.
Binder, J.R., Rao, S.M., Hammeke, T.A., Frost, J.A., Bandettini, P.A.,
Jesmanowicz, A., Hyde, J.S., 1995. Lateralized human brain language
systems demonstrated by task subtraction functional magnetic resonance imaging. Arch. Neurol. 52, 593– 601.
Celsis, P., Boulanouar, K., Doyon, B., Ranjeva, J.P., Berry, I., Nespoulous,
J.L., Chollet, F., 1999. Differential fMRI responses in the left posterior
superior temporal gyrus and left supramarginal gyrus to habituation and
change detection in syllables and tones. NeuroImage 9, 135–144.
Engelien, A., Silbersweig, D., Stern, E., Huber, W., Doring, W., Frith, C.,
Frackowiak, R.S., 1995. The functional anatomy of recovery from
auditory agnosia. A PET study of sound categorization in a neurological patient and normal controls. Brain 118 (6), 1395–1409.
Fiez, J.A., Raichle, M.E., Balota, D.A., Tallal, P., Petersen, S.E., 1996.
PET activation of posterior temporal regions during auditory word
presentation and verb generation. Cereb. Cortex 6, 1–10.
Friston, K.J., Holmes, A., Poline, J.B., Price, C.J., Frith, C.D., 1996.
Detecting activations in PET and fMRI: levels of inference and power.
Neuroimage 4, 223–235.
Friston, K.J., Holmes, A.P., Poline, J.B., Grasby, P.J., Williams, S.C.,
Frackowiak, R.S., Turner, R., 1995. Analysis of fMRI time-series
revisited. Neuroimage 2, 45–53.
Friston, K.J., Mechelli, A., Turner, R., Price, C.J., 2000. Nonlinear responses in fMRI: the balloon model, Volterra kernels, and other hemodynamics. Neuroimage 12, 466–477.
Friston, K.J., Zarahn, E., Josephs, O., Henson, R.N., Dale, A.M., 1999.
Stochastic designs in event-related fMRI. Neuroimage 10, 607– 619.
Frost, J.A., Binder, J.R., Springer, J.A., Hammeke, T.A., Bellgowan, P.S.,
Rao, S.M., Cox, R.W., 1999. Language processing is strongly left
lateralized in both sexes: evidence from functional MRI. Brain 122 (2),
199 –208.
Gusnard, D.A., Raichle, M.E., Raichle, M.E., 2001. Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci 2, 685– 694.
Hall, D.A., Haggard, M.P., Akeroyd, M.A., Summerfield, A.Q., Palmer,
A.R., Elliott, M.R., Bowtell, R.W., 2000. Modulation and task effects
in auditory processing measured using fMRI. Hum. Brain Mapp. 10,
107–119.
Hugdahl, K., 2000. Lateralization of cognitive processes in the brain. Acta
Psychol. (Amst.) 105, 211–235.
Hugdahl, K., Bronnick, K., Kyllingsbaek, S., Law, I., Gade, A., Paulson,
O.B., 1999. Brain activation during dichotic presentations of consonant-vowel and musical instrument stimuli: a 15O-PET study. Neuropsychologia 37, 431– 440.
Hugdahl, K., Heiervang, E., Nordby, H., Smievoll, A.I., Steinmetz, H.,
Stevenson, J., Lund, A., 1998. Central auditory processing, MRI morphometry and brain laterality: applications to dyslexia. Scand. Audiol.
Suppl. 49, 26 –34.
Hugdahl, K., Law, I., Kyllingsbaek, S., Bronnick, K., Gade, A., Paulson,
O.B., 2000. Effects of attention on dichotic listening: an 15O-PET
study. Hum. Brain Mapp. 10, 87–97.
Jancke, L., Mirzazade, S., Shah, N.J., 1999. Attention modulates activity in
the primary and the secondary auditory cortex: a functional magnetic
resonance imaging study in human subjects. Neurosci. Lett. 266, 125–
128.
Jancke, L., Schlaug, G., Huang, Y., Steinmetz, H., 1994. Asymmetry of the
planum parietale. NeuroReport 5, 1161–1163.
Jancke, L., Specht, K., Shah, J.N., Hugdahl, K., 2003. Focused attention in
a simple dichotic listening task: an fMRI experiment. Brain Res. Cogn.
Brain Res. 16, 257–266.
Jancke, L., Steinmetz, H., 1993. Auditory lateralization and planum temporale asymmetry. NeuroReport 5, 169 –172.
Jancke, L., Wustenberg, T., Scheich, H., Heinze, H.J., 2002. Phonetic
perception and the temporal cortex. Neuroimage 15, 733–746.
Kent, R.D., 1998. Neuroimaging studies of brain activation for language,
with an emphasis on functional magnetic resonance imaging: a review.
Folia Phoniatr. Logop. 50, 291–304.
Mazoyer, B.M., Tzourio, N., Frak, V., Syrota, A., Murayama, N., Levrier,
O., 1993. The cortical representation of speech. J. Cogn. Neurosci. 5,
467– 469.
Mummery, C.J., Ashburner, J., Scott, S.K., Wise, R.J., 1999. Functional
neuroimaging of speech perception in six normal and two aphasic
subjects. J. Acoust. Soc. Am. 106, 449 – 457.
Price, C.J., Green, D.W., von Studnitz, R., 1999. A functional imaging
study of translation and language switching. Brain 122 (12), 2221–
2235.
Price, C.J., Wise, R.J., Warburton, E.A., Moore, C.J., Howard, D., Patterson, K., Frackowiak, R.S., Friston, K.J., 1996. Hearing and saying. The functional neuro-anatomy of auditory word processing. Brain 119 (3), 919–931.
Pugh, K.R., Shaywitz, B.A., Shaywitz, S.E., Fulbright, R.K., Byrd, D.,
Skudlarski, P., Shankweiler, D.P., Katz, L., Constable, R.T., Fletcher,
J., Lacadie, C., Marchione, K., Gore, J.C., 1996. Auditory selective
attention: an fMRI investigation. NeuroImage 4, 159 –173.
Rauschecker, J.P., 1998. Parallel processing in the auditory cortex of
primates. Audiol. Neurootol. 3, 86 –103.
Rauschecker, J.P., Tian, B., 2000. Mechanisms and streams for processing
of “what” and “where” in auditory cortex. Proc. Natl. Acad. Sci. USA
97, 11800 –11806.
Scott, S.K., Blank, C.C., Rosen, S., Wise, R.J., 2000. Identification of a
pathway for intelligible speech in the left temporal lobe. Brain 123
(12), 2400 –2406.
Specht, K., Holtel, C., Zahn, R., Herzog, H., Krause, B.J., Mottaghy, F.M.,
Radermacher, I., Schmidt, D., Tellmann, L., Weis, S., Willmes, K.,
Huber, W., 2003. Lexical decision of nonwords and pseudowords in
humans: a positron emission tomography study. Neurosci. Lett. 345,
177–181.
Tervaniemi, M., Kujala, A., Alho, K., Virtanen, J., Ilmoniemi, R.J., Naatanen, R., 1999. Functional specialization of the human auditory cortex
in processing phonetic and musical sounds: a magnetoencephalographic (MEG) study. NeuroImage 9, 330 –336.
Tervaniemi, M., Medvedev, S.V., Alho, K., Pakhomov, S.V., Roudas,
M.S., Van Zuijen, T.L., Naatanen, R., 2000. Lateralized automatic
auditory processing of phonetic versus musical information: a PET
study. Hum. Brain Mapp. 10, 74 –79.
Tzourio, N., Massioui, F.E., Crivello, F., Joliot, M., Renault, B., Mazoyer,
B., 1997. Functional anatomy of human auditory attention studied with
PET. NeuroImage 5, 63–77.
Tzourio, N., Nkanga-Ngila, B., Mazoyer, B., 1998. Left planum temporale
surface correlates with functional dominance during story listening.
NeuroReport 9, 829 – 833.
Wise, R.J., Scott, S.K., Blank, S.C., Mummery, C.J., Murphy, K., Warburton, E.A., 2001. Separate neural subsystems within ‘Wernicke’s
area’. Brain 124, 83–95.
Zatorre, R.J., Belin, P., 2001. Spectral and temporal processing in human
auditory cortex. Cereb. Cortex 11, 946 –953.
Zatorre, R.J., Belin, P., Penhune, V.B., 2002. Structure and function of auditory cortex: music and speech. Trends Cogn. Sci. 6, 37–46.
Zatorre, R.J., Evans, A.C., Meyer, E., Gjedde, A., 1992. Lateralization of
phonetic and pitch discrimination in speech processing. Science 256,
846 – 849.