Journal of the International Neuropsychological Society (2002), 8, 607–622.
Copyright © 2002 INS. Published by Cambridge University Press. Printed in the USA.
DOI: 10.1017/S1355617702801394
Semantic monitoring of words with emotional connotation during fMRI: Contribution of anterior left frontal cortex
BRUCE CROSSON,1,7 M. ALLISON CATO,1 JOSEPH R. SADEK,1 DIDEM GÖKÇAY,2 RUSSELL M. BAUER,1 IRA S. FISCHLER,3 LEEZA MARON,1 KAUNDINYA GOPINATH,4 EDWARD J. AUERBACH,5 SAMUEL R. BROWD,6 and RICHARD W. BRIGGS5
McKnight Brain Institute of the University of Florida and Departments of 1Clinical and Health Psychology, 2Computer Information Sciences and Engineering, 3Psychology, 4Nuclear and Radiological Engineering, 5Radiology, 6Neuroscience, University of Florida Health Science Center, Gainesville, Florida
7Veterans Affairs RR&D Brain Rehabilitation Research Center, VA Medical Center, Gainesville, Florida
(Received January 26, 2001; Revised August 27, 2001; Accepted September 5, 2001)
Abstract
Previous studies showed that cortex in the anterior portions of the left frontal and temporal lobes participates in
generating words with emotional connotations and processing pictures with emotional content. If these cortices
process the semantic attribute of emotional connotation, they should be active whenever processing emotional
connotation, without respect to modality of input or mode of output. Thus, we hypothesized that they would activate
during monitoring of words with emotional connotations. Sixteen normal subjects performed semantic monitoring
of words with emotional connotations, animal names, and implement names during fMRI. Cortex in the anterior left
frontal lobe demonstrated significant activity for monitoring words with emotional connotations compared to
monitoring tone sequences, animal names, or implement names. Together, the current and previous results implicate
cortex in the anterior left frontal lobe in semantic processing of emotional connotation, consistent with connections
of this cortex to paralimbic association areas. Current findings also indicate that neural substrates for processing
emotional connotation are independent of substrates for processing the categories of living and nonliving things.
(JINS, 2002, 8, 607–622)
Keywords: Frontal lobe, Temporal lobe, Limbic system, Functional magnetic resonance imaging, Emotion,
Semantics
Reprint requests to: Bruce Crosson, Ph.D., Department of Clinical and Health Psychology, University of Florida Health Science Center, Box 100165, Gainesville, FL 32610-0165. E-mail: [email protected]
INTRODUCTION
Attributes are features that distinguish members of semantic categories from each other. Inability to process particular semantic attributes after brain injury may contribute to category-specific semantic impairments. The most common category-specific deficits involve living versus nonliving things, or sometimes more specifically animals versus implements (e.g., Caramazza & Shelton, 1998; Damasio et al., 1996; Hillis & Caramazza, 1991; Sacchett & Humphreys, 1992; Tranel et al., 1997). The semantic attributes of visual form and function are thought important for distinguishing amongst living and nonliving things, respectively (e.g., Hart & Gordon, 1992; Warrington & McCarthy, 1983). Cortex in the ventral temporal lobe, referred to as the "ventral visual stream," is critical for analyzing visual form (Ungerleider & Mishkin, 1982) and frequently is damaged in patients having category-specific deficits for living things (e.g., Ferreira et al., 1997; Warrington & Shallice, 1984). However, functional neuroimaging experiments have produced conflicting results regarding the role of the ventral visual stream in processing visual form as a semantic attribute. Thompson-Schill et al. (1999) asserted that processing verbal information about living items engaged cortex in the fusiform gyrus (i.e., in the ventral visual stream) because of the importance of visual form in defining living things, but verbal processing of nonliving items engaged cortex of the fusiform gyrus only when subjects attended to
visual form. On the other hand, Chao et al. (1999) demonstrated that the medial fusiform gyrus is more sensitive to
nonliving things, and the lateral fusiform gyrus to living
things, even when items were presented as written names
and no attempt was explicitly made to analyze visual form.
One problem with the latter study was that differences in
location of activity for nonliving and living things (Chao
et al., 1999) could occur either because their visual attributes
required separate perceptual mechanisms, or because a visual semantic system in the fusiform gyrus is topographically divided by category. Indeed, findings of previous
studies have supported the existence of separate visual
and verbal semantic systems (e.g., Hart & Gordon, 1992;
McCarthy & Warrington, 1988). Because cortex in the ventral visual stream could play a fundamental role in object
identification (i.e., visual semantics) in addition to processing visual attributes, attributes other than visual form must
be studied to ascertain the existence of neural substrates for
semantic attributes apart from the distinction of semantic
categories. Emotional connotation is an ideal semantic
attribute for investigating neural substrates. First, emotional
connotation varies between objects within traditional semantic categories, such as living and nonliving things. For
example, within the category of nonliving things, a cake
has a positive emotional connotation, a coffin has a negative emotional connotation, and a hammer has a neutral emotional connotation. This feature of emotional connotation
allows investigators to rule out common semantic categories as a confounding factor in the study of emotional connotation. Second, our knowledge of emotional processing
systems in the brain allows us to identify cortical regions
that have access to emotional information, which should be
good candidates for processing the semantic attribute of
emotional connotation. Thus, cortices with access to the
limbic system are good candidates for processing emotional connotation. Indeed, cortices in the more anterior
portions of the left frontal and temporal lobes, which have
access to limbic structures and/or paralimbic association
areas (Pandya & Yeterian, 1986), show increased activity
when subjects generate words with emotional connotations
(Crosson et al., 1999). Activity in the more anterior portions of the right or both temporal lobes (Canli et al., 1998;
Lane et al., 1999) and in the more anterior portion of the
left frontal lobe (Lane et al., 1997; Paradiso et al., 1999)
appears when viewing pictures with emotional content. Not
only are these cortices characterized by connections to limbic structures, they are outside areas usually implicated in
the more central aspects of either visual or verbal semantic
processing. Thus, activity in the more anterior portions of
the left frontal and temporal lobes cannot be attributed to
fundamental semantic processes in either the visual or verbal modalities.
However, while these findings regarding generating words
and viewing pictures with emotional connotations suggest
that cortices in the anterior left frontal and temporal lobes
are involved in processing the semantic attribute of emotional connotation, one other criterion must be met in order
to confirm this hypothesis: The comprehension of words
with emotional connotations must demonstrate involvement of the same cortices. Considerable evidence indicates
that spoken words, heard words, and seen objects may have
separate lexical or visual form representations (e.g., Ellis &
Young, 1988). Each of these lexical and visual forms access
stored semantic information to understand or to convey
meaning. If the contribution of a cortical structure is based
on representation of a particular kind of lexical or visual
form then it should be uniquely active when that kind of
lexical or visual form is processed. On the other hand, if the
contribution of a cortical structure is based on the representation of semantic information, it will be engaged whenever
the specific kind of semantic representations that it supports are processed. Thus, cortices that process the semantic attribute of emotional connotation should be active
whenever emotional connotation is processed, without respect to the source of lexical or visual input, or without
respect to mode of output. It follows that if cortices in the
anterior left frontal and temporal lobes are important for
processing emotional connotation as a semantic attribute,
they should demonstrate increased activity when recognizing spoken words with emotional connotations, as well as
when generating words with emotional connotations or viewing pictures with emotional content.
To test this hypothesis, neurologically normal subjects
performed three semantic monitoring tasks during fMRI of
left-hemisphere activity: monitoring of nouns with emotional connotations, monitoring of implement names, and
monitoring of animal names. The four-spiral pulse sequence which we used to acquire our data allowed us to
collect only nine functional imaging slices, which precluded a whole-brain acquisition. We chose to cover the
entire left hemisphere with these nine slices because of the
dominance of the left hemisphere for processing words,
with or without emotional connotations. For example, lesion data suggest that difficulty processing emotional connotation through linguistic channels is a function of linguistic
competence of the left hemisphere rather than a function of
the right hemisphere (Heilman et al., 1993). Further, the
data of Ali and Cimino (1997) indicate that words tachistoscopically presented to the left hemisphere (right visual field)
are more accurately perceived than words presented to the
right hemisphere (left visual field) for positive, negative, or
neutral emotional connotation. The magnitude of this effect
is greater than within hemisphere differences in accuracy
between word types. Each of the three semantic monitoring
tasks in the present experiment was alternated with a tone
monitoring task during separate imaging runs. It was hypothesized that monitoring of nouns with emotional connotations would demonstrate greater activity than the tone
monitoring control task in the more anterior portions of the
left frontal and temporal lobes, while monitoring of implement or animal names would not show such changes in
comparison to the tone monitoring control task. One weakness in our previous study of generation of words with emotional connotations (Crosson et al., 1999) was that no
systematic attempt was made to exclude the distinction between living and nonliving categories as a potential confound. Thus, in the current study, words with emotional
connotations equally represented living and nonliving things,
and items for the two semantic monitoring comparison tasks
were chosen from prototypical living items (animal names)
and prototypical nonliving items (tool names). If the processing of emotional connotation in the more anterior portions of the left frontal and temporal lobes is orthogonal to
the living–nonliving distinction, processing of emotional
connotation should show greater activity at these locations
than monitoring of either implement or animal names. Finally, since words in all three semantic monitoring tasks
were selected for their ability to evoke visual imagery, we
hypothesized all semantic monitoring tasks would show activity in the left fusiform gyrus compared to the tone monitoring task.
MATERIALS AND METHODS
Research Participants
Sixteen strongly right-handed healthy participants (8 male, 8 female) took part in the study. Subjects' ages ranged from 21 to 37 years (M = 26.3 years, SD = 5.0); education ranged from 12 to 24 years (M = 17.7 years, SD = 3.0). Edinburgh Handedness Inventory laterality quotients (Oldfield, 1971) ranged from 53 to 100 (M = 82.8, SD = 18.6), indicating
strong right handedness. All participants gave written informed consent according to procedures established by the
Health Center Institutional Review Board at the University
of Florida.
Experimental Tasks and
Manipulation Check
Participants alternated between a semantic monitoring task
and a tone monitoring task during three separate fMRI imaging runs. In the semantic monitoring tasks, subjects monitored one specific type of word for two relevant semantic
characteristics. During the control task, subjects monitored
tone sequences for the presence of two or more high-pitched tones. Each functional imaging run consisted of 8
cycles of alternation between semantic monitoring and tone
monitoring. Half cycles lasted 28 s, and stimuli were presented at precise intervals using a computerized playback
system. Stimuli were presented binaurally with volume individually adjusted for optimal hearing during scanning
(Kenwood KR-A4070 amplifier, Realistic 31-2005 Ten band
stereo frequency equalizer, JBL 16 W speaker, and acoustically insulated air conduction transducer with foam insert
earphones). Stimuli in the semantic monitoring tasks were
spoken English nouns. Nine spoken English nouns were
presented in each 28-s half cycle, for a total of 72 nouns
across all 8 cycles. Three types of words were presented in
separate experimental runs: words with emotional connota-
tions, animal names, and implement names. No words were
duplicated between or within the different experimental runs.
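The run structure described above (8 cycles, 28-s half cycles, 9 nouns per word half cycle) can be sketched as follows. This is our illustration, not the authors' software; the even spacing of onsets is an assumption, since the text says only that stimuli were presented at precise intervals.

```python
# Hypothetical sketch of the block timing; names and even spacing are ours.

def build_word_onsets(n_cycles=8, half_cycle_s=28.0, words_per_half=9):
    """Onset times (s) for word stimuli: each cycle is a 28-s word half
    cycle followed by a 28-s tone half cycle; 9 words per word half."""
    isi = half_cycle_s / words_per_half  # ~3.11 s between word onsets
    onsets = []
    for c in range(n_cycles):
        block_start = c * 2 * half_cycle_s  # word half leads each cycle
        onsets.extend(block_start + w * isi for w in range(words_per_half))
    return onsets

onsets = build_word_onsets()
# 8 cycles x 9 words = 72 onsets; a whole run lasts 8 x 56 s = 448 s
```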
To ensure the presence of implied emotional content,
nouns with emotional connotations were selected from the
Affective Norms for Emotional Words (ANEW; Bradley
et al., 1988). ANEW words were rated for emotional valence and arousal by a large sample of normal participants
similar to participants from the current study. Ratings for
emotional valence were made on a 9-point scale with 1
representing the most emotionally negative valence, 9 representing the most emotionally positive valence, and 5 representing emotionally neutral. ANEW ratings for emotional
arousal were made on a 9-point scale, with 1 representing
minimally arousing, 9 representing maximally arousing, and
5 representing a moderate level of arousal. Using the valence and arousal ratings from ANEW, nouns with negative
connotations were chosen from words with valence ratings
less than 3.5 and with arousal ratings above 5. Nouns with
positive connotations were selected from words with valence ratings higher than 6.5 and arousal ratings above 5.
All nouns were selected for high imageability (above 3,
range 1–5) because stimuli for the comparison tasks, animal and tool names, were highly imageable.
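The valence and arousal cutoffs above amount to a simple two-rule filter, sketched below. The ratings shown are invented placeholders, not actual ANEW norms.

```python
# Illustrative application of the selection cutoffs; ratings are invented.
ratings = {
    "tornado": {"valence": 2.0, "arousal": 6.5},  # hypothetical values
    "puppy":   {"valence": 7.5, "arousal": 5.9},
    "hammer":  {"valence": 4.9, "arousal": 4.6},
}

def connotation(r):
    """Classify a word by the paper's valence/arousal cutoffs."""
    if r["valence"] < 3.5 and r["arousal"] > 5:
        return "negative"
    if r["valence"] > 6.5 and r["arousal"] > 5:
        return "positive"
    return "not selected"
```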
The purpose of the present experiment was to study left-hemisphere activity during comprehension of a semantic
attribute, emotional connotation, which was orthogonal to
common semantic categories. The most common category-specific deficit after brain damage involves living versus nonliving
things (e.g., Caramazza & Shelton, 1998; Damasio et al.,
1996; Hillis & Caramazza, 1991; Sacchett & Humphreys,
1992; Tranel et al., 1997). Thus, to ensure that emotional
connotation would not be confounded with the distinction
between living and nonliving categories, words with emotional connotations consisted of equal numbers of living
and nonliving items. Further, since our intention was to
study emotional connotation as a general semantic attribute,
equal numbers of words with positive emotional valence
and negative emotional valence were selected. To further
ensure that processing of emotional connotation was independent of the living–nonliving distinction, prototypical living items (animal names) and nonliving items (implement
names) were selected for semantic monitoring comparison
tasks.
Based on these considerations, equal numbers of words
with emotional connotations were selected from the following possibilities: positive emotional connotation–living, positive emotional connotation–nonliving, negative emotional
connotation–living, and negative emotional connotation–
nonliving. There was no overlap between the list of words
with emotional connotations and the animal or implement
lists. Equal numbers of animal names were selected from
the following possibilities: water dwelling–two legs or less;
water dwelling–more than two legs; land dwelling–two legs
or less; land dwelling–more than two legs. Implement names
were equally selected from the following groups: used
primarily outdoors–requires two hands; used primarily
outdoors–requires one hand only; used primarily indoors–
requires two hands; used primarily indoors–requires one
hand only. Because animal names generally are less commonly used than tool names in English, because all words
with emotional connotation were selected from the ANEW
list, and because the corpus of words piloted for each task
had to be so large (n = 99 per task), word lists of animal
names, implement names, and words with emotional connotations could not be matched for frequency of usage in
the English language. Frequency of usage in the English
language was calculated on the basis of the norms of Francis and Kucera (1982); when a word was not represented in
this corpus, it was assigned a value of zero. Average frequency of usage per million words for each category was as follows: animal mean = 3.25, SD = 5.69; implement mean = 11.10, SD = 19.90; words with emotional connotation mean = 31.06, SD = 61.06. However, when possible, words
from Toglia and Battig’s (1978) norms were selected (34 of
72 words for animal names; 37 of 72 words for implement
names; 45 of 72 words with emotional connotation). All
these words had imagery, concreteness, and familiarity ratings above the midpoint of the rating scale (4 on a 7-point
scale), indicating that words had high imageability, concreteness, and familiarity.
Using a semantic monitoring task similar to that of Binder
et al. (1997, 1999) and Demonet et al. (1992), subjects monitored words for two designated criteria. For words with
emotional connotations, subjects pressed a button when the
item was a nonliving thing and had a negative emotional
connotation (e.g., vomit and tornado would be targets; roach,
puppy, and cake would be nontargets). Negative emotional
connotation was selected to enhance participants’ attention
to emotional attributes; the nonliving distinction was chosen so that participants would attend to the category distinction which guided the selection of semantic monitoring
control tasks. Because the current study focused on a semantic attribute, emotional connotation, the semantic monitoring comparison tasks emphasized attributes which were
characteristic of the categories used in these tasks and which
were largely devoid of emotional connotation. For animal
names, subjects pressed a button when the animal had more
than two legs and was land-dwelling (e.g., moose and grasshopper would be targets; dolphin, crab, and ostrich would
be nontargets). For implement names, subjects pressed a
button when the implement was used primarily outdoors
and required only one hand (e.g., hatchet and baseball would
be targets; shovel, spatula, and keyboard would be nontargets).
The tasks were matched in frequency of targets (average of
two targets per half cycle). Responses consisted of a thumb
press to a hand-held button assembly placed in the subject’s
left hand when a target was recognized. Button presses produced a visual signal in the control room. The notebook
computer used for stimulus presentation recorded those items
that were endorsed as targets, and measured response latency to those stimuli via a program written for this purpose. The order of the experimental runs (words with
emotional connotation vs. animal names vs. implement
names) was counterbalanced across subjects.
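The three target rules, each a conjunction of two criteria, can be made explicit as a predicate. The feature dictionary below is a hypothetical encoding of our own, not the authors' materials.

```python
# Sketch of the three target rules; the feature encoding is hypothetical.

def is_target(task, f):
    """Return True if a word meets both criteria for the given task."""
    if task == "emotional":   # nonliving AND negative connotation
        return f["nonliving"] and f["negative"]
    if task == "animal":      # more than two legs AND land-dwelling
        return f["legs"] > 2 and f["land_dwelling"]
    if task == "implement":   # used primarily outdoors AND one hand only
        return f["outdoors"] and f["one_hand"]
    raise ValueError(f"unknown task: {task}")

# e.g., "moose": is_target("animal", {"legs": 4, "land_dwelling": True})
```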
A pilot study was conducted with different subjects to
equate the three experimental tasks for accuracy and reaction time. During the pilot study, a separate group of 7
subjects (5 F, 2 M; graduate students) listened to words and
pressed a button in their right hand if the word met both
criteria for the task. They pressed a button in their left hand
if the word did not. This dual response format was used for
the pilot study so that reaction time could be used as one
index of the ambiguity or difficulty of semantic judgments
for both target and nontarget items. Subjects performed the
three monitoring tasks in separate experimental runs. Ninety-nine words were used in each task, divided into 11 cycles.
Words were excluded if they were excessively ambiguous
or difficult, as indexed by variability in response or long
reaction times.
The control task was a tone monitoring task patterned
after that of Binder et al. (1997). Subjects heard sequences
of high- and low-frequency digitally synthesized tones. Sequences of three to seven tones were presented, and subjects pressed a button following each sequence that contained
two or more high-frequency tones. Target density for tone
monitoring was matched to that of the semantic monitoring
tasks. The control task was selected to mitigate the effects
of auditory input, attentional demands, and motor responding. We did not choose words or word-like stimuli for a
control task because of their potential for activating semantic searches (see, e.g., Price et al., 1996).
Image Acquisition
Each experimental run consisted of 8 cycles, alternating between 28-s periods of word monitoring and 28-s periods of tone monitoring. For each functional image slice, there were 8 images per half cycle (128 images per task). Data were acquired on a 1.5 T GE Signa scanner using a dome-shaped quadrature radio frequency head coil. The head was aligned such that the interhemispheric fissure was within 1° of vertical. The most medial sagittal slice for functional images was placed such that the medial edge of the slice corresponded with the medial boundary of the left hemisphere. For fMRI sequences, nine slices (6.5 to 6.9 mm thick) were used to cover the entire left hemisphere. (At the time of the experiment, our imaging software was limited to nine slices, and the nine slices were used to cover the left hemisphere.) Functional images were acquired using a gradient echo spiral scan sequence (Noll et al., 1995) with TE = 40 ms, TR = 885 ms, FA = 45°, FOV = 18 cm, 4 spirals. After functional image acquisition for all tasks, structural images were acquired of the whole brain (124 × 1.3 mm thick sagittal slices), using a 3D spoiled GRASS acquisition (TE = 7 ms, TR = 27 ms, FA = 60°, NEX = 1, FOV = 24 cm, matrix size = 256 × 192).
Image Analysis
Functional images were analyzed and overlaid onto structural images using the AFNI program from the Medical
College of Wisconsin (Cox, 1996). To correct for head motion, all individual functional images were registered to the
last functional image of the last run with an iterated differential spatial method of alignment. Subsequently, images
were visually inspected for gross artifacts and viewed in
cine loop to detect residual motion. If any time series of a
subject was judged to contain a significant number of images with gross artifacts or residual motion, the subject’s
data were removed from further analyses. To control for
large vessel effects and other sources of artifact, voxels
where the standard deviation of the signal was more than
5% of the mean intensity were eliminated from the functional image series for each run before further processing.
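The artifact screen just described is a per-voxel threshold on temporal variability; a minimal sketch, with array shapes of our own choosing:

```python
import numpy as np

# Sketch of the large-vessel/artifact screen: keep a voxel only if its
# temporal SD is at most 5% of its mean intensity. Shapes are assumed.

def stable_voxel_mask(series):
    """series: (n_timepoints, n_voxels) -> boolean keep-mask per voxel."""
    mean = series.mean(axis=0)
    sd = series.std(axis=0)
    return sd <= 0.05 * np.abs(mean)
```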
Next, difference images between experimental and control tasks were created as follows. For each experimental or
control task half cycle, the first three functional images
were discarded because they represented transition of the
hemodynamic response between the two task states. Subsequently, within each voxel, a mean signal intensity for
each half cycle was derived from the remaining 5 images.
For each experimental run, the control task mean was subtracted from the semantic monitoring mean for each of the
8 cycles on a voxel-by-voxel basis. To measure changes in
signal intensity between comparison conditions for each
subject, two types of t tests were conducted on a voxelby-voxel basis: (1) response to alternations between each
semantic monitoring task (nouns with emotional connotations, animal names, and implement names) and the tonemonitoring control task were compared; (2) differences
between each of the semantic monitoring tasks and the tone
monitoring task were compared to differences between all
other semantic monitoring tasks and the tone monitoring
task. For each subject, resulting functional intensities consisted of t values for the various comparisons.
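The difference-image computation above (discard the first 3 of 8 images per half cycle, average the remaining 5, subtract control from semantic per cycle, then test the cycle differences voxelwise) can be sketched as follows. The assumption that the semantic half precedes the control half within each cycle is ours.

```python
import numpy as np

# Minimal sketch of the per-cycle difference and voxelwise t statistic.
# 8 images per half cycle; the first 3 are discarded as the hemodynamic
# transition; the semantic half is assumed to precede the control half.

def cycle_differences(run, n_cycles=8, per_half=8, discard=3):
    """run: (n_timepoints, n_voxels) -> (n_cycles, n_voxels) diffs."""
    diffs = []
    for c in range(n_cycles):
        s = c * 2 * per_half
        sem = run[s + discard : s + per_half].mean(axis=0)
        ctl = run[s + per_half + discard : s + 2 * per_half].mean(axis=0)
        diffs.append(sem - ctl)
    return np.stack(diffs)

def voxel_t(diffs):
    """One-sample t of the cycle differences against zero, per voxel."""
    n = diffs.shape[0]
    return diffs.mean(axis=0) / (diffs.std(axis=0, ddof=1) / np.sqrt(n))
```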
Individual anatomic scans and functional images were
interpolated to 1 mm³ voxels, co-registered and converted
to standard atlas space (Talairach & Tournoux, 1988). To
compensate for individual subject variability in structural
and functional anatomy, functional image volumes were
smoothed 3 mm, by replacing each voxel value with the
average value of the others in a 3-mm radius. The functional data sets were then merged across subjects by averaging the t statistics in each voxel; i.e., the functional
intensity within each voxel was the average t value across
the 16 subjects, hereafter referred to as tave. The procedure
of averaging t values was used to control for heteroscedasticity of MR signal variation between subjects arising from
sources unrelated to experimental conditions, such as differing degrees of tissue pulsatility, variability in global blood
flow or hemodynamic reactivity, or scanner variability between sessions (Binder et al., 1999). Similar to the analyses
of Binder et al. (1997), a probability of p ≤ .0001 was used
along with the Cornish-Fisher expansion (Fisher & Cornish, 1960) to select a threshold for rejection of the null
hypothesis when semantic monitoring tasks were compared
directly to the tone monitoring control task. When semantic
monitoring-control task differences were compared be-
tween the different semantic monitoring tasks, a probability
level of p ≤ .005 (Cornish-Fisher expansion) was used as a
threshold to determine significance. The relaxed probability criterion for the latter comparisons was necessary to
increase sensitivity of comparisons because the semantic
monitoring tasks being compared were all similar language
tasks, with fewer differences in cognitive components than
word monitoring versus tone monitoring comparisons. Such
procedures have been used in other functional neuroimaging studies of language (e.g., Martin et al., 1996). For the
comparisons among semantic monitoring tasks, all areas
meeting significance criteria are listed in the results section, but interpretation focused on the a priori hypothesis
that activity in the more anterior portions of the left frontal
and temporal lobes would be stronger for monitoring words
with emotional connotations than for monitoring animal or
implement names. To further control for type I statistical
error, only contiguous volumes ≥ 200 μl were interpreted
in all comparisons.
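The group statistic and the cluster-extent screen can be sketched together. For brevity the sketch treats clusters as contiguous runs along one dimension; the actual analysis used 3-D clusters of at least 200 μl.

```python
import numpy as np

# Sketch of the group statistic (average of subject t maps) and a
# cluster-extent screen, simplified to contiguous 1-D runs of voxels.

def t_ave(subject_t_maps):
    """Average the per-subject t maps voxelwise."""
    return np.mean(subject_t_maps, axis=0)

def surviving_voxels(vals, thresh, min_extent):
    """Indices in suprathreshold runs at least min_extent voxels long."""
    keep, run = [], []
    for i, v in enumerate(vals):
        if abs(v) > thresh:
            run.append(i)
        else:
            if len(run) >= min_extent:
                keep.extend(run)
            run = []
    if len(run) >= min_extent:
        keep.extend(run)
    return keep
```

Averaging t values across subjects (rather than pooling raw signal) is what insulates the group statistic from between-subject differences in signal variance, as the text notes.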
RESULTS
Analysis of Behavioral Data
Reaction times for items that subjects endorsed as targets
were compared between word monitoring tasks to ascertain
if subjects spent more time on one task versus others and as
an index of task difficulty. Average reaction times for target
items are presented in Table 1. A one-way repeated measures analysis of variance indicated no significant differences between word monitoring tasks in reaction times
[F(2,26) = 1.47, p = .25]. Thus, word monitoring tasks
were equivalent with respect to this measure.
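A one-way repeated-measures F of the kind reported above partitions total variability into task, subject, and error sums of squares. A minimal sketch, with invented data rather than the study's reaction times:

```python
import numpy as np

# Sketch of a one-way repeated-measures ANOVA F on (subjects x tasks)
# reaction times; the example data below are invented.

def rm_anova_f(data):
    """data: (n_subjects, k_tasks) -> (F, df_task, df_error)."""
    n, k = data.shape
    grand = data.mean()
    ss_task = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_task - ss_subj
    df_task, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_task / df_task) / (ss_err / df_err), df_task, df_err
```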
Comparison of Semantic Monitoring Tasks to Tone Monitoring Tasks
Based on previous literature, two a priori hypotheses could
be evaluated in comparisons of semantic monitoring and
tone monitoring tasks. The first stated that activity in the
more anterior portions of the left frontal and temporal lobes
would be stronger for monitoring words with emotional
connotations than for monitoring animal or implement
names. Second, because the semantic attributes of animals
and implements include visual form, and because items with
emotional connotations were selected for ease of visualization, it was hypothesized that all word monitoring tasks
Table 1. Mean reaction times for target responses

Task                      Reaction time (s)    SD
Emotional connotations    1.95                 0.18
Animal names              2.00                 0.59
Implement names           2.16                 0.23
would demonstrate activity in the fusiform gyrus compared
to the tone monitoring control task.
Each of the semantic monitoring tasks was compared
directly to the control task (monitoring of tone sequences).
T values were derived within voxels for each subject and
then averaged across subjects. Thus, the functional intensity for group images was an average t value, referred to as tave. A common criterion for significance was applied to all comparisons (probability for tave ≤ .0001, cluster of contiguous activity > 200 μl).
Activity in the More Anterior Portions of the
Left Frontal and Temporal Lobes
The first experimental hypothesis stated that semantic monitoring of words with emotional connotations would produce activity in the more anterior portions of the left frontal
and temporal lobes while monitoring of animal and implement names would not produce such activity. If this hypothesis is correct, then comparison of semantic monitoring for
words with emotional connotations to the tone monitoring
task should evoke activity in the anterior left frontal and
temporal lobes whereas experimental-control task comparisons for animal and implement names should not evoke
such activity. For monitoring words with emotional connotations, Table 2 shows a large volume of activity whose
maximum tave (max tave) is in the anterior left frontal lobe
(anterior superior frontal gyrus). Figure 1 indicates this activity actually begins on the crest of the left hemisphere just
lateral to midline and anterior to pre-SMA. Then it extends
anteriorly to cortex near the frontal pole, spanning the entirety of what Talairach and Tournoux (1988) labeled Brodmann’s areas (BAs) 8 and 9, and extending into BA10 near
the frontal pole. The max tave within this cluster is somewhat posterior and superior to the activity that Crosson et al.
(1999) found near the frontal pole for generation of words
with emotional connotations versus emotionally neutral
words (Talairach coordinates for maximum functional intensity from previous study 5 213, 62, 27). However, the
activity in the current comparison subsumes the area of
activity difference from the previous study. Although there
was no activity at or just behind the left temporal pole in the
current study, there was an area of activity in the anterior
portion of the superior temporal gyrus (Table 2, Figure 2)
about 20 mm behind the maximum functional intensity from
the word generation study (Talairach coordinates for maximum functional intensity from the previous study = −48, 17, −15). Thus, for monitoring of words with emotional
connotations in the current study, activity in the more anterior portions of the left frontal lobe was confirmed, and
anterior left temporal lobe activity was observed somewhat
posterior to the temporal pole.
No activity in the more anterior portions of the left frontal and temporal lobes was found for monitoring of animal
names; however, monitoring of implement names produced
activity in the anterior left frontal lobe and posterior to the
temporal pole near the max tave for monitoring words with
emotional connotations (Figures 1 and 2, respectively). In
both locations, activity was more widespread for words
with emotional connotations than for implement names. For
monitoring words with emotional connotations, the activity
in the anterior left frontal lobe was contiguous with activity
extending posteriorly along the crest of the frontal lobe, but
for implement names, activity was not contiguous along the
crest of the frontal lobe. Instead, for implement names, one
superior area of activity located more posteriorly in the
medial frontal cortex merged with a large area of lateral
frontal activity (Table 2). The spatial extent of activity in
the anterior temporal lobe was about 75% greater for words
with emotional connotations compared to tool names.
Activity in the Fusiform Gyrus
The second hypothesis predicted that in semantic monitoring–tone monitoring comparisons, activity would be present in
the fusiform gyrus for all three experimental conditions
(monitoring words with emotional connotations, monitoring animal names, monitoring implement names) compared
to the tone monitoring control task. The findings were as
expected (Table 2, Figure 3). The activity was more medially located for animal names than for implement names
and words with emotional connotations. For implement
names, the activity merged with activity located more laterally in the inferior temporal gyrus.
Other Areas of Activity
For experimental–control task comparisons, a number of
other volumes of significant activity were present (Table 2).
All tasks share some common volumes of activity. Every
semantic monitoring task evoked activity in the lateral prefrontal cortex; the volume was largest and quite extensive
for monitoring of implement names, and it was smallest but
still substantial for animal names. Monitoring of all three
word types also elicited activity centering in the angular
gyrus; again, the activity was most extensive and occupied
surrounding areas for implements, and it was least extensive for animals.
In addition to activity in the more anterior portion of the
left frontal lobe and activity in the anterior temporal cortex,
monitoring of words with emotional connotations and implement names had small volumes of activity in common
for two other regions. Compared to the tone monitoring
control task, these two experimental tasks both evoked a
separate volume of activity in pre-SMA; the activity was
slightly more superior in monitoring of emotional words.
The posterior cingulate region also demonstrated activity
during both tasks, though the activity for monitoring of
words with emotional connotations was anterior and superior to that for monitoring implement names.
Monitoring of implement names and animal names also
showed activity in common for three other regions. First,
compared to the tone monitoring task, both of these semantic monitoring tasks evoked activity in the superior portion
Table 2. Volumes of tissue showing significant activity changes for semantic monitoring tasks versus tone monitoring task
Volumes of activity change > 200 μl, tave p < .0001. Each entry gives the cluster volume in μl, putative anatomic area, Talairach location of the maximum tave (max tave loc), and max tave.

Semantic Monitoring > Tone Monitoring

Cortex in anterior frontal lobe
  Emotional connotation vs. tone sequences: 8815 μl, BA 8, 9, 10 (−12, 50, 46), max tave = 3.16
  Tool names vs. tone sequences: 608 μl, BA 10 (−11, 66, 21), max tave = 1.77

Anterior temporal cortex
  Emotional connotation vs. tone sequences: 859 μl, BA 21, 22 (−58, −3, −11), max tave = 1.68
  Tool names vs. tone sequences: 489 μl, BA 21, 22 (−57, −2, −5), max tave = 1.68

Fusiform gyrus
  Emotional connotation vs. tone sequences: 275 μl, BA 37 (−42, −47, −14), max tave = 1.63
  Tool names vs. tone sequences: 4053 μl, BA 37 (−43, −42, −12), max tave = 2.13 (extends from peak in fusiform gyrus to inferior temporal gyrus, BA 37)
  Animal names vs. tone sequences: 224 μl, BA 37 (−34, −33, −20), max tave = 1.61

Lateral frontal
  Emotional connotation vs. tone sequences: 6061 μl, BA 9, 10, 44, 45, 47 (−52, 26, 15), max tave = 2.22
  Tool names vs. tone sequences: 22155 μl, BA 8, 9, 10, 44, 45, 46, 47 (−49, 44, 3), max tave = 3.24
  Animal names vs. tone sequences: 1518 μl, BA 9, 10, 45 (−50, 32, 19), max tave = 2.41

Pre-SMA/other medial frontal
  Emotional connotation vs. tone sequences: 351 μl, BA 6 (−5, 23, 59), max tave = 1.82
  Tool names vs. tone sequences: 349 μl, BA 6 (−5, 21, 48), max tave = 1.62 (a second volume extends from the lateral frontal volume across the crest of the frontal lobe, BAs 6, 8, 9)

Superior premotor
  Tool names vs. tone sequences: 527 μl, BA 6, 8 (−35, 17, 42), max tave = 1.74
  Animal names vs. tone sequences: 1106 μl, BA 6, 8 (−26, 20, 53), max tave = 2.17

Other inferior temporal
  Animal names vs. tone sequences: 1101 μl, BA 37 (−48, −57, −10), max tave = 1.97

Angular gyrus
  Emotional connotation vs. tone sequences: 3236 μl, BA 39 (−49, −61, 26), max tave = 2.72
  Tool names vs. tone sequences: 7619 μl, BA 19, 39, 40 (−49, −64, 31), max tave = 2.83 (extends from peak in angular gyrus to occipital cortex, BA 19)
  Animal names vs. tone sequences: 603 μl, BA 39 (−50, −61, 26), max tave = 1.89

Occipitoparietal
  Animal names vs. tone sequences: 257 μl, BA 19 (−33, −66, 40), max tave = 1.70

Posterior cingulate
  Emotional connotation vs. tone sequences: 397 μl, BA 23 (−4, −42, 27), max tave = 1.76
  Tool names vs. tone sequences: 316 μl, BA 30 (−8, −54, 11), max tave = 1.64

Tone Monitoring > Semantic Monitoring

SMA
  Emotional connotation vs. tone sequences: 308 μl, BA 6 (−4, −1, 62), max tave = −1.52

Motor
  Tool names vs. tone sequences: 324 μl, BA 4 (−57, 1, 17), max tave = −1.99; 309 μl, BA 4, 44 (−56, 7, 6), max tave = −1.58

Superior premotor
  Emotional connotation vs. tone sequences: 1036 μl, BA 6, 4 (−42, −6, 57), max tave = −2.04
  Tool names vs. tone sequences: 839 μl, BA 6, 4 (−42, −6, 57), max tave = −1.99
  Animal names vs. tone sequences: 1106 μl, BA 6, 4 (−42, −7, 57), max tave = −2.17

Temporal
  Emotional connotation vs. tone sequences: 412 μl, BA 42, 22 (−63, −35, 19), max tave = −1.69
  Tool names vs. tone sequences: 702 μl, BA 42, 22 (−64, −33, 19), max tave = −1.82; 269 μl, BA 41, 42, 22 (−50, −6, 5), max tave = −1.75
  Animal names vs. tone sequences: 725 μl, BA 42, 22 (−63, −33, 20), max tave = −2.24

BA = putative Brodmann’s Area (according to Talairach & Tournoux, 1988); max tave = maximum tave within given cluster of activity.
Fig. 1. Significant activity (p < .0001) along crest of the left frontal lobe, including cortex near frontal pole, for
emotional connotation vs. tones (upper left) and for implement names vs. tones (lower right). Significant activity is in
black. No significant activity was present for animal names vs. tones (not shown). Anatomical underlay is averaged
across 16 subjects. Left two images in each montage are sagittal planes, and right two images are coronal planes. L =
distance in mm to left of midline; A = distance in mm anterior to anterior commissure.
of premotor cortex. Second, activity extended from its maximum functional intensity in the fusiform gyrus into the
inferior temporal gyrus for monitoring of implement names;
activity in the inferior temporal gyrus was a separate volume of activity for monitoring of animal names. Third, activity extended from its maximum functional intensity in
the angular gyrus to occipital cortex (BA 19) for monitor-
ing of implement names; activity in this latter area was a
separate volume for monitoring of animal names.
The tone monitoring task also evoked a few areas of significant activity. First, compared to each semantic monitoring task, the tone monitoring task produced a significant
activity cluster in the superior temporal gyrus close to primary auditory cortex. Second, compared to each semantic
Fig. 2. Significant activity (p < .0001) in anterior temporal lobe for emotional connotation vs. tones (upper left) and
for implement names vs. tones (lower right). Significant activity is in black. No significant activity was present for
animal names vs. tones (not shown). Anatomical underlay is averaged across 16 subjects. Left two images in each
montage are sagittal planes; right two images are coronal planes. L = distance in mm to left of midline; P = distance
in mm posterior to anterior commissure.
monitoring task, the tone monitoring task produced a significant cluster in superior premotor cortex. For both the superior temporal and the superior premotor clusters, the local
maxima are in almost exactly the same location across comparisons with all three semantic monitoring tasks. The tone
monitoring task produced two unique areas of activity in
motor cortex when compared to the implement monitoring
task, and one unique area of activity in SMA when compared to monitoring of words with emotional connotations.
Comparison of Monitoring Words with
Emotional Connotations to Other
Experimental Tasks
Comparisons among semantic monitoring tasks also were
made by contrasting semantic monitoring minus tone monitoring signal intensity differences from one word monitoring task to the signal intensity differences from another
semantic monitoring minus tone monitoring comparison.
Fig. 3. Significant activity (p < .0001) in the fusiform gyrus (posterior inferior temporal lobe) for emotional connotation vs. tones (upper right), for animal names vs. tones (lower left), and for implement names vs. tones (lower right).
Black arrows point to this activity, which is represented in black. Anatomical underlay is averaged across 16 subjects.
Left two images in each montage are sagittal planes; right two images are coronal planes. L = distance in mm to left of
midline; P = distance in mm posterior to anterior commissure.
Similar to other analyses, tave was computed by averaging t
values from individual subject images. As in similar studies
(e.g., Martin et al., 1996), statistical criteria were relaxed
somewhat (probability for average t ≤ .005, volume of activity > 200 μl) to increase sensitivity for comparisons that
are more closely matched in their cognitive components.
All clusters meeting these criteria are listed in Table 3. Since
the criteria were relaxed, we focus primarily on the experimental hypothesis that monitoring of words with emotional connotations more intensely involves cortex in the
anterior portions of the left frontal and temporal lobes than
monitoring of animal or implement names.
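The joint intensity and spatial-extent criterion used throughout these analyses (a voxelwise average t across subjects, thresholded, with only clusters exceeding a minimum volume retained) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the threshold value, voxel volume, and the `surviving_clusters` helper are assumptions for demonstration only.

```python
import numpy as np
from scipy import ndimage

def surviving_clusters(t_maps, t_thresh, voxel_ul, min_volume_ul=200.0):
    """Average per-subject t maps, threshold the average, and keep only
    connected clusters whose volume exceeds min_volume_ul (microliters),
    mirroring the paper's joint intensity/extent criterion. The threshold
    and voxel volume are inputs because the true values depend on the
    statistical criterion and acquisition parameters."""
    t_ave = t_maps.mean(axis=0)                 # voxelwise average t across subjects
    above = t_ave > t_thresh                    # intensity criterion
    labels, n = ndimage.label(above)            # connected-component clustering
    clusters = []
    for i in range(1, n + 1):
        vol_ul = (labels == i).sum() * voxel_ul
        if vol_ul > min_volume_ul:              # spatial-extent criterion
            clusters.append((i, vol_ul, float(t_ave[labels == i].max())))
    return clusters

# Illustrative use with synthetic data: 16 "subjects", one embedded active region.
rng = np.random.default_rng(0)
maps = rng.normal(0.0, 1.0, size=(16, 20, 20, 20))
maps[:, 5:9, 5:9, 5:9] += 3.0                   # a 4 x 4 x 4-voxel active region
found = surviving_clusters(maps, t_thresh=1.5, voxel_ul=10.0)
```

Noise-only clusters are either below the intensity threshold or too small to pass the 200-μl extent criterion, so only the embedded region survives.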
When monitoring of words with emotional connotations
was compared to monitoring of implement names in this
fashion, two areas of activity arose in the anterior left
frontal lobe (Figure 4). The maximum intensity of the
volume closest to the frontal pole (Talairach coordinates
for max tave = −13, 63, 29) was within 2 mm or less in
every plane to the maximum intensity of a similar volume
found in comparison of generation of words with emotional connotations to generation of emotionally neutral
words (Crosson et al., 1999). The second anterior frontal
volume for this comparison was somewhat posterior and
superior to the first. When monitoring of words with emotional connotations was compared to monitoring of animal
names, one volume of activity in the more anterior portion
of the left frontal lobe emerged (Figure 4); this volume of
activity subsumed both activity volumes from the emotional connotation versus implement name comparison. The
fact that activity in the more anterior aspect of the left
frontal lobe was greater for monitoring words with emotional connotation than for monitoring either animal names
or implement names is evident from the average time
series plotted in Figure 4. Monitoring of words with emotional connotations did not produce a significant volume
of activity in the anterior temporal lobe compared either
to monitoring of implement names or to monitoring of
animal names.
Table 3. Volumes of activity change > 200 μl, tave p < .005. Each entry gives the cluster volume in μl, putative anatomic area, Talairach location of the maximum tave, and max tave. Positive max tave indicates greater activity for the first-named task; negative max tave indicates greater activity for the second.

Monitor emotional connotation vs. monitor implement names
  Emotional connotation > implement names
    Cortex near frontal pole: 268 μl, BA 9, 10 (−13, 63, 29), max tave = 1.30; 842 μl, BA 8, 9 (−8, 53, 45), max tave = 1.22
    Other frontal: 205 μl, BA 47 (−50, 34, −4), max tave = 1.04
  Implement names > emotional connotation
    Lateral frontal: 237 μl, BA 8, 9 (−43, 31, 28), max tave = −1.09
    Premotor: 2245 μl, BA 6 (−20, 11, 52), max tave = −1.25; 751 μl, BA 6, 8 (−49, 8, 29), max tave = −1.16
    Temporal: 1460 μl, BA 21, 37 (−59, −51, −3), max tave = −1.59
    Occipitoparietal: 1552 μl, BA 40, 19 (−34, −78, 36), max tave = −1.62

Monitor emotional connotation vs. monitor animal names
  Emotional connotation > animal names
    Cortex near frontal pole: 3019 μl, BA 8, 9, 10 (−11, 50, 46), max tave = 1.70
    Other frontal: 488 μl, BA 47 (−52, 33, −1), max tave = 1.24
  Animal names > emotional connotation
    Temporal: 365 μl, BA 21, 37 (−53, −52, −1), max tave = −1.09
    Parietal: 206 μl, BA 7 (−6, −68, 48), max tave = −1.26

Monitor implement names vs. monitor animal names
  Implement names > animal names
    Premotor: 1430 μl, BA 6, 8 (−19, 25, 53), max tave = 1.31
  Animal names > implement names
    Insula: 589 μl, insula (−40, −40, 42), max tave = −1.13

BA = putative Brodmann’s Area (according to Talairach & Tournoux, 1988); max tave = maximum tave within given cluster of activity.
One other region of activity is worth noting. Monitoring
of implement names produced a volume of significant activity in the superior premotor cortex compared both to
monitoring of animal names and to monitoring of words
with emotional connotations (Figure 4). The fact that activity in the superior premotor cortex was greater for monitoring implement names than for monitoring either animal
names or words with emotional connotations also can be
discerned from the average time series plotted in Figure 4.
This volume is important because it demonstrates that activity along the crest of the left frontal lobe follows an
anterior to posterior gradient for monitoring of words with
emotional connotations versus monitoring of implement
names. Although both tasks demonstrate activity in both
regions compared to tone monitoring, monitoring of words
with emotional connotations evoked greater activity near
the frontal pole while monitoring of implement names
evoked greater activity in superior premotor cortex.
DISCUSSION
This study was designed as a follow-up to our earlier study
on generation of words with emotional connotations (Crosson et al., 1999) and was based on the theory that a phonological input lexicon, a phonological output lexicon, and a
structural (visual) description system all have separate neural representations which must access semantic information
to assign meaning to these representations (e.g., Ellis & Young,
1988). Under this theory, a neural mechanism involved in
processing the semantic attribute of emotional connotation
would be active whether the task involved input through the
phonological input lexicon or the structural description
Fig. 4. Sagittal and coronal images in the upper left show significant activity in black (p < .005) near the left frontal
pole for emotional connotation minus tones > implement names minus tones (top) and for emotional connotation
minus tones > animal names minus tones (bottom). Anatomical underlay is averaged across the 16 subjects. The time
series in the upper right were derived from the large area near the frontal pole in the emotional connotation minus
tones > implement names minus tones comparison. A mask of this area was created, voxels within the mask were
averaged into a single time series for each individual participant, the resulting time series was subjected to slight
temporal smoothing, and this single average time series was averaged across participants. Sagittal and coronal images
in the lower right demonstrate activity in white (p < .005) in the left superior premotor cortex for implement names
minus tones > emotional connotation minus tones (top) and for implement names minus tones > animal names minus
tones (bottom). The time series in the lower left were derived from the area in superior premotor cortex in the
implement names minus tones > emotional connotation minus tones comparison. For all time series graphs in this
figure, the x axis represents eight 56-s cycles alternating between semantic monitoring and tone monitoring, and the y
axis is scaled to represent a 1.4% change in signal intensity from the bottom to the top of the graph. The asterisks
indicate the time series for which semantic monitoring minus tone monitoring differences are significantly greater than
for the other tasks. L = distance in mm to the left of midline; A = distance in mm anterior to anterior commissure.
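The region-of-interest time-series procedure described in the caption (mask the cluster, spatially average the masked voxels for each participant, smooth slightly, then average across participants) can be sketched as below. This is a schematic under stated assumptions: the 3-point moving-average kernel stands in for the unspecified "slight temporal smoothing," and `roi_time_series` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def roi_time_series(subject_volumes, mask, kernel=(0.25, 0.5, 0.25)):
    """For each subject, average all voxels inside the ROI mask at each
    time point, lightly smooth the resulting series with a small moving
    average, then average the series across subjects."""
    series = []
    for vol in subject_volumes:                    # vol shape: (x, y, z, time)
        ts = vol[mask].mean(axis=0)                # spatial average within the mask
        ts = np.convolve(ts, kernel, mode="same")  # slight temporal smoothing
        series.append(ts)
    return np.mean(series, axis=0)                 # grand average across subjects

# Illustrative use: two synthetic "subjects" with constant signal levels.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
subjects = [np.ones((4, 4, 4, 8)), 3.0 * np.ones((4, 4, 4, 8))]
group_ts = roi_time_series(subjects, mask)
```

Because the smoothing kernel sums to one, interior time points of a constant series are unchanged, and the grand average falls midway between the two subjects' signal levels.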
system or involved accessing the phonological output lexicon. Anterior left frontal cortex previously has been shown
to be active when processing pictures with emotional content (Lane et al., 1997; Paradiso et al., 1999), and when accessing the phonological output lexicon during production
of words with emotional connotation (Crosson et al., 1999).
Anterior left temporal cortex previously has shown activity
when processing pictures with positive emotional content
(Lane et al., 1999), and when accessing the phonological output lexicon during production of words with emotional connotation (Crosson et al., 1999). Therefore, if words with
emotional connotations presented via the phonological input lexicon activated anterior left frontal and temporal cortices in comparison to emotionally neutral words, the inference
that these areas are involved in processing the semantic
attribute of emotional connotation would be strengthened.
Thus, this study was designed to ascertain if processing
of words with emotional connotations, presented via the
auditory modality, would evoke activity in the more anterior portions of the left frontal and temporal lobes, thereby
confirming the involvement of these cortices in semantic
processing of emotional connotation. Further, emotional connotation was hypothesized to be a semantic attribute that
cuts across traditional semantic categories. For example,
both living and nonliving things may have emotional connotations (e.g., roach and coffin, respectively). To assess
this aspect of emotional connotation as a semantic attribute,
words with emotional connotations were chosen from both
living and nonliving categories, and semantic monitoring
of these words was compared to semantic monitoring of
emotionally neutral words from two categories, animal names
and implement names. Cortex near the left frontal pole was
active when processing words with emotional connotations
was compared to the tone monitoring control task, monitoring of implement names, or monitoring of animal names,
suggesting it is involved in semantic processing of emotional connotation. Cortex in the anterior temporal lobe was
active only when monitoring of words with emotional connotations was compared to the tone monitoring control task,
leaving some doubt as to whether it is involved in processing emotional connotation as a semantic attribute, or whether
some other factor accounts for its activity in our previous
study (Crosson et al., 1999).
An alternative strategy to assess involvement of the anterior left frontal cortex in processing the semantic attribute
of emotional connotation might have been to give subjects
words with emotional connotations in both a semantic monitoring task and a phonological monitoring task. If anterior
left frontal cortex were active during the semantic but not
the phonological task, then it would be involved in semantic processing. Such semantic processing could be shown to
be exclusive for emotional connotation if words from other
categories were subjected to similar comparisons and did
not activate the cortex in question. We chose not to use
such a strategy for two reasons: First, previous work by
Price et al. (1996) has shown that processing of real words
activates areas not activated by processing pronounceable
nonwords when words and nonwords are processed in an
identical task involving neither lexical nor semantic features. Thus, it may be difficult not to process lexical–
semantic features of real words, even when tasks do not
require such processing. This may be particularly true for
words with emotional connotations because of the salience
of this property, and would weaken or negate the ability to
image cortex involved in processing emotional connotation. Second, in the current study, the direct comparison of
semantic monitoring of words with emotional connotation
to semantic monitoring of animal and implement names
was used to control for areas involved in more general aspects of semantic processing.
Limitations of the current study should be mentioned.
For example, the characteristics that were chosen for each
semantic monitoring task were carefully selected to facilitate
the comparison of monitoring words with emotional
connotations to monitoring animal and tool names. However, within the design of the current study, it can be argued
that the type of word being monitored (i.e., words with
emotional connotations vs. words from the categories of
animals or implements) cannot be separated from the characteristics being monitored (i.e., “evokes a negative response and is nonliving” vs. “has more than two legs and is
land dwelling” or “is used with one hand and is used outdoors”). Nevertheless, when this study is considered in the context of the previous studies of word generation (Crosson
et al., 1999) and viewing pictures (Lane et al., 1997; Paradiso et al., 1999), a strong case can be built for emotional
connotation of words as the semantic attribute responsible
for activity in the more anterior portion of the left frontal
lobe. Although type of word and characteristic being monitored cannot be separated in the current study, no such
problem existed in the previous study of word generation
(Crosson et al., 1999). In that study, subjects simply generated words from a given category; some categories were
chosen to evoke words with emotional connotations, and
some were chosen to evoke emotionally neutral words. Cortex in the more anterior portion of the left frontal lobe also
has been active during viewing of pictures with emotional
content (Lane et al., 1997; Paradiso et al., 1999), and Beauregard et al. (1997) showed left anterior frontal activation
for passive viewing of words with emotional connotation
versus concrete words, even though no active processing of
words was required. Activity in the latter study was somewhat posterior to activity in the current study. Thus, since
no problem separating monitored characteristics from semantic attribute existed in these previous studies, we must
conclude that processing the attribute of emotional connotation is responsible for evoking activity in the anterior left
frontal lobe in these studies. The lack of anterior left frontal
activity in Maddock and Buonocore’s (1997) work probably relates to methodological differences. Subjects passively
listened to words, as opposed to actively processing them,
and activity only from threat-related words was compared to activity from emotionally neutral words.
A second limitation of the current study involves our
decision to use the nine slices available in our image acquisition sequence to cover the entire left hemisphere. While
this decision was based upon evidence of left-hemisphere
dominance for processing words with or without emotional
connotations (e.g., Ali & Cimino, 1997; Heilman et al.,
1993), the lack of information regarding right-hemisphere
activity precludes examining the role of the right hemisphere in processing words with emotional connotation.
Some data suggest the right hemisphere is dominant for
processing emotions (e.g., Gainotti, 1997; Heilman et al.,
1984; Ross, 1997), and other data suggest the right hemisphere plays a role in processing negative (as opposed to
positive) emotions (e.g., Ali & Cimino, 1997; Canli et al.,
1998). However, previous research has shown that making
linguistic inferences regarding emotional connotation is not
impaired in patients with right-hemisphere lesions (e.g.,
Blonder et al., 1991). Further, Cato (2001) recently conducted an f MRI study in which subjects generated words
with positive emotional connotations, words with negative
emotional connotations, and emotionally neutral words. In
comparison to generating emotionally neutral words, generating words with positive or negative emotional connotations engaged primarily left-hemisphere mechanisms. Only
for anterior frontal cortex and retrosplenial cortex did activity extend into the right hemisphere, and for both positive and negative emotional connotations, activity was far
more extensive in the left than the right hemisphere for
both areas. Thus, Cato’s (2001) findings indicate that normal participants rely primarily on left-hemisphere mechanisms for processing the emotional connotation of words,
and that the current findings regarding left-hemisphere activity during monitoring of emotional connotation would
not be greatly changed by information regarding right-hemisphere activity.
A feature in both our previous study (Crosson et al., 1999)
and the current study was that words with positive and negative emotional connotation were grouped together. As just
noted, Cato (2001) found similar patterns of
activity in anterior left frontal cortex for generating words
with both positive and negative emotional connotation. Thus,
cortex near the left frontal pole appears to be involved in
processing of emotional connotation of words without respect to valence. Additional areas might show activity if
processing of positive and negative connotations were segregated in a different experimental design. It also is possible that emphasis on monitoring for negative emotion in the
current study weakened activity in the anterior left temporal lobe compared to our previous study of word generation
(Crosson et al., 1999) since the left hemisphere is thought
to be more involved in processing positive than negative
emotion (Canli et al., 1998; Heilman et al., 1993).
Our secondary hypothesis that all semantic monitoring–tone monitoring comparisons would demonstrate activity
in the fusiform gyrus also was confirmed. This finding reflects the importance of visual form to semantic processing
of words in our tasks. Visual form has long been presumed
an important distinction in semantic processing of animals.
While implements are defined by function, they also are
recognized by visual form. Since words with emotional connotations were chosen for ease of visualization, visual form
was important in their identification as well. Activity in the
ventral visual stream of our subjects is consistent with recent findings demonstrating activity in the fusiform gyrus
during processing of animals and tools, whether during picture naming, semantic monitoring of words, or visual tasks
(Chao et al., 1999; Damasio et al., 1996; Ishai et al., 1999;
Martin et al., 1996).
Similarities between monitoring words with emotional
connotations and monitoring implement names should be
noted. Compared to tone monitoring, monitoring of both
words with emotional connotations and implement names
evoked activity along the crest of the left frontal lobe, including cortex in the left anterior frontal lobe, and activity
in the anterior temporal lobe. Lesion data of Lu et al. (2002)
and of Cappa et al. (1998) confirm left anterior temporal
involvement in naming implements as opposed to living
things. Although activity along the crest of the left frontal
lobe occurred for both conditions relative to tone monitoring, it should be noted that there was an anterior to posterior gradient differentiating the two tasks (Figure 4). In
cortex near the left frontal pole, which shows greater connectivity to limbic regions, activity for monitoring words
with emotional connotations was significantly stronger than
activity for monitoring implement names. Greater involvement of this cortex in processing emotional connotations is
consistent with access to information from the limbic system. In particular, cortex in the more anterior portions of
the frontal lobe, including Brodmann’s area 9, is connected
with paralimbic association areas, including the anterior
cingulate gyrus, the posterior cingulate gyrus, retrosplenial
cortex, and the posterior parahippocampal gyrus (Pandya
& Yeterian, 1985). In superior premotor cortex, however,
activity for monitoring implement names was significantly stronger than activity for monitoring words with emotional connotations. Greater involvement of this cortex in
processing implement names is consistent with the fact that
information about movement sequences is central to understanding the nature of specific implements.
Our data indicate that some cortical regions supporting
lexical–semantic processing cut across semantic category
and item attributes. An area around the angular gyrus and a
lateral frontal area encompassing Broca’s area were active
in all semantic monitoring-tone monitoring comparisons.
Binder et al. (1999) recently demonstrated that the angular
gyrus and an area in the dorsal frontal lobe demonstrated
more activity during semantic monitoring (animal names)
than phoneme monitoring. The angular gyrus activity in the
current study overlapped with that of Binder et al.; however, the dorsal frontal activity of Binder et al. was generally superior to the common activity of the current study.
Anomic aphasia often occurs with lesions of the angular gyrus
(Goodglass & Kaplan, 1983), and lesions in this general
region also may cause transcortical sensory aphasia (Alexander, 1997). Anomic aphasia also may be found with lesions
extending into Broca’s area from the cortex just anterior to
it; transcortical motor aphasia also may occur as a result of
lesions in this region (Alexander, 1997). Anomic aphasia appears to be due to an inability to link semantic information to
the corresponding lexical items during language output. Semantic components also are thought to exist for transcortical
sensory aphasia (Alexander, 1997) and transcortical motor
aphasia (McCarthy & Warrington, 1984). Thus, when examined in light of existing literature, our data suggest areas in
the left angular gyrus and left frontal lobe play a role in
coordinating semantic functions for language, perhaps linking lexical and semantic retrieval processes.
A brief discussion of the difference in monitoring animal
names versus implement names is in order. The major difference in direct comparison of these tasks was the greater
activity in the premotor cortex for monitoring implement
names than for monitoring animal names. The more inferior of our premotor activations for this comparison was
consistent with Martin et al.’s (1996) finding in a comparison of animal and tool naming. The fact that premotor
cortex is important in processing implement names lends
credence to the notion that knowledge about the movements involved in using implements is a central semantic
feature of these items. Indeed, Damasio and Tranel (1993)
found that lesions in this area affected action naming.
In conclusion, we return to the concept of emotional connotation as a semantic attribute. Findings from the current
study, especially in the context of previous studies (Beauregard et al., 1997; Crosson et al., 1999; Lane et al., 1997;
Paradiso et al., 1999), provide strong evidence that cortex
near the left frontal pole is involved in processing emotional connotation as a semantic attribute. Emotional connotation appears to cut across the traditional semantic
categories of living versus nonliving things and is largely
orthogonal to that distinction. In other words, cortex near
the left frontal pole appears to be important for processing
emotional connotation without respect to whether the item
being processed is a living or a nonliving thing. This result
is important not only for understanding how emotional connotation is processed but also because it illustrates
how attributes in general can be a dimension of semantic
processing that is orthogonal to semantic category. Although past studies (Mummery et al., 1998; Thompson-Schill et al., 1999) have addressed the issue of semantic
attributes, they have confounded the living–nonliving distinction with attributes processed in the ventral visual stream.
Since processing of living things is thought to be dependent
on processing in the ventral visual stream, these studies
have not been able to demonstrate that processing of a
semantic attribute may be independent from processing categories. Future research should continue to focus on distinctions between processing of semantic category and
semantic attribute.
ACKNOWLEDGMENTS
Supported by the McKnight Brain Institute of the University of Florida and Grant # R01-DC03455 from the NIDCD. Thanks to Jeffrey R.
Binder (Grant # R01-NS33576 from the NINDS) for providing the
semantic and tone monitoring tasks after which our tasks were
modeled. Thanks also to the NIMH Center for the Study of Emotion
and Attention (Grant # P50-MH52384; P.J. Lang, Director; Margaret M. Bradley; Bruce N. Cuthbert) for consultation, to Douglas
C. Noll for the spiral scan sequences, to Jeffrey R. Fitzsimmons for the head
coil, and to Wayne King for the sound transducer.
REFERENCES
Alexander, M.P. (1997). Aphasia: Clinical and anatomic aspects.
In T.E. Feinberg & M.J. Farah (Eds.), Behavioral neurology
and neuropsychology (pp. 133–149). New York: McGraw-Hill.
Ali, N. & Cimino, C.R. (1997). Hemispheric lateralization of perception and memory for emotional verbal stimuli in normal
individuals. Neuropsychology, 11, 114–125.
Beauregard, M., Chertkow, H., Bub, D., Murtha, S., Dixon, R., &
Evans, A. (1997). The neural substrate for concrete, abstract,
and emotional word lexica: A positron emission tomography
study. Journal of Cognitive Neuroscience, 9, 441–461.
Binder, J.R., Frost, J.A., Hammeke, T.A., Cox, R.W., Rao, S.M., &
Prieto, T. (1997). Human brain language areas identified by
functional magnetic resonance imaging. Journal of Neuroscience, 17, 353–362.
Binder, J.R., Frost, J.A., Hammeke, T.A., Bellgowan, P.S.F., Rao,
S.M., & Cox, R.W. (1999). Conceptual processing during the
conscious resting state: A functional MRI study. Journal of
Cognitive Neuroscience, 11, 80–93.
Blonder, L.X., Bowers, D., & Heilman, K.M. (1991). The role of
the right hemisphere in emotional communication. Brain, 114,
1115–1127.
Bradley, M.M., Cuthbert, B.N., & Lang, P.J. (1988). Affective Norms
for English Words (ANEW). Technical manual and affective
ratings. Gainesville, FL: University of Florida Center for
Research in Psychophysiology.
Canli, T., Desmond, J.E., Zhao, Z., Glover, G., & Gabrieli, J.D.E.
(1998). Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport, 9, 3233–3239.
Cappa, S.F., Frugoni, M., Pasquali, P., Perani, D., & Zorat, P.
(1998). Category-specific naming impairment for artefacts: A
new case. NeuroCase, 4, 391–397.
Caramazza, A. & Shelton, J.R. (1998). Domain-specific knowledge systems in the brain: The animate–inanimate distinction.
Journal of Cognitive Neuroscience, 10, 1–34.
Cato, M.A. (2001). Emotional connotation as a semantic attribute:
A whole brain fMRI study of emotional valence and sex effects
on word generation. Unpublished doctoral dissertation, University of Florida, Gainesville.
Chao, L.L., Haxby, J.V., & Martin, A. (1999). Attribute-based neural substrates in temporal cortex for perceiving and knowing
about objects. Nature Neuroscience, 2, 913–919.
Cox, R.W. (1996). AFNI: Software for analysis and visualization
of functional magnetic resonance neuroimages. Computers and
Biomedical Research, 29, 162–173.
Crosson, B., Radonovich, K., Sadek, J.R., Gökçay, D., Bauer, R.M.,
Fischler, I.S., Cato, M.A., Maron, L., Auerbach, E.J., Browd,
S.R., & Briggs, R.W. (1999). Left-hemisphere processing of
emotional connotation during word generation. NeuroReport,
10, 2449–2455.
Damasio, H., Grabowski, T.J., Tranel, D., Ponto, L.L.B., Hichwa,
R.D., & Damasio, A.R. (1999). Neural correlates of retrieval
of words for spatial relationships. NeuroImage, 9, S918.
Damasio, A.R. & Tranel, D. (1993). Nouns and verbs are retrieved
with differently distributed neural systems. Proceedings of the
National Academy of Sciences, USA, 90, 4957–4960.
Demonet, J.-F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous,
J.L., Wise, R., Rascol, A., & Frackowiak, R.S.J. (1992). The
anatomy of phonological and semantic processing in normal
subjects. Brain, 115, 1753–1768.
Ellis, A.W. & Young, A.W. (1988). Human cognitive neuropsychology. London: Erlbaum.
Ferreira, C.T., Giusiano, B., & Poncet, M. (1997). Category-specific anomia: Implication of different neural networks in
naming. NeuroReport, 8, 1595–1602.
Fisher, R.A. & Cornish, E.A. (1960). The percentile points of
distributions that have known cumulants. Technometrics, 2,
209–226.
Francis, W.N. & Kucera, H. (1982). Frequency analysis of
English usage: Lexicon and grammar. Boston: Houghton
Mifflin.
Gainotti, G. (1997). Emotional disorders in relation to unilateral
brain damage. In T.E. Feinberg & M.J. Farah (Eds.), Behavioral neurology and neuropsychology (pp. 691–698). New York:
McGraw-Hill.
Goodglass, H. & Kaplan, E. (1983). The assessment of aphasia
and related disorders. Philadelphia: Lea & Febiger.
Hart, J. & Gordon, B. (1992). Neural subsystems for object knowledge. Nature, 359, 60–64.
Heilman, K.M., Bowers, D., Speedie, L., & Coslett, H.B. (1984).
Comprehension of affective and nonaffective prosody. Neurology, 34, 917–921.
Heilman, K.M., Bowers, D., & Valenstein, E. (1993). Emotional
disorders associated with neurological diseases. In K.M. Heilman & E. Valenstein (Eds.), Clinical neuropsychology (3rd
ed., pp. 461–497). New York: Oxford University Press.
Hillis, A.E. & Caramazza, A. (1991). Category-specific naming
and comprehension impairment: A double dissociation. Brain,
114, 2081–2094.
Ishai, A., Ungerleider, L.G., Martin, A., Schouten, J.L., & Haxby,
J.V. (1999). Distributed representation of objects in the human
ventral visual pathway. Proceedings of the National Academy
of Sciences, USA, 96, 9379–9384.
Lane, R.D., Reiman, E.M., Bradley, M.M., Lang, P.J., Ahern, G.L.,
Davidson, R.J., & Schwartz, G.E. (1997). Neuroanatomic correlates of pleasant and unpleasant emotion. Neuropsychologia,
35, 1437–1444.
Lane, R.D., Chua, P.M.-L., & Dolan, R.J. (1999). Common effects
of emotional valence, arousal and attention on neural activation during visual processing of pictures. Neuropsychologia,
37, 989–997.
Lu, L.H., Crosson, B., Nadeau, S.E., Heilman, K.M., Rothi, L.J.G.,
Raymer, A., Gilmore, R.L., Bauer, R.M., & Roper, S.N. (2002).
Category-specific naming deficits for objects and actions: Semantic attribute and grammatical role hypotheses. Neuropsychologia, 40, 1608–1621.
Maddock, R.J. & Buonocore, M.H. (1997). Activation of left
posterior cingulate gyrus by the auditory presentation of
threat-related words: An f MRI study. Psychiatry Research, 75,
1–14.
Martin, A., Wiggs, C.L., Ungerleider, L.G., & Haxby, J.V. (1996).
Neural correlates of category-specific knowledge. Nature, 379,
649–652.
McCarthy, R.A. & Warrington, E.K. (1984). A two-route model
of speech production: Evidence from aphasia. Brain, 107,
463–486.
McCarthy, R.A. & Warrington, E.K. (1988). Evidence for modality-specific meaning systems in the brain. Nature, 334, 428–430.
Mummery, C.J., Patterson, K., Hodges, J.R., & Price, C.J. (1998).
Functional neuroanatomy of the semantic system: Divisible by
what? Journal of Cognitive Neuroscience, 10, 766–786.
Noll, D.C., Cohen, J.D., Meyer, C.H., & Schneider, W.J. (1995).
Spiral k-space MR imaging of cortical activation. Journal of
Magnetic Resonance Imaging, 5, 49–56.
Oldfield, R.C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.
Pandya, D.N. & Yeterian, E.H. (1985). Architecture and connections of cortical association areas. In A. Peters & E.G. Jones
(Eds.), Cerebral cortex. Vol. 4: Association and auditory cortices (pp. 3–61). New York: Plenum Press.
Paradiso, S., Johnson, D.L., Andreasen, N.C., O’Leary, D.S., Watkins, G.L., Boles Ponto, L.L., & Hichwa, R.D. (1999). Cerebral blood flow changes associated with attribution of emotional
valence to pleasant, unpleasant, and neutral visual stimuli in a
PET study of normal subjects. American Journal of Psychiatry, 156, 1618–1629.
Price, C.J., Wise, R.J.S., & Frackowiak, R.S.J. (1996). Demonstrating the implicit processing of visually presented words and
pseudowords. Cerebral Cortex, 6, 62–70.
Ross, E.D. (1997). The aprosodias. In T.E. Feinberg & M.J. Farah
(Eds.), Behavioral neurology and neuropsychology (pp. 699–
710). New York: McGraw-Hill.
Sacchett, C. & Humphreys, G.W. (1992). Calling a squirrel a squirrel but a canoe a wigwam: A category-specific deficit for artefactual objects and body parts. Cognitive Neuropsychology, 9,
73–86.
Talairach, J. & Tournoux, P. (1988). Co-planar stereotaxic atlas of
the human brain. New York: Thieme Medical Publishers.
Thompson-Schill, S.L., Aguirre, G.K., D’Esposito, M., & Farah,
M.J. (1999). A neural basis for category and modality specificity of semantic knowledge. Neuropsychologia, 37, 671–676.
Toglia, M.P. & Battig, W.F. (1978). Handbook of semantic word
norms. Hillsdale, NJ: Lawrence Erlbaum Associates.
Tranel, D., Damasio, H., & Damasio, A.R. (1997). A neural basis
for the retrieval of conceptual knowledge. Neuropsychologia,
35, 1319–1327.
Ungerleider, L.G. & Mishkin, M. (1982). Two cortical visual systems. In D.J. Ingle, M.A. Goodale, & R.J.W. Mansfield (Eds.),
The analysis of visual behavior (pp. 549–586). Cambridge,
MA: MIT Press.
Warrington, E.K. & McCarthy, R. (1983). Category specific access dysphasia. Brain, 106, 859–878.
Warrington, E.K. & Shallice, T. (1984). Category specific semantic impairments. Brain, 107, 829–854.