J Nonverbal Behav (2012) 36:23–37
DOI 10.1007/s10919-011-0120-7
ORIGINAL PAPER
Secrets and Lies: Involuntary Leakage in Deceptive
Facial Expressions as a Function of Emotional Intensity
Stephen Porter • Leanne ten Brinke • Brendan Wallace
Published online: 9 October 2011
© Springer Science+Business Media, LLC 2011
Abstract Darwin (1872) hypothesized that some facial muscle actions associated with
emotion cannot be consciously inhibited, particularly when the to-be-concealed emotion is
strong. The present study investigated emotional "leakage" in deceptive facial expressions
as a function of emotional intensity. Participants viewed low or high intensity disgusting,
sad, frightening, and happy images, responding to each with a 5 s videotaped genuine or
deceptive expression. Each 1/30 s frame of the 1,711 expressions (256,650 frames in total)
was analyzed for the presence and duration of universal expressions. Results strongly
supported the inhibition hypothesis. In general, emotional leakage lasted longer in both the
upper and lower face during high-intensity than during low-intensity masked
expressions. High intensity emotion was more difficult to conceal than low intensity
emotion during emotional neutralization, leading to a greater likelihood of emotional
leakage in the upper face. The greatest and least amount of emotional leakage occurred
during fearful and happiness expressions, respectively. Untrained observers were unable to
discriminate real and false expressions above the level of chance.
Keywords Universal emotions · Facial expression · Deception · Emotional intensity
Introduction
During his 1938 Munich meeting with Hitler, British Prime Minister Neville Chamberlain
scrutinized Hitler’s face as he swore that he would not invade Czechoslovakia. After the
meeting, Chamberlain infamously reported, "I got the impression that here was a man who
could be relied upon when he had given his word" (see Porter and ten Brinke 2010; Ekman
1985/2001). How could he have made such a cataclysmic error in misreading Hitler’s
deceptive face?
S. Porter (corresponding author) · L. ten Brinke · B. Wallace
Centre for the Advancement of Psychological Science and Law, Department of Psychology,
University of British Columbia, Kelowna, BC V1V 1V7, Canada
e-mail: [email protected]
The complex musculature of the human face and its direct relation with affective
processes of the brain make it a rich canvas upon which humans communicate their
emotional states and from which we infer those of others. Accordingly, in daily life we
read the faces of intimates and strangers to make inferences about their emotions and
intentions, and adopt expressions ourselves to communicate sincerely or insincerely how
we are feeling. Accurately assessing the nature and veracity of others’ facial expressions is
a serious business in everyday life (e.g., Hess and Bourgeois 2010) and applied settings. In
the context of personal relationships, the presence of concealed but subtly communicated
negative emotional expressions is highly predictive of divorce (Gottman et al. 2001).
Further, the identification of falsified emotional information is crucial in many applied
contexts in society, including the courts, parole hearings, politics, and corporations.
The Supreme Court of Canada (R. v. B. (K. G.) 1993) concluded that judges and jurors
must view a witness to "adequately evaluate body language, facial expressions and other
indicators of credibility" and that credibility assessment is "common sense." In a recent
landmark Canadian case R v N.S. (2010), the Ontario Court of Appeal—in deciding
whether to permit a Muslim complainant to wear her face-covering niqab during testimony—concluded that: "Covering the face of a witness may impede cross-examination in
two ways. First, it limits the trier of fact’s ability to assess the demeanour of the witness.
Demeanour is relevant to the assessment of the witness’s credibility and the reliability of
the evidence given by that witness. Second, witnesses do not respond to questions by words
alone. Non-verbal communication can provide the cross-examiner with valuable insights"
(p. 54). In sentencing and parole hearings, the perceived credibility of an offender’s
emotional display/remorse informs decision-making (ten Brinke et al. 2011). Further,
travelers’ faces are scrutinized by airport security staff trained in detecting concealed
emotions and intentions (see Porter and ten Brinke 2010).
The foundation of this evaluative process originates in our evolutionary past; the basic
discrimination of friend and foe likely was one of the earliest interpersonal judgments to
evolve (e.g., Williams and Mattingley 2006). Sometimes emotion is "written all over the
face", a salient and powerful representation of one's affective state (Matsumoto 2007;
Matsumoto and Willingham 2009). However, the evolutionary development of interpersonal deception—partially accomplished by altering or inhibiting an expression normally
accompanying the emotion—magnified the complexity of interpreting facial expressions.
There are three major ways in which emotional facial expressions are intentionally
manipulated (Ekman and Friesen 1975): an expression is simulated when it is not
accompanied by any real emotion, masked when the expression corresponding to the felt
emotion is replaced by a falsified expression corresponding to a different emotion, or
neutralized when the expression of a genuine felt emotion is inhibited while the face
remains neutral. These deceptive displays can be highly convincing; observers often are
unable to discriminate genuine versus faked expressions (Porter and ten Brinke 2008;
Porter et al. 2010), despite high self-reported confidence in their assessments (Vrij and
Mann 2001).
Thus, as with Chamberlain, mistakes in reading faces occur frequently and sometimes
with major consequences. In corporate settings, white-collar criminals such as Bernard
Madoff often find easy victims by appearing trustworthy and empathetic and by displaying
convincing emotional masks. Porter et al. (2009) found that psychopaths were 2.5 times as
likely as their counterparts to be granted parole in National Parole Board hearings, a
pattern the authors attributed in part to their convincing emotional performances. Indeed,
psychopaths seem particularly proficient at stifling emotional facial inconsistencies (Porter
et al. 2011).
Although emotional performances often are thoroughly misinterpreted, emotional
deception may be expressed in subtle but reliable and perceptible ways that are sometimes
missed by untrained observers. This idea first was studied by Duchenne (1862/1990), who
examined the muscle actions of the smile. He noted that the common conceptualization of
a happiness expression is the contraction of the zygomatic major muscle, which upturns the
mouth corners. But when he electrically stimulated this muscle, the resulting smile did not
seem ‘‘genuine.’’ As it turned out, real happiness also involves the orbicularis oculi surrounding the eyes, which pull the cheek up while slightly lowering the brow (e.g., Ekman
et al. 1990).
Darwin (1872), of course, was interested in emotional expressions (Hess and Thibault
2009) and observed that, "A man when moderately angry, or even when enraged, may
command the movements of his body, but…those muscles of the face which are least
obedient to the will, will sometimes alone betray a slight and passing emotion" (p. 79). He
hypothesized that some facial muscle actions associated with strong emotion are beyond
voluntary control and cannot be completely inhibited. Further, he proposed that certain
facial muscles cannot be intentionally engaged during emotional simulation. Collectively,
these two propositions form the inhibition hypothesis (Ekman 2003a, 2009), a proposal
with major theoretical and applied implications but one that has hardly been examined. A
derivative idea long advocated by Ekman (Ekman 1985/2001; Ekman and O’Sullivan
2006; Haggard and Isaacs 1966) is that when an emotion is concealed or masked, the true
emotion is manifested as a micro-expression, a fleeting expression suppressed within 1/25–1/5
of a second, making it difficult to perceive with the naked eye. This hypothesis has been
accepted somewhat non-critically in the scientific community and media (e.g., Henig 2006;
Lipton 2006). In fact, although the validity of these hypotheses, and more generally the
idea that the face involuntarily communicates covert emotional states, are widely assumed,
they have been subjected to relatively little direct empirical scrutiny (Ekman et al. 1988,
1991; Porter and ten Brinke 2008; Stewart et al. 2009).
One might argue that the face is largely uncharted terrain in terms of the richness of the
information it may communicate relative to the empirical attention it has received. Porter
and ten Brinke (2008) investigated the nature of facial expressions accompanying genuine
and false/concealed emotions. They found that the involuntary leakage of emotions was, in
a way, ubiquitous—no participant was able to falsify emotions without such betrayals on at
least one occasion. Emotional arousal associated with this form of deception also was
revealed by changes in blink rate; masking one’s true feelings with a false emotion led to
increases in blink rate while holding a neutral "poker face" in response to emotional
stimuli led to decreased blink rate, relative to genuine expressions. Despite the presence of
cues to deceit indicated by these analyses, naïve judges could only discriminate genuine
versus deceptive expressions at a level slightly above chance (above chance for happiness
and disgust but at chance for sad and fearful expressions; see also Ekman et al. 1988 and
Frank et al. 1993 for discrimination of genuine vs. deceptive smiles). Similarly, Warren
et al. (2009)—utilizing Ekman and Friesen’s (1974) emotional fabrication paradigm—
found that performance for emotional lie detection was slightly but significantly above
chance (mean of 64.4%), while that for unemotional lie detection was significantly below
chance (36.1%). Indeed, this may have been related to the leakage of discordant emotions
during fabricated emotional lies. Other lie scenario variables also seem to influence the ability
to detect deceit; police officers are better at detecting high-stakes lies than low-stakes lies
(O’Sullivan et al. 2009). In general, however, while lie scenario or individual differences
may produce slight advantages over chance, deception detection accuracy remains sub-par
without empirically-based training.
Darwin’s (1872) hypothesis stated only that strong emotions would be difficult to
conceal or fabricate. No research has examined directly the intensity of emotion required
for ‘‘leakage’’ to occur in facial communication. Does emotional leakage occur only when
concealing powerful emotions, or is ‘‘more leakage’’ exhibited during such emotions
relative to lower intensity emotions? We hypothesize that powerful emotional states will be
relatively unique in leading to leakage in involuntary facial expressions, similar to an
involuntary reflex of the knee requiring contact with a minimum level of force. Emotional
leakage during high intensity deception specifically may relate in part to a reduction in
attentional resources. Gable and Harmon-Jones (2010) found that low intensity emotion
increases the breadth of one’s attention whereas high intensity narrows one’s attentional
focus. It is possible that experiencing and trying to inhibit or mask powerful emotions—
relative to weaker emotional states—require so much competing cognitive, physiological,
and affective effort that one's voluntary control is compromised or eliminated, resulting in
leakage.
The Present Study
While the interpretation of emotional facial expressions has been complicated by the
evolutionary development of interpersonal deception, it appears that certain aspects of the
face cannot be controlled. As such, the face has the potential to reveal hidden emotions
with objective analysis. However, little research has been conducted that investigates the
inhibition hypothesis and the secrets of the human face. A key remaining question is whether
emotional "leakage" occurs only for powerful emotions, or whether there is "more leakage" in such
emotions than in lower intensity emotions. The present study was the first to directly test
the proposition that intensity has an impact on the presence and duration of emotional
leakage during simulated, masked, and neutralized expressions. We tested the hypothesis
that concealed felt emotions must be relatively potent in order for "leakage" to appear
involuntarily on the face, and that less powerful emotional states will be easily concealed
or masked (i.e., under voluntary control).
Specifically, we predicted that more inconsistent emotional expressions would appear in
deceptive versus honest expressions, and a veracity by intensity interaction such that
inconsistent emotion would be least prevalent in high intensity genuine expressions and
most prevalent in high intensity masked expressions (replacing one’s emotion with a
different opposing emotion) followed by low intensity masked expressions, high intensity
neutralized expressions, and low intensity simulated expressions. Further, it was expected
that neutralizing high intensity emotions (trying to appear as though one is feeling no
emotion) would lead to greater emotional leakage than neutralizing low intensity emotion.
Because emotions generally involve muscles in both the upper and lower face, but some
muscles may be less controllable than others, emotional expressions in the upper (eyes and
forehead) and lower (nose and mouth) face were coded separately to allow for accurate
facial coding when, for example, the eyebrows lowered as in disgust, but the mouth leaked
genuine happiness (Rinn 1984).
We also examined observers’ accuracy at identifying high versus low intensity
emotional deception. Given our prediction that more leakage will occur in high intensity
emotional deception, it was hypothesized that observers would perform somewhat better
at detecting such deception (slightly better than chance as found in Porter and ten
Brinke 2008) and around the level of chance/guessing with low intensity emotional
deception.
Method
Participants
The primary sample consisted of N = 59 participants (19 male, 40 female;
M age = 20.26 years, SD = 3.62) attending a Western Canadian university. An additional
28 participants (21 female, 7 male; M age = 19.14 years, SD = 1.14) served as
untrained observers of the facial expressions; they were asked to assess the veracity of the
participants’ emotional expressions in real-time and it was anticipated that their presence
during the emotional displays would increase the realism of the experiment and the
motivation of the primary participants.
Materials
The images used to evoke emotion in this study were chosen from the International
Affective Picture System (IAPS; Lang et al. 1999). These images have been normed on
emotional valence, arousal, and the discrete emotion type that they evoke in viewers
(Mikels et al. 2005); images were selected based on these ratings. Images normed by Mikels
et al. (2005) as primarily evoking sadness, disgust, fear, contentment, and amusement (low
and high happiness, respectively) were considered for use as stimuli (i.e., images evoking a
combination of emotions, like sadness and disgust, were removed from the pool of potential
stimuli). High and low intensity emotional images then were chosen based on IAPS
(Libkuman et al. 2007) arousal norms (rated on a 1–9 Likert scale). The images were
selected to evoke either high or low intensity happiness, fear, disgust, or sadness, and
neutral/no emotion.1 For example, images ranged from a smiling baby in the high-intensity
happy condition to a severed hand in the high-intensity disgust condition. Analysis of these
images revealed significant differences in arousal ratings of high (M = 5.89, SD = 0.75)
and low (M = 4.40, SD = 0.86) intensity images, t(22) = -4.54, p < .01. Positive (high
and low intensity happiness) (M = 7.58, SD = 0.46), negative (high and low intensity
sadness, fear and disgust) (M = 3.27, SD = 0.84) and neutral (M = 4.82, SD = 0.21)
images were all significantly different on valence ratings, F(2,29) = 137.54, p < .01.
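To make the two-stage selection procedure concrete, a minimal Python sketch is given below. It is illustrative only, not the authors' materials: the CSV file, column names, and the arousal cutoff of 5.0 are assumptions; only the selection logic (single-emotion images per Mikels et al. 2005, then a low/high split on IAPS arousal norms) follows the text.

```python
# Illustrative sketch (not the authors' code) of the two-stage stimulus selection.
# The CSV file, column names, and arousal cutoff are hypothetical placeholders.
import pandas as pd

norms = pd.read_csv("iaps_norms.csv")  # assumed columns: image_id, category, arousal, valence

# Stage 1: keep images normed as evoking a single discrete emotion (Mikels et al. 2005),
# dropping blends such as sadness + disgust.
single_emotion = norms[norms["category"].isin(
    ["sadness", "disgust", "fear", "contentment", "amusement", "neutral"])]

# Stage 2: within each category, split the pool into low- vs. high-intensity stimuli
# using the IAPS arousal norms (1-9 scale); the 5.0 cutoff is an assumption.
def split_by_arousal(category_pool, cutoff=5.0):
    """Return (low_intensity, high_intensity) subsets of one emotion category."""
    low = category_pool[category_pool["arousal"] < cutoff]
    high = category_pool[category_pool["arousal"] >= cutoff]
    return low, high

low_fear, high_fear = split_by_arousal(single_emotion[single_emotion["category"] == "fear"])
```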
Procedure
A 27-inch monitor was used to display the images. While viewing emotional images,
participants were recorded using a 30-frame per second Sony HD camcorder. The design of
the room was such that the primary sample participants sat approximately one meter away
from the display on which they viewed a timed PowerPoint presentation of images. The
camcorder was situated directly behind the display to record the participant's face with a
direct field of view. The observer participant sat behind the camcorder, to the right of the participant's field of view.
Participants were run individually and presented with a slideshow of images that were
high or low in emotional intensity and varied in emotional valence.
Footnote 1: Mean arousal ratings for each emotion, by intensity, were as follows: high happiness (M = 5.40), low
happiness (M = 4.03), high sad (M = 5.86), low sad (M = 3.94), high fear (M = 6.61), low fear
(M = 6.17), high disgust (M = 6.46), low disgust (M = 4.25), and neutral (M = 3.04). Mean valence
ratings were: high happiness (M = 7.56), low happiness (M = 7.60), high sad (M = 2.30), low sad
(M = 4.00), high fear (M = 3.90), low fear (M = 4.03), high disgust (M = 2.31), low disgust (M = 3.28),
and neutral (M = 4.82).
They were instructed to produce facial expressions that were genuine (felt emotion will be expressed), simulated
(expressed emotion with no emotion felt), masked (felt emotion will be covered by
opposing emotional expression), or neutralized (despite presence of a felt emotion, no
emotion will be expressed on the face) while attending to the emotional image on the
screen. Participants were monitored by the naïve observers—who recorded dichotomously
whether they believed each expression observed was sincere or insincere—and were
videotaped (at a rate of 30 frames per second) for later analysis.
The emotional image slideshow consisted of 29 images. Each was displayed for 5 s with
a 5 s break between images to allow the participants to return to a neutral face. Images were
divided into sets, which were prefaced with instructions about how to respond to the
subsequently presented images. Five images were presented in each emotion set, excluding
the neutral set, which consisted of nine images. Participants were instructed to respond with
the same facial expression to each of the images in that set, while maintaining their gaze at
the screen. In each of the four 5-image sets, there was a high and low intensity emotional
image that was consistent with the expressed emotion (i.e., resulting in genuine expressions), a high and low intensity emotional image evoking an emotion opposite to what was
expressed (i.e., resulting in masked expressions), and a neutral image (i.e., resulting in a
simulated expression). For example, in the happiness set a participant would see an initial
screen stating "Please respond to the following five images with the expression of happiness". Participants then viewed high and low intensity happy (genuine) and sad
(masked) images, along with one neutral (simulated) image. Participants were asked to
conceal their felt emotions by adopting a neutral face in response to each of the nine images
(one high and low intensity happy, sad, fearful and disgusting image, in addition to one
neutral image) comprising the neutral set (i.e., neutralized expressions). The order in which
the sets were presented, and the order of the images within each set, was randomized.
Further, to confirm that the normed emotional stimuli indeed elicited the intended (genuine)
emotions, participants rated their own emotional experiences (valence and arousal) for each
image following the presentation of the image sets using 1 (not at all) to 7 (highly) scales.
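For illustration, the 29-trial slideshow structure described above could be laid out as in the following Python sketch. This is not the authors' presentation software: image identifiers are placeholders, and the opposite-emotion mapping beyond happy/sad is an assumption (the text states only that masked trials used images evoking an emotion opposite to the expressed one).

```python
# Sketch of the 29-trial slideshow structure (not the authors' software).
# Image identifiers are placeholders; the opposite-emotion mapping beyond
# happy <-> sad is an assumption.
import random

OPPOSITE = {"happy": "sad", "sad": "happy", "fear": "happy", "disgust": "happy"}

def build_expression_set(expressed):
    """One 5-image set: respond to every image with the `expressed` emotion."""
    opposite = OPPOSITE[expressed]
    trials = [
        {"image": f"{expressed}_high", "intensity": "high", "veracity": "genuine"},
        {"image": f"{expressed}_low", "intensity": "low", "veracity": "genuine"},
        {"image": f"{opposite}_high", "intensity": "high", "veracity": "masked"},
        {"image": f"{opposite}_low", "intensity": "low", "veracity": "masked"},
        {"image": "neutral", "intensity": "none", "veracity": "simulated"},
    ]
    for trial in trials:
        trial["expressed"] = expressed
    random.shuffle(trials)              # image order randomized within each set
    return trials

def build_neutralize_set():
    """Nine-image set: respond to every image with a neutral face."""
    trials = [{"image": "neutral", "intensity": "none", "veracity": "genuine_neutral"}]
    for emotion in ("happy", "sad", "fear", "disgust"):
        for intensity in ("high", "low"):
            trials.append({"image": f"{emotion}_{intensity}", "intensity": intensity,
                           "veracity": "neutralized"})
    for trial in trials:
        trial["expressed"] = "neutral"
    random.shuffle(trials)
    return trials

sets = [build_expression_set(e) for e in ("happy", "sad", "fear", "disgust")]
sets.append(build_neutralize_set())
random.shuffle(sets)                    # set order randomized
slideshow = [trial for s in sets for trial in s]
assert len(slideshow) == 29             # 4 x 5 expression trials + 9 neutralize trials
```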
Emotion Expression Analysis
Coding of the emotional expressions was conducted using a highly reliable coding procedure developed for Porter and ten Brinke’s (2008) study, founded on the Facial Action
Coding System (FACS; Ekman et al. 2002), with particular attention to upper and lower
face action units associated with each emotion (Emotion Facial Action Coding System;
EMFACS) and Pictures of Facial Affect which served as prototypical examples for each
emotion (POFA; a set of photographs of expressions depicting the universal emotions;
Ekman and Friesen 1976). For facial coding, each 1/30th-second frame of the videotaped
clips was analyzed (150 frames per 5-s clip) for the presence and duration (from onset to
offset) of the universal emotional expressions in the upper and lower facial regions (by two
trained "blind" analysts). The upper facial region corresponds to the eye and forehead
regions, and the muscles underlying the upper-face action units in the FACS; these muscles
include the frontalis, corrugator, orbicularis oculi, and procerus. The lower facial region
corresponds to the nose, mouth, cheek, and chin areas; the muscles involved include the
risorius, orbicularis oris, zygomatic major, and mentalis. Coders were blind to the veracity
of the emotions they were analyzing (i.e., whether participants were displaying genuine,
simulated, masked, or neutralized expressions), but aware of the emotions participants
intended to portray. They coded a total of 256,650 frames twice each (once focused on
upper face emotion and again for the lower face) in 1,711 expressions. Coding required
classifying the emotion exhibited in each facial region and recording the frame/time at
which these expressions began and ended. Inconsistent emotions lasting from 1/25 to 1/5th
of a second were recorded as "Ekman (1985/2001) micro-expressions".
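The bookkeeping implied by this coding scheme is sketched below in Python: onset and offset frames at 30 fps converted to seconds, with the 1/25-1/5 s micro-expression window applied. Function names are illustrative, not the authors' coding software.

```python
# Sketch of the duration bookkeeping implied by the coding scheme above
# (30 fps video; onset/offset frames per coded span). Function names are
# illustrative. Note that 1,711 clips x 150 frames = 256,650 frames,
# matching the total reported.
FPS = 30

def span_duration_seconds(onset_frame, offset_frame):
    """Duration of a coded inconsistent-emotion span, counting both endpoint frames."""
    return (offset_frame - onset_frame + 1) / FPS

def is_micro_expression(onset_frame, offset_frame):
    """Ekman-style micro-expression: roughly 1/25 to 1/5 of a second."""
    duration = span_duration_seconds(onset_frame, offset_frame)
    return 1 / 25 <= duration <= 1 / 5

# Example: an inconsistent expression coded from frame 40 to frame 44 of a 150-frame clip.
print(span_duration_seconds(40, 44))   # ~0.167 s
print(is_micro_expression(40, 44))     # True: falls within the 1/25-1/5 s window
```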
Training in this method involves facial musculature recognition, memorization of facial
action units associated with universal emotions, and identification of the seven universal
emotions. One of the analysts was previously trained for the Porter and ten Brinke (2008)
study. The second analyst was trained for two weeks in the coding procedure. Both coders
studied facial musculature, facial action units associated with the universal emotions, and
the identification of universal emotional expressions extensively. To facilitate training, we
created a detailed reference guide that includes numerous examples of each emotion
and the main muscle movements involved. Training with this reference guide was complemented by detailed study of the POFA and by practice/expertise with the FACS,
although the FACS was not formally used for coding. In addition, the coders reviewed
studies investigating facial actions involved in the universal emotional expressions (Kohler
et al. 2004; Suzuki and Naitoh 2003).
So that we could assess the coders’ knowledge level after training, coders viewed a slide
show of 50 faces from the POFA database and classified the emotion expressed in each
image. Additionally, they viewed 48 videos in a micro-expression task similar to that used
by Frank and Ekman (1997). Each of these videos included a 1/25-s glimpse of one still
picture of facial affect embedded within another, different expression, and coders were
required to classify the emotion in the micro-expression image. The two coders obtained
accuracy rates of 100 and 96% on the POFA task, and 98 and 96% on the micro-expression identification task. Finally, they practiced frame-by-frame video analysis of
emotional facial expressions by coding the video of a sample participant until they were
able to attain nearly perfect reliability.
Coding Reliability
To examine inter-rater reliability statistically, we had both coders analyze the complete
videos of nine participants (261 expressions or 39,150 frames, each coded twice—once for
the upper and once for the lower face emotional expression). Inter-rater reliability was at least
‘‘good’’ (as defined, e.g., by Cicchetti and Sparrow 1981, and Fleiss 1981) on all indices.
The coders demonstrated good reliability in coding the presence of inconsistent emotions
(i.e., any emotional expression discordant with the intended expression, not including
neutral/no emotion) and the duration of the displays, Kappa = .70, p < .001, 87.3%
agreement, and r(520) = .75, p < .001, respectively. The raters averaged 92.18%
(SD = 17.79) agreement on the number of inconsistent frames per expression. Disagreement in coding a frame as inconsistent was infrequent, occurring for an average of 12.28
(SD = 27.24) frames for the upper face and 11.14 (SD = 26.14) frames for the lower face,
out of 150 frames per expression (i.e., the coders agreed on an average of 137.72 and
138.86 frames for the upper and lower face, respectively).
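The reliability indices reported above (Cohen's kappa and percent agreement on frame-level presence codes, and a Pearson correlation on per-expression durations) could be computed as in the following sketch. The random arrays are stand-ins for the two coders' actual output, kept only to make the example runnable.

```python
# Hedged sketch of the reliability indices reported above. The random arrays are
# stand-ins for the coders' output (261 expressions x 2 face regions = 522 duration
# pairs, hence r(520); 39,150 frames).
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Frame-level presence of inconsistent emotion (1 = present, 0 = absent), per coder.
coder1_frames = rng.integers(0, 2, size=39150)
coder2_frames = rng.integers(0, 2, size=39150)
kappa = cohen_kappa_score(coder1_frames, coder2_frames)
pct_agreement = np.mean(coder1_frames == coder2_frames) * 100

# Per-expression duration of inconsistent emotion (frames out of 150), per coder.
coder1_durations = rng.integers(0, 151, size=522)
coder2_durations = rng.integers(0, 151, size=522)
r, p = stats.pearsonr(coder1_durations, coder2_durations)

print(f"kappa = {kappa:.2f}, agreement = {pct_agreement:.1f}%, r(520) = {r:.2f}, p = {p:.3f}")
```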
Results
Ensuring that Participants Experienced the Intended Emotion
Because participants were instructed to exhibit particular expressions in response to the
normed stimuli, we wanted to ensure that the intended genuine emotions were experienced.
As such, we had participants rate their own emotional reactions (valence and arousal) to
the images on 7 point scales. Indeed, analyses of mean ratings of emotional valence and
arousal replicated those conducted using IAPS norms. Analysis of participant ratings (on a
1–7 Likert scale) revealed significant differences in arousal ratings of high (M = 5.16,
SD = 0.50) and low (M = 4.37, SD = 0.52) intensity images, t(22) = -3.78, p < .01.
Positive (high and low intensity happiness) (M = 5.71, SD = 0.51), negative (high and
low intensity sadness, fear and disgust) (M = 2.30, SD = 0.65), and neutral (M = 4.06,
SD = 0.05) images were all significantly different on valence ratings, F(2,29) = 137.54,
ps < .01.
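A minimal sketch of this manipulation check is given below: an item-level t-test on arousal ratings for high versus low intensity images and a one-way ANOVA on valence across positive, negative, and neutral images. The data file and its columns are assumptions; only the structure of the tests follows the text.

```python
# Sketch of the manipulation check on participants' own ratings (not the authors' code).
# The CSV and columns are assumed: one row per image, with arousal and valence ratings
# (1-7) averaged across participants, plus intensity and category labels.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("participant_image_ratings.csv")
# assumed columns: image, category (positive/negative/neutral), intensity (high/low), arousal, valence

# Arousal: high vs. low intensity images (item-level test, analogous to the t(22) above).
high = ratings.loc[ratings["intensity"] == "high", "arousal"]
low = ratings.loc[ratings["intensity"] == "low", "arousal"]
print(stats.ttest_ind(low, high))

# Valence: positive vs. negative vs. neutral images (analogous to the F(2, 29) above).
groups = [ratings.loc[ratings["category"] == c, "valence"]
          for c in ("positive", "negative", "neutral")]
print(stats.f_oneway(*groups))
```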
Testing the Inhibition Hypothesis
Due to the within-subjects study design and the greater number of images in neutral,
relative to happy, sad, fear or disgust sets, the inhibition hypothesis was examined in two
independent sets of statistical tests. Specifically, the impact of emotional intensity on the
frequency and duration of emotional leakage was examined in genuine versus masked
expressions and in genuine neutral versus neutralized expressions, separately. Potential
differences in blink rate also were examined in genuine versus masked expressions and in
genuine neutral versus neutralized expressions, separately.
Inconsistent Emotional Leakage
Impact of Emotional Intensity in Genuine Versus Masked Expressions
First, a 4 (expressed emotion: happy, sad, fear, disgust) × 2 (veracity: genuine, masked) ×
2 (intensity: high, low)2 MANOVA was conducted to evaluate the effect of emotional
intensity on the presence of inconsistent emotion in the upper and lower face. A significant
multivariate main effect of expressed emotion was revealed, F(6,53) = 28.28, p < .01,
ηp² = .76. This effect was present for both the upper (F(3,174) = 19.68, p < .01, ηp² = .25)
and lower face (F(3,174) = 40.99, p < .01, ηp² = .41). An examination of means suggests
that inconsistencies were most likely to occur in fearful expressions (upper face: M = 0.48,
SE = 0.05; lower face: M = 0.53, SE = 0.05) and least likely to occur during expressions
of happiness (upper face: M = 0.08, SE = 0.02; lower face: M = 0.03, SE = 0.01).
A significant expressed emotion × veracity interaction also was present at the multivariate
level, F(6,53) = 2.70, p < .05, ηp² = .23. Follow-up univariate analyses revealed that this
effect held only for the upper face, F(3,174) = 4.05, p < .01, ηp² = .07. While veracity did
not affect the presence of inconsistent emotions for either fearful or disgusting expressions
(ps > .05), emotional leakage was more likely to occur in masked (M = 0.10, SE = 0.04)
than genuine (M = 0.05, SE = 0.02) expressions of happiness and in genuine (M = 0.37,
SE = 0.05) than masked (M = 0.24, SE = 0.05) displays of sadness. Lastly, while the
intensity × veracity interaction only approached significance, F(2,57) = 1.81, p = .17,
ηp² = .06, there was a significant multivariate expressed emotion × intensity × veracity
interaction, F(6,53) = 4.28, p < .01, ηp² = .33, which held for both the upper and lower
face, ps < .01. In general, an examination of means revealed that masking high intensity
emotions resulted in a greater likelihood of inconsistent emotion for expressions of
happiness and sadness, but did not lead to greater emotional leakage during masks of fear
or disgust.
Footnote 2: In all analyses, the order of images to which the participants were assigned was included as a between-subjects variable. The effect of this variable was never statistically significant, ps > .05, and was dropped from subsequent analyses.
In addition, the impact of veracity and emotional intensity on duration of inconsistent
emotional expressions was examined using a 4 (expressed emotion) × 2 (veracity) × 2
(intensity) MANOVA. Duration of inconsistent emotional expressions in the upper and
lower face, respectively, served as dependent variables. A main effect of expressed
emotion was present at the multivariate level (F(6,53) = 15.73, p < .01, ηp² = .64) and for
the upper and lower face independently, ps < .01. In general, the greatest amount of
emotional leakage occurred during fearful expressions (upper face: M = 35.09,
SE = 4.83; lower face: M = 46.10, SE = 6.14) while happiness expressions were the least
likely to include inconsistent emotions (upper face: M = 2.99, SE = 1.00; lower face:
M = 0.62, SE = 0.28). As the inhibition hypothesis would predict, our analysis revealed a
significant Intensity × Veracity interaction, F(2,57) = 4.28, p < .05, ηp² = .13. Further,
this interaction was significant in both the upper (F(1,58) = 6.74, p < .05, ηp² = .10) and
lower (F(1,58) = 4.49, p < .05, ηp² = .07) face. In the upper face in particular, there was
significantly longer emotional leakage in the high intensity masked relative to the high
intensity genuine condition, p < .05. Further, the difference between duration of emotional
inconsistency during high intensity masks (M = 23.67; 95% CI [17.89, 29.45]) and low
intensity masks (M = 18.56; 95% CI [12.59, 24.30]) approached significance (see Fig. 1).
This interaction was qualified by an expressed emotion × intensity × veracity interaction,
F(6,53) = 5.65, p < .01, ηp² = .39. In general, emotional leakage was greater in both the
upper and lower face during high-intensity masked, relative to low-intensity masked,
expressions for all emotions except disgust.
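The within-subjects factorial structure of these analyses can be illustrated with the Python sketch below. It is a univariate analogue only, not the authors' analysis: the paper reports repeated-measures MANOVAs over the upper and lower face jointly, whereas this sketch runs statsmodels' AnovaRM on a single hypothetical dependent variable (upper-face leakage duration), with an assumed long-format data file.

```python
# Minimal univariate analogue (not the authors' analysis) of the within-subjects design
# above, using statsmodels' AnovaRM on one dependent variable. The long-format table
# assumed here (one row per participant x emotion x veracity x intensity cell) is hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

leakage = pd.read_csv("upper_face_leakage.csv")
# assumed columns: participant, emotion (happy/sad/fear/disgust),
#                  veracity (genuine/masked), intensity (high/low), duration_frames

model = AnovaRM(
    data=leakage,
    depvar="duration_frames",
    subject="participant",
    within=["emotion", "veracity", "intensity"],
)
print(model.fit())  # F tests for main effects and the emotion x veracity x intensity interaction
```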
Impact of Emotional Intensity in Neutralized Expressions
The impact of veracity and emotional intensity on the presence and duration of inconsistent emotional
expressions during emotion neutralization was examined using 4 (felt emotion) × 2
(intensity) MANOVAs. First, the presence of inconsistent emotional expression in the
upper and lower face served as the dependent variables. Although the main effect of intensity
did not reach statistical significance at the multivariate level, F(2,57) = 2.06, p = .14,
ηp² = .07, post-hoc univariate follow-up analyses revealed that this effect was significant in
the upper face, F(1,58) = 4.11, p < .05, ηp² = .07. Emotional inconsistencies were more
likely to occur during high (M = 0.14, SE = 0.03) relative to low (M = 0.09, SE = 0.02)
intensity neutralized expressions.
Fig. 1 Duration (in 1/30th-second frames) of inconsistent emotion in the upper face as a function of intensity and veracity. Means: genuine low = 23.2, genuine high = 17.68; masked low = 18.56, masked high = 23.67.
Second, duration of inconsistent emotional expression
(measured in 1/30th-second frames) in the upper and lower face, respectively, served as
dependent variables. The analyses did not reveal any significant main effects or interaction. However, there was a trend for high intensity emotions (M = 7.0, SE = 2.14) to be
revealed by emotional leakage of a longer duration than low intensity emotions (M = 5.13,
SE = 1.95) in the upper face, F(1,58) = 3.34, p = .07.
Ekman Micro-Expressions
No complete micro-expressions (1/25th–1/5th of a second) involving both the upper and
lower halves of the face simultaneously (as described by Ekman and Friesen 1975) were
detected in any of the 1,711 analyzed expressions. However, 15 participants exhibited 18
partial micro-expressions; 10 in the upper and 8 in the lower facial region. Of these, seven
occurred during the presentation of high intensity emotional images, and 11 during low
intensity emotional images. Six of the 10 micro-expressions that occurred during masked
and neutralized emotional portrayals were congruent with the deceivers’ felt emotion.
Blink Rate
Impact of Emotional Intensity in Genuine Versus Masked Expressions
A 4 (expressed emotion: happy, sad, fear, disgust) × 2 (veracity: genuine, masked) × 2
(intensity: high, low) repeated measures ANOVA was conducted to evaluate the effect of
veracity and emotional intensity on the number of blinks in each 5-second expression.
There was a significant main effect of expressed emotion, F(3,174) = 5.41, p < .05,
ηp² = .09, such that the greatest number of blinks occurred during sad (M = 1.71,
SE = 0.19), followed by fearful (M = 1.49, SE = 0.16), happy (M = 1.28, SE = 0.16),
and disgust (M = 1.21, SE = 0.17) expressions. However, effects of veracity and intensity
were non-significant, ps > .05.
Impact of Emotional Intensity in Neutralized Expressions
The impact of veracity and emotional intensity on blink rate during emotion neutralization
was examined using 4 (felt emotion) × 2 (intensity) repeated measures ANOVA. A significant effect of intensity was revealed, F(1,58) = 5.61, p < .05, ηp² = .09, with high
intensity neutralized expressions being associated with fewer blinks (M = 1.25,
SE = 0.16) than low intensity (M = 1.44, SE = 0.15).
Accuracy of the Observers in Judging the Veracity of Expressions
A final question was whether the observers could identify deceptive facial expressions with
the naked eye. A 4 (expressed emotion: happy, sad, fear, disgust) × 2 (intensity: high vs.
low) × 2 (veracity: genuine vs. masked) ANOVA was conducted to test the hypothesis that high
intensity deceptive emotions would be easier for naïve observers to detect than low
intensity emotional deception. However, this analysis did not yield any significant main
effects or interactions, all ps > .05. In general, judges achieved an overall mean accuracy
of 54.82%, which was not significantly above chance, t(27) = 1.93, p = .06. Judges
performed above the level of chance in detecting deception in happiness, t(27) = 2.56,
p < .05, whereas their accuracy in judging the veracity of sad, disgust, and fearful
expressions did not differ from chance (ps > .05).
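The chance-level comparison reported here is a one-sample t-test of the 28 observers' mean accuracies against 50%, as in the sketch below. The accuracy values are random stand-ins, not the study's data.

```python
# Sketch of the chance-level test reported above: each observer's mean accuracy is
# compared against 50% with a one-sample t-test (t(27) reflects the 28 observers).
# The accuracy values below are random stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observer_accuracy = rng.normal(loc=0.55, scale=0.13, size=28)   # proportion correct per observer

t, p = stats.ttest_1samp(observer_accuracy, popmean=0.5)
print(f"Mean accuracy = {observer_accuracy.mean():.3f}, t(27) = {t:.2f}, p = {p:.3f}")
```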
Discussion
The widespread notion that the human face and its expressions can betray covert emotional
states originates in the inhibition hypothesis (Ekman et al. 1988, 1991). Darwin (1872)
postulated that powerful emotions in particular could not be completely inhibited nor
fabricated because of the involuntary nature of emotional expressions. A century later,
Haggard and Isaacs (1966) and then Ekman and colleagues (e.g., Ekman 1985/2001;
Ekman 2006; Frank and Ekman 1997) argued that extremely brief ‘‘flashes’’ of an individual’s true emotion appear on the face uncontrollably as ‘‘micro-expressions.’’ If confirmed, these hypotheses potentially offer critical insights into the nature of human
communication.
The present study was a comprehensive investigation relating to these ideas. It also was
one of the most thorough studies of human facial expressions yet conducted, with a
database of 1,711 expressions comprising a manual analysis of 256,650 video frames of
facial behavior and four of the universal emotions. Results were consistent with the
proposition that facial communication is not always under conscious control in that
emotional leakage was essentially ubiquitous, occurring in 98.3% of participants at least
once. Most importantly, the findings strongly supported the inhibition hypothesis; high
intensity deceptive emotions were associated with substantially more emotional leakage
than low intensity deceptive emotions. High intensity emotion was more difficult to
conceal than low intensity emotion during emotional masking (replacing a felt emotion
with another, false emotional expression), and leakage was especially likely to manifest in
the muscles of the upper face (see also Rinn 1984). As in Porter and ten Brinke (2008), it
was more difficult for participants to mask an emotion than to neutralize one (maintain a
neutral face in the presence of felt emotion). However, there were trends toward more and
longer inconsistencies in high intensity neutralization relative to low intensity neutralization, consistent with the inhibition hypothesis. On the other hand, some leakage did
occur in low intensity deceptive emotions, suggesting the possibility that leakage may
occur on a continuum of emotional intensity as opposed to being an "all or none" phenomenon (as we predicted), occurring only in falsified emotions of a particularly high
intensity. Future research could clarify this issue by using a greater number of levels of
emotional intensity and establishing the intensity threshold required for leakage to occur.
Inconsistencies were more likely to occur during particular emotions relative to others.
Specifically, in both the upper and lower face, inconsistencies were most likely to occur in
fearful expressions and least likely to occur during expressions of happiness, which may
relate to level of experience; Somerville and Whalen (2006) found that happiness is the
most commonly experienced expression whereas fearful expressions are exhibited the
least. Further, the impact of emotional intensity on the likelihood that one’s emotional
deception would be revealed by leakage of emotional inconsistencies varied by emotion.
Considering high intensity emotion specifically, masking resulted in a greater likelihood of
inconsistent emotion for expressions of happiness and sadness, but did not lead to greater
emotional leakage during masks of fear or disgust. The data suggest that one reason for this
is a ‘‘ceiling’’ effect such that fear and disgust were associated with a high level of leakage
for both low and high intensity emotions.
This support for the inhibition hypothesis, specifically in the upper face, during falsification of happiness may reflect the incomplete fabrication of deceptive, negative emotions, and the leakage of genuine, positive emotion in falsified emotion (and vice versa for
the falsification of sadness). As in a recent study of falsified remorse (ten Brinke et al.
2011), this finding may be due specifically to the uncontrollability of the medial portion of
the frontalis muscles. Whereas most people can easily raise the eyebrows (frontalis muscle,
Action Units 1 & 2; Ekman et al. 2002), it is difficult for most people to engage only the
medial portion of the frontalis muscle (Action Unit 1; Ekman et al. 2002) spontaneously,
which brings the inner eyebrows upward to mimic the upper face activation associated with
a genuine sad face (Ekman 2003b). Thus, the deceiver activates the complete (inner and
outer) frontalis muscle, appearing surprised and leaking inconsistent emotion during falsified sadness. Alternatively, while genuinely experiencing sadness, the deceiver may not
be able to suppress medial frontalis action, leading to leakage of a raised brow during a
falsified display of happiness.
The notion of the 1/25th–1/5th of a second full-face micro-expression long advocated
by Ekman was not supported here. No complete micro-expressions involving both the
upper and lower halves of the face simultaneously (as described by Ekman and Friesen
1975) were detected in any of the 1,711 analyzed expressions, suggesting that they may not
exist or that they are exceedingly rare. However, there were rare occurrences of "partial"
fleeting micro-expressions manifesting in either the upper or lower face. Fifteen (25.4%)
participants exhibited 18 partial micro-expressions: ten in the upper and eight in the lower
facial region. From an applied perspective, this offers good and bad news. The good news
is that most instances of emotional leakage lasted longer than 1/5th of a second, and
frequently lasted for closer to a full second. This means that "subtle" deceptive expressions occur frequently (especially in high intensity emotional displays) and should be
relatively perceptible to the trained eye. On the other hand, both Ekman (1985/2001)
micro-expressions and (the far more common) longer-lasting emotional leakage we
observed typically are manifested in one region of the face (the latter more common in the
upper face), making them more "subtle" than traditionally believed. Clearly, relative to the
traditional view of micro-expressions, a better way to conceptualize involuntary facial
communication is as subtle emotional leakage lasting long enough for the trained eye to
detect and more often occurring in the upper face and for high intensity emotions. Related
‘‘real-life’’ findings were recently generated in study of extremely high-stakes emotional
deception (ten Brinke and Porter 2011); deceptive individuals (killers) ‘‘pleading’’ for the
return of a missing relative (versus sincere pleaders) exhibited similar patterns of emotional leakage in their facial expressions.
Despite the perceptible leakage that appeared on deceptive emotional faces, observers
generally were unable to discriminate sincere and insincere emotions, performing at the
level of chance for all negative emotions and only slightly above chance for happiness
expressions (see also Ekman et al. 1988). Although objectively there was more leakage in
high intensity expressions, observers did no better at identifying high intensity versus low
intensity emotional deception. It should be noted that when producing the expressions,
participants were asked to look at the screen. Thus, it is possible that observers had
difficulty detecting deceptive expressions because all participants were looking away from
them (e.g., Adams and Kleck 2003). However, the findings are consistent with the general
findings on deception detection (see Vrij et al. 2011) and are similar to the findings of Hess
and Kleck (1994) and Porter and ten Brinke (2008) relating to emotional deception.
Generally, these findings suggest that if observers are not sophisticated in evaluating
emotional facial expression, the increased likelihood and duration of diagnostic emotional
leakage are of little assistance to naïve judges. However, research does suggest that short
training programs that focus on emotional cues to deceit can drastically improve the
detection of high-stakes, high-intensity emotional deception from the level of chance to
80% accuracy (Shaw et al. 2011).
This exciting scientific area promises to provide greater illumination of a topic pioneered by Duchenne, Darwin, and Ekman: that of human facial communication and the
universal emotions. Arguably, many aspects of human facial communication are uncharted
scientific territory, despite long-standing ideas and assumptions about the involuntary
nature of face communication. Research such as the current study is firmly establishing that
the human face can reveal the presence of a contradictory underlying emotion, particularly
when the deceiver is experiencing a powerful emotional state at odds with the expressed
emotion. Ultimately, this knowledge could improve the identification of emotional
deception in legal and security environments where such assessments carry enormous
consequences.
References
Adams, R. B., Jr., & Kleck, R. E. (2003). Perceived gaze direction and the processing of facial displays of
emotion. Psychological Science, 14, 644–647. doi:10.1046/j.0956-7976.2003.psci_1479.x.
Cicchetti, D., & Sparrow, S. (1981). Developing criteria for establishing interrater reliability of specific
items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86(2),
127–137.
Darwin, C. (1872). The expression of the emotions in man and animals. Chicago: University of Chicago
Press.
Duchenne, G. B. (1990). The mechanism of human facial expression. New York: Cambridge University
Press. (Original work published 1862).
Ekman, P. (1985/2001). Telling lies: Clues to deceit in the marketplace, politics, and marriage. New York:
Norton.
Ekman, P. (2003a). Darwin, deception and facial expression. In P. Ekman, R. J. Davidson, & F. De Waals
(Eds.), Annals of the New York Academy of Sciences. Emotions inside out: 130 years after Darwin’s
The Expression of the Emotions in Man and Animals (Vol. 1000, pp. 205–221). New York: New York
Academy of Sciences.
Ekman, P. (2003b). Emotions revealed: Recognizing faces and feelings to improve communication and
emotional life. New York, NY, US: Times Books/Henry Holt and Co.
Ekman, P. (2006, October 29). How to spot a terrorist on the fly. Washington Post. Retrieved from
http://www.washingtonpost.com.
Ekman, P. (2009). Telling lies: Clues to deceit in the marketplace, politics, and marriage. New York, NY,
US: Norton.
Ekman, P., Davidson, R., & Friesen, W. (1990). The Duchenne smile: Emotional expression and brain
physiology: II. Journal of Personality and Social Psychology, 58(2), 342–353. doi:10.1037/
0022-3514.58.2.342.
Ekman, P., & Friesen, W. (1974). Detecting deception from the body or face. Journal of Personality and
Social Psychology, 29, 288–298. doi:10.1037/h0036006.
Ekman, P., & Friesen, W. V. (1975). Unmasking the face: A guide to recognizing emotions from facial clues.
Oxford, England: Prentice-Hall.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists
Press.
Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial action coding system. Salt Lake City, UT: Network
Information Research. (Original work published 1976).
Ekman, P., Friesen, W. V., & O’Sullivan, M. (1988). Smiles when lying. Journal of Personality and Social
Psychology, 54(3), 414–420. doi:10.1037/0022-3514.54.3.414.
Ekman, P., & O’Sullivan, M. (2006). From flawed self-assessment to blatant whoppers: The utility of
voluntary and involuntary behavior in detecting deception. Behavioral Sciences and the Law, 24(5),
673–686. doi:10.1002/bsl.729.
Ekman, P., O’Sullivan, M., Friesen, W. V., & Scherer, K. R. (1991). Invited article: Face, voice, and body in
detecting deceit. Journal of Nonverbal Behavior, 15(2), 125–135. doi:10.1007/BF00998267.
Fleiss, J. (1981). Balanced incomplete block designs for inter-rater reliability studies. Applied Psychological
Measurement, 5(1), 105–112. doi:10.1177/014662168100500115.
Frank, M. G., & Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stake lies. Journal of Personality and Social Psychology, 72, 1429–1439.
Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of
enjoyment. Journal of Personality and Social Psychology, 64(1), 83–93. doi:10.1037/0022-3514.
64.1.83.
Gable, P., & Harmon-Jones, E. (2010). The blues broaden, but the nasty narrows: Attentional consequences
of negative affects low and high in motivational intensity. Psychological Science, 21(2), 211–215. doi:
10.1177/0956797609359622.
Gottman, J., Levenson, R., & Woodin, E. (2001). Facial expressions during marital conflict. Journal of
Family Communication, 1(1), 37–57. doi:10.1207/S15327698JFC0101_06.
Haggard, E. A., & Isaacs, K. S. (1966). Micromomentary facial expressions as indicators of ego mechanisms
in psychotherapy. In L. A. Gottschalk & A. H. Auerbach (Eds.), Methods of research in psychotherapy
(pp. 154–165). New York: Appleton Century Crofts.
Henig, R. M. (2006, February 5). Looking for the lie. New York Times. Retrieved December 1, 2010, from
www.nytimes.com.
Hess, U., & Bourgeois, P. (2010). You smile–I smile: Emotion expression in social interaction. Biological
Psychology, 84, 514–520. doi:10.1016/j.biopsycho.2009.11.001.
Hess, U., & Kleck, R. (1994). The cues decoders use in attempting to differentiate emotion-elicited and
posed facial expressions. European Journal of Social Psychology, 24(3), 367–381. doi:10.1002/
ejsp.2420240306.
Hess, U., & Thibault, P. (2009). Darwin and emotion expression. American Psychologist, 64, 120–128. doi:
10.1037/a0013386.
Kohler, C., Turner, T., Stolar, N., Bilker, W., Brensinger, C., Gur, R., et al. (2004). Differences in facial
expressions of four universal emotions. Psychiatry Research, 128(3), 235–244. doi:10.1016/j.psychres.
2004.07.003.
Lang, P., Bradley, M., & Cuthbert, B. N. (1999). International Affective Picture System (IAPS): Instruction
manual and affective ratings (Tech. Rep. No. A-4). Gainesville: University of Florida, Center for
Research in Psychophysiology.
Libkuman, T. M., Otani, H., Kern, R., Viger, S. G., & Novak, N. (2007). Multidimensional normative ratings
for the International Affective Picture System. Behavior Research Methods, 39(2), 326–334. doi:
10.3758/BF03193164.
Lipton, E. (2006, August 17). Threats and responses: Screening; faces, too, are searched as U.S. airports try
to spot terrorists. New York Times. Retrieved December 1, 2010, from www.nytimes.com.
Matsumoto, D. (2007). Emotion judgments do not differ as a function of perceived nationality. International
Journal of Psychology, 42(3), 207–214. doi:10.1080/00207590601050926.
Matsumoto, D., & Willingham, B. (2009). Spontaneous facial expressions of emotion of congenitally and
non-congenitally blind individuals. Journal of Personality and Social Psychology, 96(1), 1–10. doi:
10.1037/a0014037.
Mikels, J. A., Fredrickson, B. L., Larkin, G. R., Lindberg, C. M., Maglio, S. J., & Reuter-Lorenz, P. A.
(2005). Emotional category data on images from the International Affective Picture System. Behavior
Research Methods, 37(4), 626–630.
O’Sullivan, M., Frank, M. G., Hurley, C. M., & Tiwana, J. (2009). Police lie detection accuracy: The effect
of lie scenario. Law and Human Behavior, 33(6), 530–538. doi:10.1007/s10979-008-9166-4.
Porter, S., Juodis, M., ten Brinke, L., Klein, R., & Wilson, K. (2010). Evaluation of the effectiveness of a
brief deception detection training program. Journal of Forensic Psychiatry and Psychology, 21(1),
66–76. doi:10.1080/14789940903174246.
Porter, S., & ten Brinke, L. (2008). Reading between the lies: Identifying concealed and falsified emotions in
universal facial expressions. Psychological Science, 19(5), 508–514. doi:10.1111/j.1467-9280.2008.
02116.x.
Porter, S., & ten Brinke, L. (2010). The truth about lies: What works in detecting high-stakes deception?
Legal and Criminological Psychology, 15(1), 57–75. doi:10.1348/135532509X433151.
Porter, S., ten Brinke, L., Baker, A., & Wallace, B. (2011, in press). Would I lie to you? "Leakage" in
deceptive facial expressions relates to psychopathy and emotional intelligence. Personality and
Individual Differences. doi:10.1016/j.paid.2011.03.031.
Porter, S., ten Brinke, L., & Wilson, K. (2009). Crime profiles and conditional release performance of
psychopathic and non-psychopathic sexual offenders. Legal and Criminological Psychology, 14(1),
109–118. doi:10.1348/135532508X284310.
R. v. B. (K.G.), 1 S.C.R. 740 (Supreme Court of Canada, 1993).
R v N.S., ONCA 670 (Court of Appeal for Ontario, 2010).
Rinn, W. E. (1984). The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1), 52–77. doi:
10.1037/0033-2909.95.1.52.
Shaw, J., Porter, S., & ten Brinke, L. (2011, under review). Catching liars: Training mental health and legal
professionals to detect extremely high-stakes lies.
Somerville, L. H., & Whalen, P. J. (2006). Prior experience as a stimulus category confound: An example
using facial expressions of emotion. Social, Cognitive, and Affective Neuroscience, 1, 271–274. doi:
10.1093/scan/nsl040.
Stewart, P., Waller, B., & Schubert, J. (2009). Presidential speechmaking style: Emotional response to
micro-expressions of facial affect. Motivation and Emotion, 33(2), 125–135. doi:10.1007/s11031-009-9129-1.
Suzuki, K., & Naitoh, K. (2003). Useful information for face perception is described with FACS. Journal of
Nonverbal Behavior, 27(1), 43–55. doi:10.1023/A:1023666107152.
ten Brinke, L., MacDonald, S., Porter, S., & O’Connor, B. (2011). Crocodile tears: Facial, verbal and body
language behaviours associated with genuine and fabricated remorse. Law and Human Behavior,
published online February 8, 2011. doi:10.1007/s10979-011-9265-5.
ten Brinke, L., & Porter, S. (2011, under review). Darwin the detective: Behavioural consequences of
extremely high-stakes lies.
Vrij, A., Granhag, P. A., & Porter, S. (2011). Pitfalls and opportunities in nonverbal and verbal lie detection.
Psychological Science in the Public Interest, 11, 89–121. doi:10.1177/1529100610390861.
Vrij, A., & Mann, S. (2001). Who killed my relative? Police officers’ ability to detect real-life high-stake
lies. Psychology, Crime and Law, 7(2), 119–132. doi:10.1080/10683160108401791.
Warren, G., Schertler, E., & Bull, P. (2009). Detecting deception from emotional and unemotional cues.
Journal of Nonverbal Behavior, 33(1), 59–69. doi:10.1007/s10919-008-0057-7.
Williams, M. A., & Mattingley, J. B. (2006). Do angry men get noticed? Current Biology, 16, R402–R404.
doi:10.1016/j.cub.2006.05.018.