
J Autism Dev Disord (2013) 43:1652–1661
DOI 10.1007/s10803-012-1715-5

ORIGINAL PAPER

Evidence for Impaired Verbal Identification But Intact Nonverbal Recognition of Fearful Body Postures in Asperger's Syndrome

John P. Doody · Peter Bull

Department of Psychology, University of York, York, UK
e-mail: Johnpdoody2@yahoo.co.uk

Published online: 20 November 2012
© Springer Science+Business Media New York 2012

Abstract  While most studies of emotion recognition in Asperger's Syndrome (AS) have focused solely on the verbal decoding of affective states, the current research employed the novel technique of using both nonverbal matching and verbal labeling tasks to examine the decoding of emotional body postures and facial expressions. AS participants performed as accurately as controls at matching fear body postures, but were significantly less accurate than controls at verbally identifying these same stimuli. This profile arguably indicates that while the AS participants were aware that the fear body posture stimuli represented a distinct emotion, they were unsure as to which specific emotion. In addition, the AS participants took significantly longer than the controls to respond to anger body posture stimuli on a matching task. However, in contrast to previous studies, AS and control participants did not differ significantly in their responses to facial expression stimuli, in terms of either accuracy or response times.

Keywords  Asperger's · Autism · Body posture · Facial expression · Emotion

Introduction
Generally considered to be a form of High-Functioning Autism (HFA; Miller and Ozonoff 2000), Asperger's Syndrome (AS) is estimated to affect approximately 0.092 % of the population (Gillberg et al. 2006). AS came to the attention of mental health professionals worldwide in the 1990s via its inclusion in both the International

Classification of Diseases, 10th edition (ICD-10; World Health Organization 1992) and the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV; American Psychiatric Association 1994). Since then, researchers have studied the ability of those with the disorder to decode the mental states of others by observing their nonverbal cues. To date, most studies have examined the decoding of facial expressions, with comparatively little research on other channels of nonverbal communication, such as vocal expressions and whole body expressions.
It has been noted that the findings of the studies conducted thus far have been somewhat contradictory (Harms
et al. 2010), arguably reflecting heterogeneity of social
decoding skills within the autistic/AS population. For
example, with regard to overall accuracy in emotion recognition, while some studies have found AS/HFA individuals to be significantly less accurate than controls at
decoding emotional facial expressions (e.g., Lindner and
Rosen 2006; Pohlig 2008), others have found no such
significant differences between the groups (e.g., Adolphs
et al. 2001; Baron-Cohen et al. 1997).
In relation to the recognition of specific emotions, AS/
HFA individuals have been observed to perform significantly worse than controls at verbally identifying facial
expressions of fear (e.g., Corden et al. 2008; Howard et al.
2000; Humphreys et al. 2007; Pelphrey et al. 2002; Wallace et al. 2008), surprise (Baron-Cohen et al. 1993), anger
(Bal et al. 2010; Philip et al. 2010; Wright et al. 2008),
sadness (Boraston et al. 2007; Corden et al. 2008; Wallace
et al. 2008), and disgust (Wallace et al. 2008).
The ability of AS individuals to decode emotions from
vocal expressions has also been examined, with the findings of these studies generally indicating that those with
AS/HFA perform significantly worse than typically developing individuals at this task (e.g., Golan et al. 2006; Kleinman et al. 2001; Rutherford et al. 2002). However,
other studies have found no differences between AS participants and controls (e.g., Mazefsky and Oswald 2007).
In relation to the decoding of affect from whole body
expressions, the research findings are again somewhat
inconsistent. (Whole body expressions represent emotional
encodings and are thus distinct from non-emotional
actions). Atkinson (2009) examined affect recognition
from dynamic whole bodies presented as both point-light
and full-light displays. The faces of the figures were not
visible in either condition. Relative to matched neurotypical controls, the AS/HFA group was significantly less
accurate at identifying disgust, happiness and anger, and
marginally less accurate at identifying fear and sadness
from both full-light and point-light displays. Similar findings were obtained by Philip et al. (2010).
Other studies have found that AS/HFA individuals are
impaired at decoding emotional, but not non-emotional,
actions from viewing video clips of moving human figures
(e.g., Hubert et al. 2007; Murphy et al. 2009; Parron et al.
2008). On the other hand, Blake et al. (2003) found that
autistic children performed significantly worse than controls at interpreting unemotional human actions. In a recent
study conducted by the authors (Doody and Bull 2011), AS
participants were observed to perform as accurately as
controls matching static body posture stimuli on a nonverbal matching task, but made significantly more mistakes
decoding the same stimuli on a verbal labeling task. This
effect was found to be specific to body postures encoding
boredom, suggesting that while the AS participants were
aware that these stimuli represent a distinct attitude, they
were unsure as to which specific attitude they encode.
A few studies examining the social decoding skills of
AS individuals have included reaction time as a dependent
measure. Hence, there is evidence to suggest that those
with AS take significantly longer than their neurotypical
peers to process affective facial expressions (e.g., Bal et al.
2010) and body posture stimuli (e.g., Doody and Bull 2011;
Zamagni et al. 2011). Possibly, such delayed processing
results from an atypical visual processing style (Behrmann
et al. 2006).
Previous studies examining emotion recognition skills in AS have suffered from a number of methodological problems.
There has been a heavy reliance on forced-choice verbal
labeling paradigms. According to Russell (1993, 1994),
such forced-choice response formats may not include a
truly appropriate verbal label for a given facial affect
stimulus. This in turn means that participants may feel
forced to select a response option that they do not feel
adequately describes the stimulus. Similarly, Ekman
(1994) noted that different cultures may use different
words to label a certain emotion, even cultures that speak


the same language. Although it has been shown that failure to include a "None of the above" response option in emotion recognition tasks can induce artificially inflated rater agreement (Frank and Stennett 2001; Winters 2005), there appears to have been only one such study conducted with AS participants that included this response caption (i.e., Doody and Bull 2011).
With these criticisms in mind, the purpose of the current
research was to examine the ability of AS individuals to
decode emotional body postures and facial expressions via
both nonverbal matching and verbal labeling tasks. The
bodily and facial stimuli demonstrated encodings of the six
Ekman (1972) emotions (i.e., happiness, sadness, anger,
fear, surprise, and disgust). These affective states are cross-cultural universals; that is to say, they are decoded in the same way by members of both literate and pre-literate cultures (e.g., Ekman et al. 1972).
Participants completed four tasks in total. The first two
(Matching Tasks) respectively examined the ability of
participants to differentiate between facial expressions and
body postures pertaining to the same/different emotions.
Tasks 3 and 4 respectively examined whether participants could verbally identify which emotions these facial expressions and body postures encoded. To ensure that
performance on the matching tasks was not influenced by
the verbal labeling procedures, matching always took place
before verbal labeling. A "None of the above" response option was included in each task. Participants were
examined on both accuracy of responses and reaction
times.

Method
Participants
The same 40 male individuals (20 AS and 20 controls) who
took part in an earlier experiment (Doody and Bull 2011)
also participated in the present study. Participants with AS
were recruited with the help of staff from the School of
Psychology and Department of Psychiatry at Trinity College Dublin, the Asperger Syndrome Association of Ireland, and a special unit at a secondary school in Co.
Kildare. All AS participants had received their diagnosis
from either a psychologist, psychiatrist or multi-disciplinary team of mental health professionals. Seven of the AS
participants also had a comorbid diagnosis of ADHD.
Control participants were recruited with the assistance
of a number of secondary schools in Kildare and Dublin,
Republic of Ireland. AS and control participants were
matched on chronological age, IQ, and visual-perceptual
ability; the abbreviated version of the Stanford-Binet Intelligence Scales, Fifth Edition (SB5; Roid 2003) and the Developmental Test of Visual Perception: Adolescent and Adult (DTVP-A; Reynolds et al. 2002) were used to measure IQ and visual-perceptual ability respectively. The Krug Asperger's Disorder Index (KADI; Krug and Arick 2003) was used to confirm the diagnosis of AS participants, and to ensure that undiagnosed AS individuals were not inadvertently assigned to the control group. In addition, all participants were found to have a reading age of ≥12 years, as indicated by their performance on the Woodcock-Johnson III Tests of Achievement, Form C/Brief Battery (WJ III ACH; Woodcock et al. 2007). This helped to ensure that they would be able to read all instruction screens used in the experiment.

Table 1  Participant variables

                Age     Abbreviated IQ   DTVP-A   KADI
AS group
  Mean          15.90   103.30           103.95   101.45
  SD             1.50     7.21            10.96     8.44
Control group
  Mean          16.03   102.10           103.45    62.55
  SD             1.55     6.53            11.70     3.80
t (df = 38)      0.28     0.55             0.14    18.79
p                0.78     0.87             0.89     0.00

Independent measures t tests indicated no statistically significant differences between the AS and control groups on chronological age, IQ, or DTVP-A scores. However, the AS group scored significantly higher on the KADI than did the control group (see Table 1).
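The matching checks can be reproduced from the summary statistics in Table 1 alone. Below is a minimal sketch in Python (assuming SciPy is installed; raw scores are not published, so the t tests are computed from the reported means, SDs, and n = 20 per group):

```python
# Re-computing the group-matching t tests from the summary statistics
# reported in Table 1 (n = 20 per group, Student's t with df = 38).
from scipy import stats

# variable: ((AS mean, AS SD), (control mean, control SD))
variables = {
    "Age":            ((15.90, 1.50),   (16.03, 1.55)),
    "Abbreviated IQ": ((103.30, 7.21),  (102.10, 6.53)),
    "DTVP-A":         ((103.95, 10.96), (103.45, 11.70)),
    "KADI":           ((101.45, 8.44),  (62.55, 3.80)),
}

for name, ((m_as, sd_as), (m_c, sd_c)) in variables.items():
    t, p = stats.ttest_ind_from_stats(m_as, sd_as, 20, m_c, sd_c, 20,
                                      equal_var=True)
    print(f"{name}: t(38) = {t:.2f}, p = {p:.3f}")
# Only the KADI comparison is significant, as expected for a measure of
# Asperger's symptomology used to confirm group membership.
```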
Materials/Apparatus
The experiment was run through Superlab Pro Version
4.0.7 on a Sony Vaio VGN-CR21S laptop. Body posture
stimuli were created using e frontier's Poser 7 figure design and animation package, and were modeled on a set of photographs¹ of actors standing in posed emotional body postures (i.e., happy, sad, angry, fearful, surprised, and disgusted). Twenty-four photographs from this set (four depictions of each of the six emotions) were randomly selected. Each of these body posture stimuli was recreated in Poser 7 using the same four figures (two male/two female) utilized in a previous study by the authors (Doody and Bull 2011). To ensure that the facial expressions of the figures would not act as a confounding variable, the entire facial area of each figure was erased. This resulted in a new set of 24 photo-realistic figures expressing emotional postures, devoid of any trace of a face. To validate the set, each stimulus was shown in turn to a panel of 10 independent judges who, using a forced-choice format, indicated whether a figure demonstrated happiness, sadness, anger, fear, surprise, disgust, or "None of the above". All stimuli were decoded with at least 60 % accuracy by the judges.

¹ The photographs were taken by researchers at Tilburg University, Holland. Images from the set have previously been used in a published study (Schindler et al. 2008).
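To make the 60 % validation criterion concrete, the sketch below shows the per-stimulus check (the list of judgments is hypothetical, since individual ratings were not published):

```python
# Hypothetical validation check for one stimulus: keep the figure only if
# at least 60 % of the 10 judges selected its intended emotion label.
from collections import Counter

intended = "fear"
judgments = ["fear", "fear", "surprise", "fear", "fear",
             "fear", "fear", "None of the above", "fear", "fear"]

accuracy = Counter(judgments)[intended] / len(judgments)
print(f"decoding accuracy: {accuracy:.0%}")  # 80 %, so the stimulus is kept
assert accuracy >= 0.60, "stimulus would be discarded"
```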
Facial affect stimuli were selected from the Japanese
and Caucasian Facial Expressions of Emotion set (JACFEE; Matsumoto and Ekman 1988). The JACFEE contains
56 photographs of individuals of Japanese and Caucasian
ethnicities, each encoding prototypical facial expressions
of basic emotions. The facial expressions were coded using
the Facial Action Coding System (FACS; Ekman and
Friesen 1978), and images from the set were validated
through a series of studies where they were found to induce
high levels of rater agreement amongst decoders (Matsumoto and Ekman 1988). For the purposes of the current
research, four representations of each of the six Ekman
(1972, 1992) emotions (i.e., happiness, sadness, anger, fear,
surprise, disgust) were randomly selected from the JACFEE (half male/half female).
Procedure
The experiment consisted of four experimental tasks,
completed in one block by the participants. Each participant was tested individually, seated approximately 70 cm
away from the computer screen.
Tasks 1 and 2: Nonverbal Matching
Immediately prior to beginning the matching tasks, participants engaged in a short practice exercise which required them to match geometric shapes. A large target shape was presented on the left side of the screen, while on the right side there were six smaller probe shapes arranged into a three by two grid. At the bottom of the screen, below the probes, was a "None of the above" verbal label. For each trial, one of the probe shapes was the same as the target shape. Participants were required to click the probe shape which was the same as the target shape, or alternatively, the "None of the above" option.
Once the practice session had been completed, participants viewed an instruction screen which informed them of
how the matching tasks would proceed. Task 1 (Matching
of Body Postures) consisted of 24 trials. On each trial, a
large figure (Target Stimulus) was presented on the
extreme left of the screen. To the right of this were six
smaller figures (Probe Stimuli) arranged into a three by two
grid, each representing one of the six basic emotions. In
every trial, one of the probe figures encoded the same
emotion as the target figure. At the bottom of the screen, beneath the six probes, was a "None of the above" caption.


The order in which stimuli were presented was randomly


generated. Participants were required to click on the probe
figure that they felt displayed the same emotion as the
target, as quickly and as accurately as possible. If they felt
that none of the probes encoded the same emotion as the
target stimulus, they were instructed to click "None of the above". Every stimulus figure from the set of 24 images
was presented once as the target figure. Whenever a male
figure was presented as the target stimulus, the six figures
from the alternative male figure set were used as the probe
stimuli. Similarly, whenever a female figure was presented
as the target stimulus, the six figures from the alternative
female figure set were used as the probe stimuli. This was
done to ensure that participants would not match stimuli
based on the identity of the encoder. Clicking on any of the
seven response options moved the experiment on to the
next trial.
After completing the first task, the participants engaged
in Task 2 (Matching of Facial Expressions). Following a
similar format to Task 1, participants worked through 24
trials in which each of the JACFEE facial expression
stimuli was presented once as a large target stimulus on the
left side of the screen. To the right of this were six smaller
probe stimuli arranged into a three by two grid, with the
probes each representing one of the six basic emotions. At
the bottom of the screen, beneath the probes, was a "None of the above" response caption. The order of presentation
for the target stimuli and probe stimuli was generated

randomly. For each trial, the gender of the individuals in


the probe images always matched the gender of the figure
in the target image. To avoid identity-based matching, it
was ensured that an encoder (i.e., the individual displaying
the emotional expression) was never represented in both a
probe stimulus and target stimulus on any given trial. As
with the previous task, participants were required to click
on the probe stimulus that they felt demonstrated the same
emotion as the target stimulus, or, if they felt that none of
the probe images represented the same emotion as the
target stimulus, the "None of the above" option. Once a
participant had completed the second task, the experiment
moved on to the verbal labeling tasks. Figure 1 shows a screenshot of Task 1 (matching of body postures).
Tasks 3 and 4: Verbal Labeling
As a practice exercise before the verbal labeling tasks,
participants verbally labeled geometric shapes. In each trial, a large geometric shape was presented on the left side of the screen. On the right-hand side of the screen were seven verbal labels: six representing the names of the shapes and a "None of the above" caption. Participants were instructed to click on the verbal label that corresponded to the shape shown, or, if they felt that none of the labels were appropriate, the "None of the above" option.
Upon completion of the practice session, participants
began Task 3 (Verbal Labeling of Body Postures). This task consisted of 24 trials in which each of the body posture stimuli was presented in turn on the left side of the screen.
On the right side of the screen were six emotional labels (angry, disgusted, afraid, happy, sad, surprised) and a "None of the above" caption. Participants were instructed to, as quickly and as accurately as they could, click the verbal label that best described how each figure was feeling, or alternatively, if they deemed none of the emotional labels appropriate, the "None of the above" option. The
order in which the body posture stimuli were presented was
randomly generated.
Once Task 3 had been completed, participants engaged
in the fourth and final experimental task (Verbal Labeling
of Facial Expressions). Each of the 24 JACFEE facial affect stimuli was presented one at a time on the left side of the screen, with the order of the presentations generated randomly. On the right side of the screen were the same seven verbal labels as used in the previous task: six corresponding to the basic emotions and a "None of the above" option. Participants were again required to click on the verbal label that corresponded to the emotion felt by the target figure, or alternatively, the "None of the above" caption. Completing Task 4 finished the experiment for participants. Figure 2 shows a screenshot from Task 4 (verbal labeling of facial expressions).

Results
Chi-square tests were used to examine accuracy on each of the four tasks. Alpha was Bonferroni-adjusted to .0071. The two groups (AS and controls) did not differ significantly on overall accuracy on any of the four tasks. With regard to specific emotions, it was found that the control participants were significantly more accurate than the AS participants at verbally labeling fear body postures only (χ²[1, N = 40] = 8.37, p = .003; see Table 2). To examine whether the seven AS participants with comorbid ADHD differed significantly from the remaining AS participants in their emotion recognition skills, additional chi-square tests were conducted. However, no significant differences in overall performance on any of the four tasks were observed between these two groups.
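The critical fear-labeling comparison can be reconstructed from the percentages in Table 2: with 20 participants and 4 fear stimuli per task, each group contributes 80 trials, giving 79/80 correct for controls (98.8 %) and 68/80 for the AS group (85 %). A sketch assuming SciPy, whose default Yates continuity correction reproduces the reported statistic:

```python
# Chi-square test on the inferred correct/incorrect counts for verbally
# labeling fear body postures (Task 3).
from scipy.stats import chi2_contingency

counts = [[79, 1],    # control: correct, incorrect
          [68, 12]]   # AS:      correct, incorrect

chi2, p, df, _ = chi2_contingency(counts)  # Yates correction by default
alpha = 0.05 / 7   # Bonferroni: overall accuracy plus six emotions = .0071
print(f"chi2({df}) = {chi2:.2f}, p = {p:.4f}, significant: {p < alpha}")
# -> chi2(1) = 8.37, p = 0.0038 (reported as p = .003), significant: True
```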
As a lack of increased "None of the above" clicks in response to fear body postures between Task 1 and Task 3 would arguably bolster the hypothesis that the AS group identified that these stimuli encode a specific emotion but were unsure as to which emotion, the number of times this response option was clicked on each task was calculated. The AS participants were found to click the "None of the above" caption in response to fear body postures more on Task 1 than on Task 3 (5 clicks versus 2 clicks), thus offering support for this hypothesis.
Q-Q plots indicated that the response time data were skewed for each of the four tasks, and thus a log transformation was employed. Subsequent Q-Q plots indicated normal distribution, and therefore GLM ANOVAs with repeated measures were used to examine the response time data. For all four tasks, Mauchly's Test of Sphericity indicated a significant result (p < .01) and so the Greenhouse-Geisser correction was used as necessary. Each analysis was computed both with and without covariates (i.e., age, IQ, and DTVP-A scores). When included in the analyses, none of the covariates were observed to have a significant effect.
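A minimal sketch of this preprocessing step (the reaction times below are simulated, as the raw data are not published):

```python
# Log-transforming skewed reaction times and inspecting normality with
# Q-Q plots, as described above.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
rt = rng.lognormal(mean=7.0, sigma=0.4, size=480)  # simulated skewed RTs (ms)
log_rt = np.log(rt)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
stats.probplot(rt, dist="norm", plot=axes[0])       # raw: points bow away
axes[0].set_title("raw RT")
stats.probplot(log_rt, dist="norm", plot=axes[1])   # logged: roughly linear
axes[1].set_title("log RT")
plt.tight_layout()
plt.show()
# The transformed values would then enter the repeated-measures ANOVA,
# with a Greenhouse-Geisser correction where Mauchly's test is significant.
```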
Table 2  Mean decoding accuracy on the four tasks

               Overall      Happiness  Sadness  Anger  Fear   Surprise  Disgust
               accuracy
Task 1
  Control (%)  83.6          98.8      93.8     92.5   80     48.8      87.5
  AS (%)       81.1          95        96.3     95     82.5   32.5      85
  χ²            1.03           .82      .13      .10    .04   3.73       .05
Task 2
  Control (%)  83.8         100        86.3     87.5   71.3   81.3      76.3
  AS (%)       78.6          98.8      83.8     78.8   72.5   78.8      58.8
  χ²            4.62           .00      .04      .60    .00    .03      4.81
Task 3
  Control (%)  83.6          92.5      88.8     86.3   98.8   77.5      57.5
  AS (%)       78.2          87.5      86.3     80     85     73.8      56.3
  χ²            4.20           .62      .05      .71   8.37*   .13       .00
Task 4
  Control (%)  81.5         100        95       92.5   63.7   95        42.5
  AS (%)       76.3          96.3      88.8     91.3   56.3   96.3      28.7
  χ²            3.28          1.35     1.33      .00    .65    .00      2.72

Task 1 = Matching of body postures / Task 2 = Matching of facial expressions / Task 3 = Verbal labeling of body postures / Task 4 = Verbal labeling of facial expressions
* p < .0071

The control and AS groups did not differ significantly in overall time taken to make responses on any of the four

tasks (Task 1, F[1,35] = 3.20, p > .05; Task 2, F[1,35] = 2.62, p > .05; Task 3, F[1,35] = 0.74, p > .05; Task 4, F[1,35] = 0.44, p > .05). Only for Task 1 was there a significant group by emotion interaction (F[1,35] = 4.83, p < .001), with the AS participants taking longer to match angry body postures than controls (t[38] = 2.80, p = .007). Further repeated-measures ANOVAs were conducted to check whether the AS participants also diagnosed with ADHD differed from the rest of the AS sample in terms of response times on any of the four tasks. However, no significant differences in reaction times between these groups were observed.
To examine the relationship between accuracy of responses and response time, the mean reaction time for correct responses and the mean reaction time for incorrect responses were calculated for each group on each task. Dependent measures t tests indicated that both groups took significantly longer to respond to stimuli that they got incorrect than to stimuli they got correct on Task 1 (control t[19] = 6.84, p < .01; AS t[19] = 3.90, p < .01) and Task 3 (control t[19] = 3.41, p < .01; AS t[19] = 2.91, p < .01). The control group also demonstrated this pattern on Task 4 (t[19] = 2.94, p < .01).
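A sketch of this dependent-measures comparison (the per-participant mean reaction times are simulated for illustration):

```python
# Paired t test: within each participant, compare mean RT on incorrect
# trials against mean RT on correct trials (n = 20, so df = 19).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mean_rt_correct = rng.normal(3000, 400, size=20)                # ms
mean_rt_incorrect = mean_rt_correct + rng.normal(800, 300, 20)  # slower

t, p = stats.ttest_rel(mean_rt_incorrect, mean_rt_correct)
print(f"t(19) = {t:.2f}, p = {p:.4f}")
```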
Pearson's correlation coefficients were calculated both within groups and for the sample as a whole (a Bonferroni correction was applied). Neither the AS participants nor the controls demonstrated significant correlations between their accuracy/speed on any task and their measured variables (i.e., age, IQ, DTVP-A scores, KADI scores). Participants' ages, scores on the psychometric measures, and performance on each task were then collapsed across the two groups. However, subsequent analysis failed to reveal any significant correlations.
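A sketch of one such correlational test (the vectors and the family size entering the Bonferroni correction are illustrative, as the paper does not report the exact number of comparisons):

```python
# Pearson's r between task accuracy and a participant variable, evaluated
# against a Bonferroni-adjusted alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
accuracy = rng.uniform(0.6, 1.0, size=40)  # proportion correct, whole sample
age = rng.normal(16.0, 1.5, size=40)       # years

n_comparisons = 32                 # illustrative family size
alpha = 0.05 / n_comparisons       # Bonferroni-adjusted threshold

r, p = stats.pearsonr(accuracy, age)
print(f"r = {r:.2f}, p = {p:.3f}, significant: {p < alpha}")
```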

Discussion
No significant differences in overall performance on any of
the four tasks (in terms of either accuracy or reaction times)
were observed between the AS and control groups. However, there were significant between-group effects for specific emotional stimuli on two of the tasks: the AS
participants performed less accurately than the controls at
verbally labeling fear body postures and took significantly
longer to respond to angry stimuli on the matching of body
postures task. These findings will now be discussed in
detail.
While the AS group made significantly more mistakes
than the controls verbally labeling fear body postures, no
significant difference between the groups was observed
regarding their accuracy of matching these same stimuli.
This suggests that while AS individuals can identify that fear body postures encode a specific mental state, they are unsure as to what specific emotion they encode. Further bolstering this hypothesis is the fact that the AS group did not demonstrate more clicks on the "None of the above" label in response to fear body postures on Task 3 relative to Task 1.
However, other potential explanations for this pattern of
results must also be considered. Alexithymic tendencies on
the part of the AS participants may account for their relatively poor verbal labeling of fear body postures. Alexithymia, which shows significant clinical overlap with AS,
is characterized by an inability to express emotions verbally, minimal fantasy life, and difficulty distinguishing
between emotional states and bodily sensations (Fitzgerald
and Bellgrove 2006). Individuals with clinically significant
levels of alexithymia have difficulty verbally describing
emotional expressions, and an inability to use emotional
words in the appropriate context (Pandey and Mandal
1997). As such, the impairment in decoding other people's
mental states typically found in AS individuals may result
from a specific problem with verbally identifying these
states rather than an awareness that certain expressions
represent a distinct attitude but uncertainty as to which
attitude they encode.
In relation to performance on the two matching tasks,
the possibility must also be considered that participants
used a feature-matching strategy to match stimuli. In other
words, as stimuli representing the same emotion were often
characterized by similar nonverbal cues/featural combinations, participants may have matched stimuli based on
these similarities. In this scenario, good accuracy on the
matching task may not necessarily indicate recognition that
certain nonverbal cues encode distinct mental states.
As noted in the introduction, other researchers have also
found that autistic individuals have particular difficulty
identifying fear from bodily movements/postures (e.g.,
Philip et al. 2010). In a study using fMRI technology, AS/
HFA participants, in contrast to their neurotypical peers,
did not demonstrate a differential pattern of brain activation while viewing human bodies expressing fear compared
to emotionally neutral bodies (Hadjikhani et al. 2009).
Impaired decoding of fearful facial expressions by those
with AS/HFA has also been reported (e.g., Corden et al.
2008; Howard et al. 2000; Humphreys et al. 2007; Pelphrey
et al. 2002; Wallace et al. 2008). However, these studies
typically relied on verbal labeling response formats, and
therefore the findings only demonstrate that autistic individuals are impaired at verbally labeling fearful expressions. Furthermore, the researchers who obtained these
findings generally proposed that the fear-recognition
impairment results from a failure of autistic individuals to
spend sufficient time fixating on the eye area of faces.
However, in the current study the facial area of the body
posture stimulus figures was removed, and thus failure to
glean emotional information from their eyes cannot explain
the between-group effect in verbal labeling of fear body
postures. Interestingly, the groups did not differ significantly in their ability to either match or verbally label fear facial expressions. Taken together, these findings
cast serious doubt on the theory that impaired fear recognition in AS stems (at least solely) from an aversion to
looking at the eye area of faces.
It has been variously reported that autistic individuals
are significantly worse than neurotypical controls at
decoding nonverbal expressions of sadness (Boraston et al.
2007; Philip et al. 2010; Wallace et al. 2008), surprise (Baron-Cohen et al. 1993), anger (Bal et al. 2010; Philip
et al. 2010; Wright et al. 2008), and disgust (Wallace et al.
2008). However, the findings of the present experiment do
not support the hypothesis that AS individuals are significantly less accurate than typically developing individuals at
identifying any of these emotions, at least from body
postures or facial expressions. Other studies have likewise
found no significant differences between AS and control
participants in the decoding (that is, verbal identification)
of affective facial expressions (e.g., Adolphs et al. 2001;
Baron-Cohen et al. 1997). Possibly, these inconsistent
findings may result from different stimuli used across the
experiments and/or may reflect heterogeneity of emotional
decoding skills within the AS population. Also of note is
the fact that in the present research accuracy of responses
for certain affective stimuli was at ceiling. This was most
evident for happy stimuli on all four tasks. Consequently,
the AS and control participants may have differed significantly in their ability to decode emotional stimuli in
addition to fear body postures, but such effects went undetected.
Concerning reaction times, the groups did not differ
significantly in overall time taken to respond on any of the
four tasks. This is somewhat surprising, given that in a
previous experiment involving the same participants, the
AS group demonstrated significantly slower overall
response times on the two experimental tasks (Doody and
Bull 2011). However, the stimuli used in that experiment
were body posture representations of the attitudes of
boredom, interest, and disagreement. Possibly, stimuli
representing the Ekman emotions can be processed more
efficiently by AS individuals than can postural displays of
the above-mentioned attitudes. Nevertheless, in the current research the AS group did take significantly longer than the controls to respond to a specific emotional stimulus type, namely anger body postures on the matching task.
Other researchers have also reported slowed response
times by AS/HFA individuals to a range of emotional faces
(e.g., Bal et al. 2010) and body postures (e.g., Zamagni
et al. 2011). Whereas neurotypical individuals tend to
visually process both facial and body posture stimuli in a
global/holistic way, AS individuals are believed to process
faces in a piecemeal way (Katsyri et al. 2008; Rondan and
Deruelle 2007) and demonstrate a similar difficulty with the configural processing of body postures (Reed et al.


2007). Arguably, such atypical visual processing on the
part of AS individuals leads to delayed responses to these
stimuli (Behrmann et al. 2006). It is not entirely clear why
the participant groups in the present study only differed
significantly in reaction times to one specific stimulus type.
However, given the relatively small data set, it is possible
that the groups also differed in time taken to respond to
other emotional stimuli, and that these differences would
have been observably significant with a larger sample. It is worth noting that between-group differences in response times for several other emotional stimuli (in addition to anger body postures) were significant at the α = .05 level.
Both the AS and control groups took significantly longer
to respond to stimuli that they got incorrect than stimuli
they got correct on the matching of body postures and
verbal labeling of body postures tasks. The control group in
the current study further demonstrated this pattern on the
verbal labeling of facial expressions task. Such negative
correlations between decoding accuracy and reaction times
have also been observed in other studies involving both
AS/HFA and neurotypical participants (e.g., Doody and
Bull 2011; Palermo and Coltheart 2004; Tracy et al. 2011).
Presumably, when the correct answer was obvious to participants they responded quickly, but took longer to
respond when the correct answer was not so apparent.
Finally, although Attention Deficit Hyperactivity Disorder (ADHD) has also been shown to impact on emotional
decoding ability (e.g., Kats-Gold et al. 2007; Yuill and
Lyon 2007), in the current research no significant differences in either accuracy or response times on any of the
four tasks were observed between the seven AS participants diagnosed with ADHD and the remainder of the AS
sample. It is worth noting that there are differing clinical
procedures in the Republic of Ireland regarding the diagnosis of AS with comorbid ADHD. Some diagnosticians
argue that attention/concentration problems are a core
feature of AS itself and thus do not in themselves warrant a
separate diagnosis, while others believe that these significant attentional problems can and should be diagnosed as
comorbid ADHD. As such, it is unclear whether the AS
participants with and without the ADHD diagnosis experienced comparable levels of ADHD symptomology.
In summary, the AS group made more mistakes than the
controls at verbally labeling, but not matching, fear body
postures. This implies that while the AS participants knew
that these fear stimuli represented a distinct mental state,
they were unsure as to which specific mental state. However, there are other potential explanations for this pattern
of results. It is possible that participants used a feature-based matching strategy on the matching tasks, and that AS
participants experienced a specific problem with verbally
identifying emotions, rather than an unawareness of what


emotion the fear body posture stimuli represented. The


only between-group effect for response times was observed
with the matching of anger body postures; the AS group
taking longer to respond than the controls. The groups did
not differ significantly regarding their decoding accuracy
or response times to any other affective stimuli, in contrast
to the findings of previous research. Arguably, the discrepant findings across the emotion-processing literature reflect individual differences within higher-functioning autistic individuals and emphasize the variability in skills across the autism spectrum rather than a distinct impairment that is seen universally.

References
Adolphs, R., Sears, L., & Piven, J. (2001). Abnormal processing of social information from faces in autism. Journal of Cognitive Neuroscience, 13, 232–240.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: American Psychiatric Association Press.
Atkinson, A. P. (2009). Impaired recognition of emotions from body movements is associated with elevated motion coherence thresholds in autism spectrum disorders. Neuropsychologia, 47, 3023–3029.
Bal, E., Harden, E., Lamb, D., Van Hecke, A. V., Denver, J. W., & Porges, S. W. (2010). Emotion recognition in children with autism spectrum disorders: Relations to eye gaze and autonomic state. Journal of Autism and Developmental Disorders, 40, 358–370.
Baron-Cohen, S., Jolliffe, T., Mortimore, C., & Robertson, M. (1997). Another advanced test of theory of mind: Evidence from very high functioning adults with autism or Asperger syndrome. Journal of Child Psychology and Psychiatry, 38, 813–822.
Baron-Cohen, S., Spitz, A., & Cross, P. (1993). Can children with autism recognize surprise? Cognition and Emotion, 7, 507–516.
Behrmann, M., Avidan, G., Leonard, G. L., Kimchi, R., Luna, B., Humphreys, K., et al. (2006). Configural processing in autism and its relationship to face processing. Neuropsychologia, 44, 110–129.
Blake, R., Turner, L. M., Smoski, M. J., Pozdol, S. L., & Stone, W. L. (2003). Visual recognition of biological motion is impaired in children with autism. Psychological Science, 14, 151–157.
Boraston, Z., Blakemore, S. J., Chilvers, R., & Skuse, D. (2007). Impaired sadness recognition is linked to social interaction deficit in autism. Neuropsychologia, 45, 1501–1510.
Corden, B., Chilvers, R., & Skuse, D. (2008). Avoidance of emotionally arousing stimuli predicts social-perceptual impairment in Asperger's syndrome. Neuropsychologia, 46, 137–147.
Doody, J. P., & Bull, P. (2011). Asperger's syndrome and the decoding of boredom, interest, and disagreement from body posture. Journal of Nonverbal Behavior, 35, 87–100.
Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. Cole (Ed.), Nebraska Symposium on Motivation 1971 (Vol. 19, pp. 207–283). Lincoln, NE: University of Nebraska Press.
Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6(3), 169–200.
Ekman, P. (1994). Strong evidence for universals in facial expressions: A reply to Russell's mistaken critique. Psychological Bulletin, 115(2), 268–287.

Ekman, P., & Friesen, W. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for research and an integration of findings. New York: Pergamon Press.
Fitzgerald, M., & Bellgrove, M. A. (2006). Letter to the editor: The overlap between alexithymia and Asperger's syndrome. Journal of Autism and Developmental Disorders, 36(4), 573–576.
Frank, M. G., & Stennett, J. (2001). The forced-choice paradigm and the perception of facial expressions of emotion. Journal of Personality and Social Psychology, 80(1), 75–85.
Gillberg, C., Cederlund, M., Lamberg, K., & Zeijlon, L. (2006). The autism epidemic. The registered prevalence of autism in a Swedish urban area. Journal of Autism and Developmental Disorders, 36, 429–434.
Golan, O., Baron-Cohen, S., & Hill, J. (2006). The Cambridge mindreading (CAM) face-voice battery: Testing complex emotion recognition in adults with and without Asperger syndrome. Journal of Autism and Developmental Disorders, 36(2), 169–183.
Hadjikhani, N., Joseph, R. M., Manoach, D. S., Naik, P., Snyder, J., Dominick, K., et al. (2009). Body expressions of emotion do not trigger fear contagion in autism spectrum disorder. Social Cognitive and Affective Neuroscience, 4(1), 70–78.
Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychology Review, 20, 290–322.
Howard, M. A., Cowell, P. E., Boucher, J., Broks, P., Mayes, A., Farrant, A., et al. (2000). Convergent neuroanatomical and behavioural evidence of an amygdala hypothesis of autism. NeuroReport, 11(13), 2931–2935.
Hubert, B., Wicker, B., Moore, D. G., Monfardini, E., Duverger, H., Da Fonseca, D., et al. (2007). Recognition of emotional and non-emotional biological motion in individuals with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37, 1386–1392.
Humphreys, K., Minshew, N., Leonard, G. L., & Behrmann, M. (2007). A fine-grained analysis of facial expression processing in autism. Neuropsychologia, 45, 685–695.
Kats-Gold, I., Besser, A., & Priel, B. (2007). The role of simple emotion recognition skills among school aged boys at risk of ADHD. Journal of Abnormal Child Psychology, 35, 363–378.
Katsyri, J., Saalasti, S., Tiippana, K., von Wendt, L., & Sams, M. (2008). Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome. Neuropsychologia, 46, 1888–1897.
Kleinman, J., Marciano, P. L., & Ault, R. L. (2001). Advanced theory of mind in high-functioning adults with autism. Journal of Autism and Developmental Disorders, 31, 29–36.
Krug, D. A., & Arick, J. R. (2003). Krug Asperger's disorder index. Austin: Pro-Ed Inc.
Lindner, J. L., & Rosen, L. A. (2006). Decoding of emotion through facial expression, prosody and verbal content in children and adolescents with Asperger's syndrome. Journal of Autism and Developmental Disorders, 36, 769–777.
Matsumoto, D., & Ekman, P. (1988). Japanese and Caucasian Facial Expressions of Emotion (JACFEE). (Available from the Human Interaction Laboratory, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94143).
Mazefsky, C. A., & Oswald, D. P. (2007). Emotion perception in Asperger's syndrome and high-functioning autism: The importance of diagnostic criteria and cue intensity. Journal of Autism and Developmental Disorders, 37, 1086–1095.
Miller, J. N., & Ozonoff, S. (2000). The external validity of Asperger disorder: Lack of evidence from the domain of neuropsychology. Journal of Abnormal Psychology, 109, 227–238.



Murphy, P., Brady, N., Fitzgerald, M., & Troje, N. F. (2009). No evidence for impaired perception of biological motion in adults with autistic spectrum disorders. Neuropsychologia, 47(14), 3225–3235.
Palermo, R., & Coltheart, M. (2004). Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments & Computers, 36(4), 634–638.
Pandey, R., & Mandal, M. K. (1997). Processing of facial expressions of emotion and alexithymia. Journal of Clinical Psychology, 36, 631–633.
Parron, C., Da Fonseca, D., Santos, A., Moore, D. G., Monfardini, E., & Deruelle, C. (2008). Recognition of biological motion in children with autistic spectrum disorders. Autism, 12(3), 261–274.
Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32(4), 249–261.
Philip, R. C. M., Whalley, H., Stanfield, A. C., Sprengelmeyer, R., Santos, I. M., Young, A. W., et al. (2010). Deficits in facial, body movement and vocal emotional processing in autism spectrum disorders. Psychological Medicine, 40(11), 1919–1929.
Pohlig, R. L. (2008, May). Nonverbal sensitivity in individuals with autism spectrum disorders. Poster session presented at the annual International Meeting for Autism Research, London.
Reed, C. L., Beall, P. M., Stone, V. E., Kopelioff, L., Pulham, D. J., & Hepburn, S. L. (2007). Perception of body posture: What individuals with autism spectrum disorder might be missing. Journal of Autism and Developmental Disorders, 37, 1576–1584.
Reynolds, C. R., Pearson, N. A., & Voress, J. K. (2002). Developmental test of visual perception: Adolescent and adult (DTVP-A). Austin, TX: Pro-Ed.
Roid, G. H. (2003). Stanford-Binet intelligence scales (5th ed.). Itasca, IL: Riverside Publishing.
Rondan, C., & Deruelle, C. (2007). Global and configural visual processing in adults with autism and Asperger syndrome. Research in Developmental Disabilities, 28, 197–206.
Russell, J. A. (1993). Forced-choice response format in the study of facial expression. Motivation and Emotion, 17, 41–51.
Russell, J. A. (1994). Is there universal recognition of emotion from facial expressions? A review of the cross-cultural studies. Psychological Bulletin, 115, 102–141.
Rutherford, M. D., Baron-Cohen, S., & Wheelwright, S. (2002). Reading the mind in the voice: A study with normal adults and adults with Asperger's syndrome and high functioning autism. Journal of Autism and Developmental Disorders, 32, 189–194.
Schindler, K., Van Gool, L., & de Gelder, B. (2008). Recognizing emotions expressed by body pose: A biologically inspired neural model. Neural Networks, 21(9), 1238–1246.
Tracy, J. L., Robins, R. W., Schriber, R. A., & Solomon, M. (2011). Is emotion recognition impaired in individuals with autism spectrum disorders? Journal of Autism and Developmental Disorders, 41, 102–109.
Wallace, S., Coleman, M., & Bailey, A. (2008). An investigation of basic facial expression recognition in autism spectrum disorders. Cognition and Emotion, 22(7), 1353–1380.
Winters, A. (2005). Perceptions of body posture and emotion: A question of methodology. The New School Psychology Bulletin, 3, 35–45.
Woodcock, R. W., Schrank, F. A., Mather, N., & McGrew, K. S. (2007). Woodcock-Johnson III tests of achievement, form C/brief battery. Rolling Meadows, IL: Riverside Publishing.
World Health Organization. (1992). The ICD-10 classification of mental and behavioural disorders: Clinical descriptions and diagnostic guidelines. Geneva: Author.

Wright, B., Clarke, N., Jordan, J., Young, A. W., Clarke, P., Miles, J., et al. (2008). Emotion recognition in faces and the use of visual context in young people with high-functioning autism spectrum disorders. Autism, 12(6), 607–626.
Yuill, N., & Lyon, J. (2007). Selective difficulty in recognising facial expressions of emotion in boys with ADHD: General performance impairments or specific problems in social cognition? European Child and Adolescent Psychiatry, 16(3), 398–404.
Zamagni, E., Dolcini, C., Gessaroli, E., Santelli, E., & Frassinetti, F. (2011). Scared by you: Modulation of bodily-self by emotional body-postures in autism. Neuropsychology, 25(2), 270–276.

