Abstract: Automated gender estimation has numerous applications, including video surveillance, human-computer interaction, anonymous customized advertisement, and image retrieval. Most commonly, the underlying algorithms analyze the facial appearance for clues of gender. In this paper, we propose a novel method for gender estimation, which exploits dynamic features gleaned from smiles, and we proceed to show that: a) facial dynamics incorporate clues for gender dimorphism, and b) while for adult individuals appearance features are more accurate than dynamic features, for subjects under 18 years facial dynamics can outperform appearance features. In addition, we fuse the proposed dynamics-based approach with state-of-the-art appearance-based algorithms, predominantly improving the performance of the latter. Results show that smile dynamics carry gender information that is pertinent and complementary to appearance.

Index Terms: Soft biometrics, gender estimation, facial dynamics.

I. INTRODUCTION

Automated gender estimation remains a challenging research area, due to large intra-class variation [51], and also due to challenges concerning illumination, as well as the pose, age and ethnicity of a person. Further, facial expressions have a negative effect on the accuracy of automated gender estimation systems. This is why the majority of previous works have extracted and studied appearance-based features under the simplifying assumption of neutral face expressions, with reasonably good results.

A. Gender and Emotional Expression

Deviating from such works, we here introduce the usage of a set of dynamic facial features for gender estimation. Specifically, we focus on extracting dynamic features from a common facial expression, namely the smile, and study …

… for age estimation, as well as spontaneous vs. posed smile detection based on facial dynamics, see also [25] and [28]. The use of the framework is instrumental in answering the following questions:
• Do facial dynamics provide information about gender in (a) spontaneous-smile and (b) posed-smile video sequences?
• Can facial smile dynamics improve the accuracy of appearance-based gender estimation systems?
• Which gender can pose smiles more genuinely?
Related work on a holistic smile-based gender estimation algorithm can be found in Bilinski et al. [8].

C. Structure of Paper

This work is organized as follows: Section I-D revisits existing works on gender estimation. Section II proceeds to describe the proposed method, elaborating on the individual steps (face detection, landmark localization, selected features, statistics of dynamic features, feature selection, classification, and used appearance features). Section III presents the employed dataset, and the subsequent Section IV depicts and discusses related experimental results. Finally, Section V concludes the paper.

D. Related Work

Gender estimation: Existing introductory overviews of algorithms related to gender estimation include the works of Bekios-Calfa et al. [6], Dantcheva et al. [20], Mäkinen and Raisamo [53], Ng et al. [59], and Ramanathan et al. [63]. Based on these works we can conclude that gender estimation remains a challenging task, which is inherently associated with different biometric modalities including fingerprint, face, iris, voice, body shape, gait, signature, DNA, as well as clothing, hair, jewelry and even body temperature. The forensic literature [51] suggests that the skull, and specifically the chin and the jawbone, as well as the pelvis, are the most significant indicators of the gender of a person; in juveniles, these shape-based features have been recorded to provide classification accuracies of 91%-99%.

Humans are generally quite good at gender recognition from early in life (e.g., [60], [62]), probably reflecting evolutionary adaptation. As pointed out by Edelman et al. [29], humans perform facial-image-based gender classification with an error rate of about 11%, which is commensurate with that of a neural network algorithm performing the same task.

Dynamics have been used in the context of body-based classification of gender. Related cues include body sway, waist-hip ratio, and shoulder-hip ratio (see [57]); for example, females have a distinct waist-to-hip ratio and swing their hips more, whereas males have broader shoulders and swing their shoulders more.

Despite these recent successes, automated gender recognition from biometric data remains a challenge and is impacted by other soft biometrics, for example, age and ethnicity; gender dimorphism is accentuated only in adults, and varies across different ethnicities.

Automated image-based gender estimation from face: In gender estimation from face, feature-based approaches extract and analyze a specific set of discriminative facial features (patches) in order to identify the gender of a person. This is a particularly challenging problem, as is implied by the fact that female and male average facial shapes are generally found to be very similar [49]. Another challenge comes to the fore in unconstrained settings with different covariates, such as illumination, expressions and ethnicity. While in more constrained settings face-based gender estimation has been reported to achieve classification rates of up to 99.3% (see Table I), this performance significantly decreases in more realistic and unconstrained settings.

The majority of gender classification methods contain two steps following face detection, namely feature extraction and pattern classification.

Feature extraction: Notable efforts include the use of SIFT [73], LBP [53], semi-supervised discriminant analysis (SDA) [7], or combinations of different features [35], [77].

Classification: A number of classification methods have been used for gender estimation, and a useful comparative guide of these classification methods can be found in Mäkinen and Raisamo [53]. One interesting conclusion of their work was that image size did not greatly influence the classification rates. The same work also revealed that manual alignment affected the classification rates positively, and that the best classification rates were achieved by SVM.

The area of gender estimation has also received other contributions, such as those that go beyond using static 2D visible-spectrum face images. Interesting related works include the work of Han et al. [38], exploring 3D images, Gonzalez-Sosa et al. [34], studying jointly body and face, and Chen and Ross [17], [67], using near-infrared (NIR) and thermal images for gender classification.

Expression recognition: Automated expression recognition has received increased attention in the past decade, since it is particularly useful in a variety of applications, such as human-computer interaction, surveillance and crowd analytics. The majority of methods aim to classify the 7 universal expressions, namely neutral, happy, surprised, fearful, angry, sad, and disgusted [80], based on the extracted features used. Classical approaches follow Ekman's facial action coding system (FACS) [30], assigning each facial action unit to represent the movement of a specific facial muscle. In this context, the intensity and number of action units have been studied, as well as action unit combinations, towards expression recognition. Interesting work can be found in related survey papers [56], [69], [82] and in a recent expression-recognition challenge study [74]. Latest advances involve deep learning [46], [83].

Inspired by cognitive, psychological and neuroscientific findings, facial dynamics have been used previously towards improving face recognition [37], gender estimation [23], age estimation [26], as well as kinship recognition, reported in a review article by Hadid et al. [36].

TABLE I
Overview of face-based gender classification algorithms. Abbreviations used: principal component analysis (PCA), independent component analysis (ICA), support vector machines (SVM), Gaussian process classifiers (GPC), active appearance model (AAM), local binary pattern (LBP), active shape model (ASM), discrete cosine transform (DCT), semi-supervised discriminant analysis (SDA).

II. DYNAMIC FEATURE EXTRACTION IN SMILE-VIDEO-SEQUENCES

Deviating from the above works on gender estimation, we propose to extract dynamic features in smile-video-sequences.
The general scheme is shown in Fig. 1. Specifically, we focus on the signal displacement of facial landmarks, as we aim to study, among others, the pertinence of different facial landmarks, as well as the pertinence of different statistical properties of facial dynamics (e.g., intensity and duration) in the effort of gender estimation.

Towards the extraction of such dynamic features, we assume a near-frontal pose of the subject and an initial near-neutral expression of the subject (given in the used dataset).

A. Face Detection and Extraction of Facial Landmarks

Firstly, we detect the face using the well-established Viola and Jones algorithm [76]. We here note that the faces were robustly detected in all video sequences and frames. Within the detected face we identify facial feature points corresponding to points in the regions of the eye brows, eyes, nose and lips (see Fig. 5). Specifically, we employ the facial landmark detection algorithm proposed in the work of Asthana et al. [2]. The algorithm is an incremental formulation of the discriminative deformable face alignment framework [79], using a discriminative 3D facial deformable shape model fitted to a 2D image by a cascade of linear regressors. The detector was trained on the 300-W dataset (a dataset introduced in the context of the 300 faces in-the-wild challenge [68]) and detects 49 facial landmarks (see Fig. 5). For the UvA-NEMO dataset the facial landmarks were detected robustly in all video sequences and frames. We use these points to initialize a sparse optical flow tracking algorithm, based on the Kanade-Lucas-Tomasi (KLT) algorithm [52], in the first frame of each video sequence. For the here proposed framework we select a subset of facial points in three different face regions: (a) eye brow region, (b) eye region, (c) mouth region (see Fig. 2), and proceed to extract dynamic features thereof.

B. Extraction of Dynamic Features

We extract dynamic features corresponding to the signal displacement in the facial distances depicted in Table II. We have selected 27 such facial distances based on findings on facial movements during smile expressions [66].
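To make this step concrete, the following minimal sketch uses OpenCV, whose Haar-cascade detector implements the Viola-Jones algorithm [76] and whose calcOpticalFlowPyrLK routine implements pyramidal KLT tracking [52]. The landmark detector of Asthana et al. [2] is not assumed to be available here, so generic corner points inside the detected face box stand in for the 49 landmarks; this is an illustrative assumption, not the authors' exact implementation.

```python
# Sketch: face detection (Viola-Jones) and sparse KLT tracking of facial
# points with OpenCV. The 49-landmark detector of Asthana et al. [2] is
# not publicly bundled, so generic corners inside the face box stand in.
import cv2
import numpy as np

def track_facial_points(video_path, initial_points=None):
    """Detect the face in the first frame, then track points with KLT."""
    cap = cv2.VideoCapture(video_path)
    ok, first = cap.read()
    if not ok:
        raise IOError("Cannot read video: %s" % video_path)
    gray_prev = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

    # Viola-Jones face detection via OpenCV's Haar cascade [76].
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_prev, scaleFactor=1.1,
                                     minNeighbors=5)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face

    if initial_points is None:
        # Placeholder for the 49 facial landmarks: strong corners
        # restricted to the detected face region.
        mask = np.zeros_like(gray_prev)
        mask[y:y + h, x:x + w] = 255
        initial_points = cv2.goodFeaturesToTrack(
            gray_prev, maxCorners=49, qualityLevel=0.01,
            minDistance=5, mask=mask)
    pts = initial_points.astype(np.float32).reshape(-1, 1, 2)

    trajectories = [pts.reshape(-1, 2).copy()]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Kanade-Lucas-Tomasi sparse optical flow [52]; in
        # practice, points with status == 0 (lost) would be handled.
        pts, status, _err = cv2.calcOpticalFlowPyrLK(gray_prev, gray,
                                                     pts, None)
        trajectories.append(pts.reshape(-1, 2).copy())
        gray_prev = gray
    cap.release()
    return np.stack(trajectories)  # shape: (num_frames, num_points, 2)
```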
TABLE II
Extracted signal-displacement functions contributing to dynamic features. Distances are measured between facial landmarks; l_i denotes the i-th landmark point, as illustrated in Fig. 2.
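As an illustration of how such displacement signals can be derived from the tracked trajectories of Section II-A, the sketch below measures Euclidean distances between landmark pairs per frame. The index pairs are hypothetical placeholders for the 27 distances of Table II (which follow the landmark numbering of Fig. 2, not reproduced here), and subtracting the first-frame value relies on the dataset's near-neutral initial expression, which is our reading of the setup rather than a stated normalization.

```python
# Sketch: signal-displacement functions from tracked landmark trajectories.
# `PAIRS` is a placeholder; the actual 27 facial distances follow Table II
# and the landmark indexing of Fig. 2, which is not reproduced here.
import numpy as np

PAIRS = [(31, 37), (34, 40), (20, 25)]  # hypothetical (l_i, l_j) index pairs

def distance_signals(trajectories, pairs=PAIRS):
    """trajectories: (num_frames, num_points, 2) array of tracked landmarks.
    Returns one displacement signal D(t) per landmark pair, expressed
    relative to the first (near-neutral) frame so that D(0) = 0."""
    signals = []
    for i, j in pairs:
        d = np.linalg.norm(trajectories[:, i] - trajectories[:, j], axis=1)
        signals.append(d - d[0])  # displacement w.r.t. neutral frame
    return np.stack(signals)  # shape: (num_pairs, num_frames)
```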
TABLE III
Extracted dynamic feature statistics. D denotes the respective dynamic feature, V(t) = dD/dt denotes the speed, and A = dV/dt = d²D/dt² denotes the acceleration; the number of frames and the frame rate of the video sequence enter the statistics as well. The superscript + denotes the onset, the superscript a the apex, and the superscript − the offset.
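A minimal sketch of how the statistics of Table III can be computed from one displacement signal follows. Speed and acceleration are finite-difference approximations of V(t) = dD/dt and A(t) = dV/dt, scaled by the frame rate; segmenting onset, apex and offset around the global extremum of D is our simplifying assumption, since the excerpt does not specify the segmentation rule.

```python
# Sketch: dynamic feature statistics in the spirit of Table III. The
# onset/apex/offset split (apex = frames near the global maximum) is a
# heuristic assumption, not the paper's stated segmentation rule.
import numpy as np

def dynamic_statistics(D, fps):
    """D: 1-D displacement signal of one facial distance; fps: frame rate."""
    V = np.gradient(D) * fps            # speed, V(t) = dD/dt
    A = np.gradient(V) * fps            # acceleration, A(t) = dV/dt
    t_apex = int(np.argmax(np.abs(D)))  # frame of maximum displacement
    apex = np.abs(D) >= 0.9 * np.abs(D[t_apex])   # assumed apex plateau
    phases = {"onset": slice(0, t_apex),          # superscript +
              "offset": slice(t_apex, len(D))}    # superscript -
    stats = {"max_amplitude": float(np.max(np.abs(D))),
             "mean_amplitude_apex": float(np.mean(np.abs(D[apex])))}
    for name, sl in phases.items():
        stats["mean_speed_" + name] = float(np.mean(np.abs(V[sl])))
        stats["mean_accel_" + name] = float(np.mean(np.abs(A[sl])))
    return stats
```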
D. Classification

A pattern classifier, trained on labeled data, is used to classify the feature vector into one of two classes: male or female.

We utilized Support Vector Machines (SVM) [14], AdaBoost [6] and Bagged Trees [9] in this work. For SVM the Gaussian RBF kernel is used. The optimum values for C and the kernel parameter are obtained by a grid search of the parameter space based on the training set.
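A sketch of the described grid search, assuming scikit-learn: the feature matrix X_train (one row of dynamic-feature statistics per video) and the binary gender labels y_train are placeholders, and the grid values are illustrative rather than those used in the paper.

```python
# Sketch: RBF-kernel SVM with grid search over C and the kernel parameter
# gamma, as described above. Grid values are illustrative assumptions.
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_gender_svm(X_train, y_train):
    """Grid search of C and gamma, performed on the training set only."""
    param_grid = {"svc__C": [0.1, 1, 10, 100],
                  "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    search = GridSearchCV(model, param_grid, cv=5)
    search.fit(X_train, y_train)  # labels: 0 = male, 1 = female (assumed)
    return search.best_estimator_
```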
TABLE IV
Spontaneous smile: true gender classification rates. Age given in years.
TABLE V
Spontaneous smile in age category > 19: confusion matrix for males and females for (a) appearance features #1 (OpenBR), denoted as App. 1; (b) appearance features #2 (how-old.net), denoted as App. 2; (c) appearance features #3 (COTS), denoted as App. 3; (d) dynamic features, denoted as Dyn.; (e) dynamic and appearance features #1, denoted as Dyn. + App. 1; (f) dynamic and appearance features #2, denoted as Dyn. + App. 2; (g) dynamic and appearance features #3, denoted as Dyn. + App. 3.
TABLE VII
Posed smile in age category > 19: confusion matrix for males and females for (a) appearance features #1 (OpenBR), denoted as App. 1; (b) appearance features #2 (how-old.net), denoted as App. 2; (c) appearance features #3 (COTS), denoted as App. 3; (d) dynamic features, denoted as Dyn.; (e) dynamic and appearance features #1, denoted as Dyn. + App. 1; (f) dynamic and appearance features #2, denoted as Dyn. + App. 2; (g) dynamic and appearance features #3, denoted as Dyn. + App. 3.
TABLE VIII
Most discriminative dynamic features for age < 20. TGCR: true gender classification rate.

TABLE IX
Most discriminative dynamic features for age > 19. TGCR: true gender classification rate.

… subset size, which contributes to larger training sets in the case of dynamics-based gender classification, as well as in the fusion of appearance- and dynamics-based features. Nevertheless, the results suggest that the dynamics of posed smiles carry significant cues on gender, similarly to spontaneous smiles. The related confusion matrices are shown in Table VII.
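The excerpt does not detail how the dynamic and appearance features are combined for the Dyn. + App. results; a simple feature-level concatenation before classification, sketched below, is one plausible scheme under that assumption, not the authors' exact fusion rule.

```python
# Sketch: fusing dynamic and appearance features. Feature-level
# concatenation is an assumed scheme; the paper's fusion rule is not
# specified in this excerpt.
import numpy as np

def fuse_features(X_dynamic, X_appearance):
    """Concatenate per-video dynamic and appearance feature vectors."""
    return np.concatenate([X_dynamic, X_appearance], axis=1)

# Usage: clf = train_gender_svm(fuse_features(Xd_train, Xa_train), y_train)
```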
This result is in agreement with psychological findings showing that females are more accurate expressers of emotion, both when posing deliberately and when observed unobtrusively [10], hinting that posing a smile carries gender-specific cues.
Fig. 7. Boxplots of the most discriminative features in age category > 19 years. Females tended to show a longer Mean Amplitude Apex of mouth opening, a higher Maximum Amplitude on the right side of the mouth, as well as a shorter Mean Speed Offset on the left side of the mouth, than males. Further, the Mean Acceleration Offset of the mouth length is shorter for females than for males. (a) D11: Mean Amplitude Apex. (b) D8: Maximum Amplitude. (c) D9: Mean Speed Offset. (d) D5: Mean Acceleration Offset.
[19] A. Dantcheva, J.-L. Dugelay, and P. Elia, "Soft biometrics systems: Reliability and asymptotic bounds," in Proc. IEEE Int. Conf. Biometrics: Theory, Appl. Syst. (BTAS), Sep. 2010, pp. 1-6.
[20] A. Dantcheva, P. Elia, and A. Ross, "What else does your biometric data reveal? A survey on soft biometrics," IEEE Trans. Inf. Forensics Security, vol. 11, no. 3, pp. 441-467, Mar. 2016.
[21] A. Dantcheva, A. Singh, P. Elia, and J.-L. Dugelay, "Search pruning in video surveillance systems: Efficiency-reliability tradeoff," in Proc. Int. Conf. Comput. Vis. Workshops, Nov. 2011, pp. 1356-1363.
[22] A. Dantcheva, C. Velardo, A. D'Angelo, and J.-L. Dugelay, "Bag of soft biometrics for person identification: New trends and challenges," Multimedia Tools Appl., vol. 51, no. 2, pp. 739-777, Jan. 2011.
[23] M. Demirkus, M. Toews, J. J. Clark, and T. Arbel, "Gender classification from unconstrained video sequences," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jun. 2010, pp. 55-62.
[24] F. M. Deutsch, D. LeBaron, and M. M. Fryer, "What is in a smile?" Psychol. Women Quart., vol. 11, no. 3, pp. 341-352, Sep. 1987.
[25] H. Dibeklioğlu, F. Alnajar, A. A. Salah, and T. Gevers, "Combining facial dynamics with appearance for age estimation," IEEE Trans. Image Process., vol. 24, no. 6, pp. 1928-1943, Jun. 2015.
[26] H. Dibeklioğlu, T. Gevers, A. A. Salah, and R. Valenti, "A smile can reveal your age: Enabling facial dynamics in age estimation," in Proc. 20th ACM Int. Conf. Multimedia, Nov. 2012, pp. 209-218.
[27] H. Dibeklioğlu, A. A. Salah, and T. Gevers, "Are you really smiling at me? Spontaneous versus posed enjoyment smiles," in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2012, pp. 525-538.
[28] H. Dibeklioğlu, A. A. Salah, and T. Gevers, "Recognition of genuine smiles," IEEE Trans. Multimedia, vol. 17, no. 3, pp. 279-294, Mar. 2015.
[29] B. Edelman, D. Valentin, and H. Abdi, "Sex classification of face areas: How well can a linear neural network predict human performance?" Biol. Syst., vol. 6, no. 3, p. 241, 1996.
[30] P. Ekman, "Facial expression and emotion," Amer. Psychol., vol. 48, no. 4, pp. 384-392, 1993.
[31] P. Ekman and W. V. Friesen, "Felt, false, and miserable smiles," J. Nonverbal Behav., vol. 6, no. 4, pp. 238-252, Jun. 1982.
[32] A. Fogel, S. Toda, and M. Kawai, "Mother-infant face-to-face interaction in Japan and the United States: A laboratory comparison using 3-month-old infants," Develop. Psychol., vol. 24, no. 3, pp. 398-406, 1988.
[33] W. Gao and H. Ai, "Face gender classification on consumer images in a multiethnic environment," in Proc. IEEE Int. Conf. Biometrics, Jun. 2009, pp. 169-178.
[34] E. Gonzalez-Sosa, A. Dantcheva, R. Vera-Rodriguez, J.-L. Dugelay, F. Brémond, and J. Fierrez, "Image-based gender estimation from body and face across distances," in Proc. Int. Conf. Pattern Recognit. (ICPR), Dec. 2016, pp. 1-6.
[35] G. Guo, C. R. Dyer, Y. Fu, and T. S. Huang, "Is gender recognition affected by age?" in Proc. Int. Conf. Comput. Vis. Workshops, Sep. 2009, pp. 2032-2039.
[36] A. Hadid, J.-L. Dugelay, and M. Pietikäinen, "On the use of dynamic features in face biometrics: Recent advances and challenges," Signal, Image Video Process., vol. 5, no. 4, pp. 495-506, 2011.
[37] A. Hadid and M. Pietikäinen, "Combining appearance and motion for face and gender recognition from videos," Pattern Recognit., vol. 42, no. 11, pp. 2818-2827, 2009.
[38] X. Han, H. Ugail, and I. Palmer, "Gender classification based on 3D face geometry features using SVM," in Proc. Int. Conf. CyberWorlds (CW), Sep. 2009, pp. 114-118.
[39] U. Hess, R. Adams, Jr., and R. Kleck, "Who may frown and who should smile? Dominance, affiliation, and the display of happiness and anger," Cognition Emotion, vol. 19, no. 4, pp. 515-536, 2005.
[40] U. Hess, R. B. Adams, Jr., and R. E. Kleck, "Facial appearance, gender, and emotion expression," Emotion, vol. 4, no. 4, pp. 378-388, 2004.
[41] U. Hess, R. B. Adams, Jr., and R. E. Kleck, "When two do the same, it might not mean the same: The perception of emotional expressions shown by men and women," in Group Dynamics and Emotional Expression, U. Hess and P. Philippot, Eds. New York, NY, USA: Cambridge University Press, 2007, pp. 33-50.
[42] U. Hess and P. Thibault, "Why the same expression may not mean the same when shown on different faces or seen by different people," in Affective Information Processing. London, U.K.: Springer, 2009, pp. 145-158.
[43] S. Y. D. Hu, B. Jou, A. Jaech, and M. Savvides, "Fusion of region-based representations for gender identification," in Proc. Int. Joint Conf. Biometrics, Oct. 2011, pp. 1-7.
[44] A. K. Jain, S. C. Dass, and K. Nandakumar, "Can soft biometric traits assist user recognition?" Proc. SPIE, vol. 5404, pp. 561-572, Aug. 2004.
[45] S. Jia and N. Cristianini, "Learning to classify gender from four million images," Pattern Recognit. Lett., vol. 58, pp. 35-41, Jun. 2015.
[46] F. Juefei-Xu, E. Verma, P. Goel, A. Cherodian, and M. Savvides, "DeepGender: Occlusion and low resolution robust facial gender classification via progressively trained convolutional neural networks with attention," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) Workshops, Jun. 2016, pp. 68-77.
[47] B. F. Klare, M. J. Burge, J. C. Klontz, R. W. V. Bruegge, and A. K. Jain, "Face recognition performance: Role of demographic information," IEEE Trans. Inf. Forensics Security, vol. 7, no. 6, pp. 1789-1801, Dec. 2012.
[48] J. C. Klontz, B. F. Klare, S. Klum, A. K. Jain, and M. J. Burge, "Open source biometric recognition," in Proc. IEEE 6th Int. Conf. Biometrics: Theory, Appl. Syst. (BTAS), Sep. 2013, pp. 1-8.
[49] J. H. Langlois and L. A. Roggman, "Attractive faces are only average," Psychol. Sci., vol. 1, no. 2, pp. 115-121, Mar. 1990.
[50] M.-F. Liébart et al., "Smile line and periodontium visibility," Perio, vol. 1, no. 1, pp. 17-25, 2004.
[51] S. R. Loth and M. Y. Iscan, "Sex determination," in Encyclopedia of Forensic Sciences, vol. 1. San Diego, CA, USA: Academic, 2000.
[52] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. 7th Int. Joint Conf. Artif. Intell. (IJCAI), vol. 2, Vancouver, BC, Canada, 1981, pp. 674-679.
[53] E. Mäkinen and R. Raisamo, "Evaluation of gender classification methods with automatically detected and aligned faces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3, pp. 541-547, Mar. 2008.
[54] C. Z. Malatesta et al., "The development of emotion expression during the first two years of life," Monogr. Soc. Res. Child Develop., vol. 54, nos. 1-2, pp. 105-136, 1989.
[55] C. Z. Malatesta and J. M. Haviland, "Learning display rules: The socialization of emotion expression in infancy," Child Develop., vol. 53, no. 4, pp. 991-1003, Aug. 1982.
[56] A. Martinez and S. Du, "A model of the perception of facial expressions of emotion by humans: Research overview and perspectives," J. Mach. Learn. Res., vol. 13, no. 1, pp. 1589-1608, Jan. 2012.
[57] G. Mather and L. Murdoch, "Gender discrimination in biological motion displays based on dynamic cues," Biol. Sci. B, vol. 258, no. 1353, pp. 273-279, 1994.
[58] M. Nazir, M. Ishtiaq, A. Batool, M. A. Jaffar, and A. M. Mirza, "Feature selection for efficient gender classification," in Proc. WSEAS Int. Conf. Neural Netw., Evol. Comput. Fuzzy Syst., Jun. 2010, pp. 70-75.
[59] C. B. Ng, Y. H. Tay, and B.-M. Goi, "Vision-based human gender recognition: A survey," in Proc. Pacific Rim Int. Conf. Artif. Intell., vol. 7458, 2012, pp. 335-346.
[60] A. O'Toole, A. Peterson, and K. A. Deffenbacher, "An other-race effect for categorizing faces by sex," Perception, vol. 25, no. 6, pp. 669-676, 1996.
[61] H. Peng, F. Long, and C. Ding, "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226-1238, Aug. 2005.
[62] P. C. Quinn, J. Yahr, A. Kuhn, A. M. Slater, and O. Pascalis, "Representation of the gender of human faces by infants: A preference for female," Perception, vol. 31, no. 9, pp. 1109-1121, 2002.
[63] N. Ramanathan, R. Chellappa, and S. Biswas, "Age progression in human faces: A survey," J. Vis. Lang. Comput., vol. 15, pp. 3349-3361, 2009.
[64] E. Ramón-Balmaseda, J. Lorenzo-Navarro, and M. Castrillón-Santana, "Gender classification in large databases," in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Heidelberg, Germany: Springer, 2012, pp. 74-81.
[65] D. Reid, S. Samangooei, C. Chen, M. Nixon, and A. Ross, "Soft biometrics for surveillance: An overview," in Handbook of Statistics, vol. 31, 2013.
[66] C. K. Richardson, D. Bowers, R. M. Bauer, K. M. Heilman, and C. M. Leonard, "Digitizing the moving face during dynamic displays of emotion," Neuropsychologia, vol. 38, no. 7, pp. 1028-1039, Jun. 2000.
[67] A. Ross and C. Chen, "Can gender be predicted from near-infrared face images?" in Proc. Int. Conf. Image Anal. Recognit. (ICIAR), Jun. 2011, pp. 120-129.
[68] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 faces in-the-wild challenge: The first facial landmark localization challenge," in Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), Jun. 2013, pp. 397-403.
[69] G. Sandbach, S. Zafeiriou, M. Pantic, and L. Yin, "Static and dynamic 3D facial expression recognition: A comprehensive survey," Image Vis. Comput., vol. 30, no. 10, pp. 683-697, Oct. 2012.
[70] C. Shan, "Gender classification on real-life faces," in Proc. Int. Conf. Adv. Concepts Intell. Vis. Syst., 2010, pp. 323-331.
[71] C. Shan, "Learning local binary patterns for gender classification on real-world face images," Pattern Recognit. Lett., vol. 33, no. 4, pp. 431-437, Mar. 2012.
[72] R. W. Simon and L. E. Nath, "Gender and emotion in the United States: Do men and women differ in self-reports of feelings and expressive behavior?" Amer. J. Sociol., vol. 109, no. 5, pp. 1137-1176, Mar. 2004.
[73] M. Toews and T. Arbel, "Detection, localization, and sex classification of faces from arbitrary viewpoints and under occlusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 9, pp. 1567-1581, Sep. 2009.
[74] M. F. Valstar et al., "FERA 2015: Second facial expression recognition and analysis challenge," in Proc. 11th IEEE Int. Conf. Workshops Autom. Face Gesture Recognit. (FG), vol. 6, May 2015, pp. 1-8.
[75] P. F. Velleman, "Definition and comparison of robust nonlinear data smoothing algorithms," J. Amer. Statist. Assoc., vol. 75, no. 371, pp. 609-615, 1980.
[76] P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis., vol. 57, no. 2, pp. 137-154, 2004.
[77] J.-G. Wang, J. Li, W.-Y. Yau, and E. Sung, "Boosting dense SIFT descriptors and shape contexts of face images for gender recognition," in Proc. Comput. Vis. Pattern Recognit. Workshops, Jun. 2010, pp. 96-102.
[78] B. Xia, H. Sun, and B.-L. Lu, "Multi-view gender classification based on local Gabor binary mapping pattern and support vector machines," in Proc. Int. Joint Conf. Neural Netw., Jun. 2008, pp. 3388-3395.
[79] X. Xiong and F. de la Torre, "Supervised descent method and its applications to face alignment," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013, pp. 532-539.
[80] K. Yu, Z. Wang, L. Zhuo, J. Wang, Z. Chi, and D. Feng, "Learning realistic facial expressions from Web images," Pattern Recognit., vol. 46, no. 8, pp. 2144-2155, Aug. 2013.
[81] L. A. Zebrowitz and J. M. Montepare, "Social psychological face perception: Why appearance matters," Social Pers. Psychol. Compass, vol. 2, no. 3, pp. 1497-1517, May 2008.
[82] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: Audio, visual, and spontaneous expressions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 1, pp. 39-58, Jan. 2009.
[83] X. Zhao, X. Shi, and S. Zhang, "Facial expression recognition via deep learning," IETE Tech. Rev., vol. 32, no. 5, pp. 347-355, 2015.

Antitza Dantcheva received the Ph.D. degree in signal and image processing from Eurecom/Télécom ParisTech, France, in 2011. She was a Marie Curie Fellow with INRIA and a Post-Doctoral Fellow with Michigan State University and West Virginia University, USA. She is currently a Post-Doctoral Fellow with the STARS team, INRIA, France. She was a recipient of the Best Presentation Award at ICME 2011, the Best Poster Award at ICB 2013, and the Tabula Rasa Spoofing Award at ICB 2013. Her research interests are in soft biometrics for security and commercial applications, where she has been involved in the retrieval of soft biometrics from images and their corresponding analysis.

François Brémond received the Ph.D. degree in video understanding from INRIA in 1997, and pursued research as a post-doctorate with the University of Southern California on the interpretation of videos taken from an unmanned airborne vehicle (UAV). In 2007, he received the HDR degree (Habilitation à Diriger des Recherches) from Nice University on scene understanding. He created the STARS team on January 1, 2012. He is a Research Director at INRIA Sophia Antipolis, France, and has conducted research in video understanding in Sophia Antipolis since 1993. He has authored or coauthored over 140 scientific papers published in international journals or conferences on video understanding. He is a Handling Editor of MVA and a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, Eurasip, JASP) and conferences (CVPR, ICCV, AVSS, VS, ICVS). He has co-supervised 13 Ph.D. theses. He is an EC INFSO and a French ANR expert for reviewing projects.