
Facial Emotions Based PC Access for the Benefit of Autistic People

Anjana R, Department of Computer Science and Engineering, SVCE, Sriperambudur, anjaravi@gmail.com
Sowmiya M, Department of Computer Science and Engineering, SVCE, Sriperambudur, sowmiyamurthy@gmail.com
Abstract—Face recognition refers to an individual's understanding and interpretation of the human face, especially in relation to the associated information processing in the brain. Autism Spectrum Disorder (ASD) is a comprehensive neural developmental disorder that produces many deficits, including social, communicative and perceptual ones. Individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions. ASD is characterized by atypical patterns of behaviour and impairments in social communication. Traditional intervention approaches often require intensive support and well-trained therapists to address core deficits, and people with ASD have tremendous difficulty accessing such care due to the lack of available trained therapists as well as intervention costs. Thus, a human-facial-emotion-based image processing system is to be developed which processes autistic people's expressions and enables them to access PC applications based on those expressions.

Keywords—Autism Spectrum Disorder (ASD), Facial Emotion Recognition (FER), Features Extraction, PC Access, Sobel Filtering, Weber Law Detector (WLD).

I. INTRODUCTION

Research on Facial Emotion Recognition (FER) is a very challenging field that targets methods to make Human Computer Interaction (HCI) effective. Facial expressions serve not only to express our emotions but also to provide important cues during social interactions, such as our level of interest, our desire to take a speaking turn, and continuous feedback on the understanding of the information conveyed. Identification and classification of emotions by computers has been a research area since Charles Darwin's age. Facial emotion recognition is a field where a lot of work has been done and a lot more remains to be done. Individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions.

Image processing is a rapidly growing area of computer science. Its growth has been fueled by technological advances in digital imaging, computer processors and mass storage devices. There are five stages in any digital image processing application, broadly classified as:

• Image Acquisition
• Image Pre-processing
• Image Segmentation
• Features Extraction
• Classification and Prediction

A. Image Acquisition

The image is captured by a sensor (e.g. a camera) and, if the output of the sensor is not already in digital form, digitized using an analogue-to-digital convertor.

B. Image Pre-processing

Pre-processing methods use a small neighbourhood of a pixel in the input image to compute a new brightness value in the output image; such operations are also called filtration. Pre-processing suppresses information that is not relevant to the specific image processing task and enhances image features that are important for further processing. Typical pre-processing operations include image enhancement, cropping, denoising, etc.

C. Image Segmentation

Segmentation is the process of partitioning an image into non-intersecting regions such that each region is homogeneous and the union of no two adjacent regions is homogeneous. The goal of segmentation is typically to locate certain objects of interest that may be depicted in the image; segmentation can therefore be seen as a computer vision problem. There are four popular segmentation approaches: threshold methods, edge-based methods, region-based methods and connectivity-preserving relaxation methods.

D. Features Extraction

Feature extraction is a special form of dimensionality reduction: it simplifies the amount of resources required to describe a large set of data accurately. Feature extraction methods can be supervised or unsupervised, depending on whether or not class labels are used. Among the unsupervised methods, Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Multi-Dimensional Scaling (MDS) are the most popular. Supervised feature extraction (and feature selection) methods either use information about the current classification performance (wrappers) or use some other, indirect measure (filters).

E. Classification and Prediction

This final step classifies the segmented image under various labels based on the features generated, using various data mining techniques. Classification consists of assigning a class label to a set of unclassified cases. There are two different classes of classification: supervised classification, where the set of possible classes is known in advance, and unsupervised classification, where the set of possible classes is not known and a name can be assigned to each class only after classification. Unsupervised classification is also known as clustering.

The paper is organized as follows. A literature survey is presented in Section II. The proposed work is presented in Section III. The implementation and results are analysed in Section IV, future work is presented in Section V, and we conclude in Section VI.

II. LITERATURE SURVEY

In this section we discuss some methods presently used for human facial emotion recognition; the methods are explained below with their features and drawbacks.

Global face recognition methods are based on statistical approaches wherein features are extracted from the entire face image. Here, every element of the feature vector refers to some global characteristic of the face image. Subspace-based methods, spatial-frequency techniques and moment-based methods are the most frequently used global methods. Among subspace-based methods, Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD), Two-Dimensional PCA (2DPCA) and Two-Dimensional Two-Directional PCA (2D2PCA) are the most widely used and successful face recognition approaches, as discussed by Chandan Singh, Ekta Walia and Neerja Mittal (2011). Spatial-frequency techniques such as the Fourier transform, as discussed by Singh.C and Walia.E (2010), and the Discrete Cosine Transform (DCT), as discussed by Soyel.H and Demirel.H (2010), are useful in extracting facial features at some preferred frequency: the images are first transformed to the frequency domain, and thereafter the coefficients of the low-frequency band are taken as the invariant image features. Furthermore, moment invariants are among the most widely used image descriptors in pattern recognition applications such as character recognition, palm print verification, etc. Some of these moments, as discussed by Neerja and Walia.E (2008), namely the Hu moment invariants and radial moments such as the Zernike moments (ZMs), pseudo Zernike moments (PZMs) and orthogonal Fourier–Mellin moments, possess the property of being invariant to image rotation and can be made invariant to translation and scale after applying geometric transformations.

Although global face recognition techniques remain the most common, much recent work targets local feature extraction methods, as these are considered more robust against variations in facial expression, noise and occlusion. These structure-based approaches deal with local information related to interior parts of the face image, i.e. features of the nose patch, the distance between the eye-centers, the mouth width or height, etc. These methods fall into two categories: firstly, sparse descriptors, which initially divide the face image into patches and then describe their invariant features; and secondly, dense descriptors, which extract local features pixel by pixel over the input image. Amongst the sparse descriptors, the scale-invariant feature transform (SIFT), introduced by Lowe.D.G (2004), has the useful characteristic of being invariant to scale and rotation. Discriminative SIFT (D-SIFT) features were effectively used for facial expression recognition by Teague.M.R (1980), but these are only partially invariant to illumination. The adaptively weighted patch PZM array approach proposed by Turk.M (2001) extracts features from a partitioned face image containing PZM-based information of local areas instead of the global information of the face; this method generates superior results against occlusion, expression and illumination variations even when only one exemplar image per person is available in the database.

The Gabor wavelet is one of the most frequently used and successful local image descriptors in face recognition. It incorporates the characteristics of both the space and frequency domains, and the local features extracted using Gabor filters are invariant to scale and orientation; they are able to detect the edges and lines in face images, as proposed by Wee.C.Y. et al. (2007). Local Binary Patterns (LBP), proposed by Kanan.H.R. et al. (2008), is a widely used dense descriptor due to its simplicity in extracting local features and its excellent performance in various texture and face image analysis tasks. Several variants of LBP are provided in the literature to generate a compact feature set for face analysis and/or to improve classification performance. In addition, some researchers have used this descriptor in a crossway, i.e. either as a dense descriptor or in a sparse way. Recently, the Weber's-law-based dense local descriptor called WLD was established by Chen.J.Shan. et al. (2012), combining powerful image representation ability with the useful characteristics of optimal edge detection and invariance to image illumination and noise variations. Experimental results on texture analysis and face detection prove the robustness of this descriptor to scale, illumination, noise and rotation variations.

III. PROPOSED METHOD

For this study, video containing static images of different human beings with different facial expressions is considered. The flow of the proposed system is represented in figure 1 below:
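As a companion to the survey's discussion of PCA-based global features, the following is a minimal, hypothetical sketch in NumPy (our illustration, not the authors' implementation): each face image is flattened into a row vector, the mean face is subtracted, and the leading right-singular vectors of the centered data matrix give the projection basis.

```python
import numpy as np

def pca_features(face_vectors, n_components):
    """Project flattened face images onto the top principal components.

    face_vectors: (n_samples, n_pixels) array, one flattened image per row.
    Returns (projections, mean_face, basis).
    """
    mean_face = face_vectors.mean(axis=0)
    centered = face_vectors - mean_face
    # SVD of the centered data: the rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # (n_components, n_pixels)
    projections = centered @ basis.T   # (n_samples, n_components)
    return projections, mean_face, basis

# Tiny synthetic example: 6 "images" of 16 pixels each (stand-ins for real faces).
rng = np.random.default_rng(0)
faces = rng.normal(size=(6, 16))
proj, mean_face, basis = pca_features(faces, n_components=3)
```

In a real pipeline, `faces` would hold the flattened grayscale face crops and the projections would feed the classifier; the sketch only shows the dimensionality-reduction step itself.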
Fig 1: Flowchart of the Proposed System

Methodology

1) Image Acquisition

Every image processing application begins with image acquisition. Images of different human beings with different facial expressions are taken as input. All images should be saved in the same format, JPEG. The camera is interfaced with the system, which takes the images captured by the camera as its input.

2) Image Pre-processing

Image pre-processing creates an enhanced image that is more useful or pleasing to a human observer. The pre-processing steps used in the system are: a) filtering of the image and b) skin tone detection.

a) Filtering of the image

Filtering in image processing is a process that cleans up appearances and allows selective highlighting of specific information. A number of techniques are available, and the best option can depend on the image and how it will be used. Both analog and digital image processing may require filtering to yield a usable and attractive end result. There are different types of filters, such as low pass filters, high pass filters, median filters, etc. Low pass filters are smoothening filters, whereas high pass filters are sharpening filters; smoothening filters are used for smoothening the edges, and sharpening filters for enhancing the edges in the image. In our system we use the Prewitt filter. The purpose of smoothing is to reduce noise and improve the visual quality of the image.

b) Skin Tone Detection

A skin detector typically transforms a given pixel into an appropriate colour space and then uses a skin classifier to label the pixel as skin or non-skin. A skin classifier defines a decision boundary of the skin colour class in the colour space. An important challenge in skin detection is to represent the colour in a way that is invariant, or at least insensitive, to changes in illumination. Another challenge comes from the fact that many objects in the real world have skin-tone colour, which causes the skin detector to produce many false detections in the background. The simplest way to decide whether a pixel is skin-coloured or not is to explicitly define a boundary: the RGB matrix of the given colour image is converted into a different colour space to yield a distinguishable region of skin or near-skin tone.

3) Image Post-processing

Once the image has been enhanced and segmented, the interesting part can be extracted and features can be analysed. The feature statistics include mean, variance, range, quantile maximum, quantile minimum and quantile range. The quantile features were used instead of the maximum, minimum and range because they tend to be less noisy. The pitch features were extracted only over the voiced regions of the signal. The video motion-capture derived features occasionally had missing values due to camera error or obstructions; to combat this missing-data problem, the features were extracted only over the recorded data.

4) Feature Extraction

For feature extraction, the Weber's Law Descriptor (WLD), based on Weber's Law, is used. It represents an image as a histogram of differential excitations and gradient orientations, and has several interesting properties such as robustness to noise and illumination changes, elegant detection of edges and powerful image representation.

5) Classification

The feature classification is done by the Fuzzy C-Means (FCM) classifier. Fuzzy clustering plays an important role in solving problems in the areas of pattern recognition and fuzzy model identification. It uses reciprocal distance to compute fuzzy weights. The left eye, right eye and lips of the human being in the image, which are the key features needed to deduce a facial expression, are extracted.

6) Edge Detection

The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point of an input grayscale image. Mathematically, the operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives, one for horizontal changes and one for vertical changes. The result of the Sobel operator is a 2-dimensional map of the gradient at each point.

7) Database Training

Naive Bayesian classifiers have been used for database training along with the C4.5 algorithm. This method is attractive for several reasons. It is very easy to construct, needing no complicated iterative parameter estimation schemes, which means it can be readily applied to huge data sets. It is easy to interpret, so users unskilled in classifier technology can understand why it makes the classifications it makes. It is robust and known to perform quite well.
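The Sobel edge-detection step described above convolves the image with two 3×3 kernels, one for horizontal changes and one for vertical, and combines the results into a gradient-magnitude map. A minimal NumPy/SciPy sketch of that computation (our illustration, assuming a grayscale float image, not the authors' code):

```python
import numpy as np
from scipy.signal import convolve2d

# Standard Sobel kernels: horizontal-change (GX) and vertical-change (GY).
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = GX.T

def sobel_magnitude(gray):
    """Approximate gradient magnitude of a 2-D grayscale image."""
    gx = convolve2d(gray, GX, mode="same", boundary="symm")
    gy = convolve2d(gray, GY, mode="same", boundary="symm")
    return np.hypot(gx, gy)  # 2-D map of the gradient at each point

# Synthetic test image with a vertical edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```

On the synthetic image the magnitude is largest in the columns straddling the edge and zero in the flat regions, which is exactly the "regions of high spatial frequency" behaviour the operator is used for.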
8) Emotion Detection – PC Access

Based on the final feature set analysis, different kinds of facial emotions are identified using classification techniques, and the related PC (Personal Computer) applications are processed. In our application, four different kinds of emotions are analyzed, i.e. Smile, Surprise, Sad and Neutral, which will help autistic people to survive on their own.

IV. IMPLEMENTATION AND RESULTS

The screenshots of the application after each operation are as follows. Figures 2 and 3 show the output of the application after the image pre-processing steps, filtering and skin tone detection, respectively.

1) Prewitt Filtering

Fig 2: Application of Prewitt filter

2) Skin Tone Detection

Fig 3: Skin Tone Detection

3) Face Detection

Fig 4: Facial region detection

4) Features Extraction

Fig 5: Extraction of key features

Figures 4 and 5 show the output of the face detection and features extraction operations, respectively.

5) Edge Detection

Fig 6: Sobel Operator-Edge Detection

Figure 6 shows the output of the edge detection process.

6) Emotion Detection

The database contains 50 actors (25 male, 25 female) of age 20 to 50 in 4 classes of emotions, each image of size 8 KB in JPEG format, as in Table 1.

Table 1: Classes of Emotions (per-class image counts; only the Smile and Surprise entries, 150 images each, survive extraction)

Table 2 shows the results for each emotion in various subjects. It is found that the application has an accuracy of 85.4%; thus desirable results have been achieved.

Table 2: Correct Classification of Emotions (rows: Smile, Surprise, Sad, Neutral)

V. FUTURE WORK

In this paper, we have studied the four basic emotions. As a future enhancement we would study and implement further emotions. Further, we will construct a sex-independent and culture-independent emotion detection system for better accuracy.

VI. CONCLUSION

Image processing techniques play an important role in the detection of human facial emotions. A Virtual Reality (VR)-based facial expression system is to be developed that is able to collect eye tracking and peripheral psycho-physiological data while the subjects are involved in emotion recognition tasks. It enables individuals with Autism Spectrum Disorder (ASD) to access a PC by processing nonverbal communication in a virtual reality environment in spite of their impairments.

REFERENCES

[1] Chandan Singh, Ekta Walia and Neerja Mittal, "Robust two-stage face recognition approach using global and local features", Springer-Verlag, 2011.
[2] David Valle Cruz, Erika Rodríguez, Marco Antonio Ramos Corchado, J. Raymundo Marcial-Romero and Felix Ramos Corchado, "Facial Expressions Based in Emotions for Virtual Agents", International Journal of Human-Computer Interaction, Elsevier, Vol. 66, No. 9, pp. 622-677, 2013.
[3] Esubalew Bekele, Zhi Zheng, Amy Swanson and Julie Crittendon, "Understanding How Adolescents with Autism Respond to Facial Expressions in Virtual Reality Environments", IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 4, April 2013.
[4] Jorge Rojas Castillo, Adin Ramirez Rivera and Oksam Chae, "Local Directional Number Pattern for Face Analysis: Face and Expression Recognition", IEEE Transactions on Image Processing, December 2012.
[5] Maringanti Hima Bindu, Priya Gupta and U. S. Tiwary, "Emotion Detection Using Sub-image Based Features through Human Facial Expressions", International Conference on Computer & Information Science (ICCIS), 2012.
[6] R. Srivastava, S. Roy, S. Yan and T. Sim, "Accumulated motion images for facial expression recognition in videos", in Proc. IEEE Int. Conf. Autom. Face Gesture Recog. Workshop Facial Express. Recog. Anal. Challenge, Mar. 2011, pp. 903-908.
[7] Songfan Yang and Bir Bhanu, "Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image", IEEE Transactions on Systems, Man and Cybernetics - Part B: Cybernetics, Vol. 42, No. 4, August 2012.
[8] S. Smith, The Scientist and Engineer's Guide to Digital Signal Processing, California Tech. Publication, 1997.
[9] S. M. Lajevardi and Z. M. Hussain, "Automatic facial expression recognition: Feature extraction and selection", Signal, Image and Video Processing.
[10] Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity", IEEE Trans. Image Process.