HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG
Abstract— In this paper, a new face recognition method, the Hybrid Face Recognition System using a Multi-Feature Neural Network (MFNN), is proposed. The method consists of five phases: i) extracting images from the database, ii) normalization and face detection, iii) dimensionality reduction using wavelets, iv) feature extraction using PCA and LDA, and v) classification using the Multi-Feature Neural Network. The combination of PCA and LDA is used to improve the capability of LDA when only a few sample images are available. The proposed system shows an improvement in recognition rates over conventional LDA and PCA face recognition systems that use a Euclidean-distance-based classifier, and also outperforms PCA and LDA with neural classifiers.
In the proposed system, two different feature domains are extracted from the training set in parallel, so the approach can extract both global and local characteristics of face images for classification purposes. The proposed system was tested on the ORL, AR, and Indian databases of 40 people, containing 10 images of each person with different poses taken under varying illumination conditions. Experimental results show that the proposed system outperforms other existing methods in terms of lower classification error and better time complexity.
Keywords— Face Recognition System, DWT, PCA, LDA, MLP, MFNN
—————————— ——————————
I. INTRODUCTION
Face recognition is the process of recognizing the face of a person in a system. The system may comprise a circuit board, programming software, a digital or video camera, robots, and more. A system under test fails when it cannot recognize faces and behaves differently from the expected behavior. Face recognition is difficult because it requires certain information about a face image, namely what makes it unique from others, before that particular individual can be detected.
Face recognition using neural networks is popularly applied in various applications, especially in security and biometrics. It has become critical and popular due to improved security, quality of relevance, and lower cost.
Details of the work done are described in the remainder of this paper. Section II covers preprocessing and face detection of the face image. Section III explains reduction of dimensionality using wavelets. Section IV presents the proposed method for face recognition based on MFNN. In Section V, experimental results and an evaluation of the developed techniques are presented. Finally, conclusions are summarized in Section VI.

II. PREPROCESSING
A. Preprocessing of Face Image
The aim of the face preprocessing step is to normalize the coarse face detection, so that robust feature extraction can be achieved. Depending on the application, face preprocessing includes alignment (translation, rotation, scaling) and light normalization/correction. The face preprocessing step aims at normalizing, i.e. reducing, the variation of images obtained during the face detection step. Unpredictable change in lighting conditions is a problem in facial recognition.

B. Histogram Equalization
Histogram equalization (HE) can be used as a simple but very robust way to obtain light correction when applied to small regions such as faces. The aim of HE is to maximize the contrast of an input image, resulting in a histogram of the output image which is as close to a uniform histogram as possible. However, this does not remove the effect of a strong light source but maximizes the entropy of an image, thus reducing the effect of differences in illumination within the same “setup” of light sources. By

————————————————
1. Department of CSE, Vignan University, Guntur, A.P, India.
2. HoD, Dept of IT, S.R.K.R. Engineering College, Bhimavaram, A.P, India
Journal of Computing, Volume 2, Issue 7, July 2010, ISSN 2151-9617
doing so, HE makes facial recognition a somewhat simpler task. Two examples of HE applied to images can be seen in Figures 2 and 3.

Figure 2. Before histogram equalization
Figure 3. After histogram equalization

III. DIMENSIONALITY REDUCTION
The human face has certain features that are common to all persons and also certain features that exhibit a unique characteristic for a person. The face recognition task involves extraction of these unique features from the face image of a person. In this work, wavelets are used for dimensionality reduction of all the input images. The face image is decomposed into several subbands using the 2-dimensional wavelet decomposition (DWT2) with bior3.7; at each level, the detail orientations are horizontal (chj+1), vertical (cvj+1), and diagonal (cdj+1).

The size of the face images of ORL is 92*112. The decomposition process can be recursively applied to the low-frequency channel (LL or A) to generate the decomposition at the next level. Decompositions from level 1 to level 5 are shown in figure 4. After the 1st decomposition the size of the images is 63*53; after the 2nd decomposition the size of the face images is decreased to 39*34; and after the 3rd, 4th and 5th decompositions the size of the face images is compressed to 27*24, 21*19 and 18*17 respectively, as shown in figure 4.

Figure 4.a. After level 1 decomposition
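As an illustration, the HE step described above can be sketched in NumPy for an 8-bit grayscale image (a minimal sketch of standard histogram equalization; the function name and the uint8 assumption are ours, not the paper's):

```python
import numpy as np

def histogram_equalize(img):
    """Histogram-equalize an 8-bit grayscale image by remapping grey
    levels through the normalized cumulative histogram, which pushes
    the output histogram toward uniform."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)   # per-level counts
    cdf = hist.cumsum()                              # cumulative histogram
    cdf_min = cdf[np.nonzero(cdf)[0][0]]             # first occupied level
    denom = cdf[-1] - cdf_min
    if denom == 0:                                   # constant image: nothing to equalize
        return img.copy()
    # Build a lookup table stretching the CDF onto the full 0..255 range.
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[img]
```

The lookup-table form makes the remapping a single vectorized index, which is why HE stays cheap even when applied per face region.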
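The subband sizes quoted above can be checked against the standard DWT output-length rule. The following sketch is our own, assuming full-convolution filtering in which each level yields floor((n + L − 1) / 2) coefficients per dimension, with L = 16 (the length of the bior3.7 decomposition filters); under that assumption it reproduces the reported sizes for a 112×92 ORL image:

```python
# Size check for the recursive DWT2 approximation subband, assuming the
# full-convolution rule len_out = floor((len_in + L - 1) / 2) with L = 16
# (the bior3.7 decomposition filter length).
FILTER_LEN = 16

def dwt_len(n, filter_len=FILTER_LEN):
    return (n + filter_len - 1) // 2

def subband_sizes(rows, cols, levels, filter_len=FILTER_LEN):
    """Approximation-subband size after each of `levels` decompositions."""
    sizes = []
    for _ in range(levels):
        rows, cols = dwt_len(rows, filter_len), dwt_len(cols, filter_len)
        sizes.append((rows, cols))
    return sizes

print(subband_sizes(112, 92, 5))
# -> [(63, 53), (39, 34), (27, 24), (21, 19), (18, 17)]
```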
Discriminant Analysis (LDA) finds an efficient way to represent the face vector space by exploiting the class information. It differentiates individual faces but recognizes faces of the same individual. LDA is often referred to as Fisher's Linear Discriminant (FLD) [3]. The images in the training set are divided into the corresponding classes. LDA is an example of a class-specific method, in the sense that it tries to “shape” the scatter in order to make it more reliable for classification. This method selects W in such a way that the ratio of the between-class scatter to the within-class scatter is maximized. Let the between-class scatter matrix be defined as

S_B = Σ_{i=1}^{c} N_i (μ_i − μ)(μ_i − μ)^T

and the within-class scatter matrix be defined as

S_W = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k − μ_i)(x_k − μ_i)^T

where Ni is the number of training samples in class Xi, c is the number of distinct classes, μi is the mean vector of the samples belonging to class i, μ is the mean of all training samples, and Xi represents the set of samples belonging to class i.

E. Classification using Neural Networks
… class, and 5 training samples and 5 testing samples for each class.

ii) Training Algorithm: Back propagation
The first step is to initialize the weights to small random values. Each iteration of the for loop then results in a single presentation of each pattern in the training set, and is sometimes referred to as an epoch. The following are the main steps of the feed-forward back-propagation algorithm:

Algorithm: A simple back-propagation outline
Initialize weights.
1. Present the pattern at the input layer.
2. Let the hidden units evaluate their output using the pattern.
3. Let the output units evaluate their output using the result in Step 2 from the hidden units.
4. Apply the target pattern to the output layer.
5. Calculate the δs on the output nodes.
6. Train each output node using gradient descent.
7. For each hidden node, calculate its δ, propagating layer by layer.
8. For each hidden node, use the δ found in Step 7 to train according to gradient descent.
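For concreteness, the between-class and within-class scatter matrices defined above can be computed directly. This is a minimal sketch; the function name and the (N, d) row-per-sample data layout are our assumptions:

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices.

    X: (N, d) array of N samples; y: (N,) integer class labels.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mu = X.mean(axis=0)                        # overall mean of all samples
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xi = X[y == c]                         # samples of class c
        Ni = Xi.shape[0]
        mu_i = Xi.mean(axis=0)                 # class mean
        diff = (mu_i - mu).reshape(-1, 1)
        Sb += Ni * diff @ diff.T               # between-class term
        Sw += (Xi - mu_i).T @ (Xi - mu_i)      # within-class term
    return Sb, Sw
```

A useful sanity check is that Sb + Sw equals the total scatter matrix of the centered data, which follows from the definitions.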
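The steps above can be sketched as a minimal one-hidden-layer network trained by online back-propagation. This is a sketch, not the paper's MFNN: the layer sizes, learning rate, sigmoid activation, and the XOR toy task are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Initialize weights": small random values.
n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)              # target patterns

eta = 0.5                                    # learning rate (assumed)
for _ in range(5000):                        # one pass over the set = one epoch
    for x, t in zip(X, T):
        h = sigmoid(x @ W1 + b1)             # Steps 1-2: hidden-unit outputs
        o = sigmoid(h @ W2 + b2)             # Step 3: output-unit outputs
        delta_o = (t - o) * o * (1 - o)      # Steps 4-5: output deltas
        delta_h = (delta_o @ W2.T) * h * (1 - h)  # Step 7: hidden deltas
        W2 += eta * np.outer(h, delta_o); b2 += eta * delta_o   # Step 6
        W1 += eta * np.outer(x, delta_h); b1 += eta * delta_h   # Step 8

preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = float(np.mean((preds - T) ** 2))
```

Note that the hidden deltas are computed from the output weights before those weights are updated, matching the ordering of Steps 6 to 8.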
… recognition, the maximum output value of each test set is compared against the thresholds defined by the training set. If the MFNN is well trained and generalizes well, the program will return an accurate output like this.

Figure 6. A correct recognition by MFNN

IV. EXPERIMENT RESULTS
… is obtained because the MFNN method discriminates the classes better and also describes the classes better than the existing ones. The performance improvement of MFNN is higher than that of all the other 4 algorithms. The above results are shown in figure 9.

Figure 9. Recognition accuracy (y-axis: 70.00–100.00) of the compared methods (x-axis: 2–8).
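The thresholded decision described here can be sketched as follows. This is our own minimal version: the rejection label −1 and the per-class thresholds are assumptions, since the excerpt does not specify exactly how the training-set thresholds are applied.

```python
import numpy as np

def classify_with_threshold(outputs, thresholds):
    """Assign each test output vector to the class with the maximum
    response; reject it as unknown (-1) when that maximum falls below
    the threshold learned for the winning class."""
    outputs = np.asarray(outputs, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    labels = np.argmax(outputs, axis=1)              # winning class per row
    peak = outputs[np.arange(len(outputs)), labels]  # its output value
    return np.where(peak >= thresholds[labels], labels, -1)
```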