
Journal of Computing, Volume 2, Issue 7, July 2010, ISSN 2151-9617

HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG

Hybrid Face Recognition System using Multi-Feature Neural Network

Prof. K.V. Krishna Kishore1, Dr. G.P.S. Varma2

1. Department of CSE, Vignan University, Guntur, A.P., India.
2. HoD, Dept. of IT, S.R.K.R. Engineering College, Bhimavaram, A.P., India.

Abstract— In this paper, a new face recognition method, the Hybrid Face Recognition System using a Multi-Feature Neural Network (MFNN), is proposed. The method consists of five phases: i) extraction of images from the database, ii) normalization and face detection, iii) dimensionality reduction using wavelets, iv) feature extraction using PCA and LDA, and v) classification using the Multi-Feature Neural Network. The combination of PCA and LDA is used to improve the capability of LDA when only a few sample images are available. The proposed system improves the recognition rates over conventional LDA and PCA face recognition systems that use a Euclidean distance based classifier, and also outperforms PCA and LDA with neural classifiers.
In the proposed system, two different feature domains are extracted from the training set in parallel, so this approach can extract both global and local characteristics of face images for classification purposes. The proposed system was tested on the ORL, AR and Indian databases of 40 people containing 10 images of each person, with different poses taken under varying illumination conditions. Experimental results show that the proposed system outperforms other existing methods in terms of low classification error and better time complexity.
Keywords— Face Recognition System, DWT, PCA, LDA, MLP, MFNN
——————————  ——————————
I. INTRODUCTION
Face recognition is the process of recognizing the face of a person in a system. The system may comprise circuit boards, software, a digital or video camera, robots and more. A system under test fails when it cannot recognize faces and behaves differently from the expected behavior. Face recognition is difficult because it requires certain information about an image (face), and about how one face is unique from another, before a particular individual can be detected.
Face recognition using neural networks is popularly applied in various applications, especially in security and biometrics. It has become critical and popular due to improved security, quality of relevance, and lower cost.
Details of the work done are described in the remainder of this paper. Section II covers preprocessing and face detection. Section III explains dimensionality reduction using wavelets. Section IV presents the proposed MFNN-based method for face recognition. In Section V, experimental results and an evaluation of the developed techniques are presented, and Section VI discusses performance across databases. Finally, conclusions are summarized in Section VII.

II. PREPROCESSING
A. Preprocessing of Face Image
The aim of the face preprocessing step is to normalize the coarse face detection, so that robust feature extraction can be achieved. Depending on the application, face preprocessing includes alignment (translation, rotation, scaling) and light normalization/correction. The face preprocessing step aims at normalizing, i.e. reducing, the variation of images obtained during the face detection step. Unpredictable change in lighting conditions is a particular problem in facial recognition.

B. Histogram Equalization
Histogram equalization (HE) can be used as a simple but very robust way to obtain light correction when applied to small regions such as faces. The aim of HE is to maximize the contrast of an input image, resulting in a histogram of the output image that is as close to a uniform histogram as possible. However, this does not remove the effect of a strong light source; it maximizes the entropy of an image, thus reducing the effect of differences in illumination within the same “setup” of light sources.


By doing so, HE makes facial recognition a somewhat simpler task. Two examples of HE applied to face images can be seen in Figures 2 and 3.

Figure 2. Before histogram equalization
Figure 3. After histogram equalization
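As a minimal illustration of this step, the following MATLAB sketch (Image Processing Toolbox assumed) equalizes one face image; the file name is illustrative, not from the paper.

% Minimal sketch (MATLAB, Image Processing Toolbox assumed):
% histogram equalization of a single face image.
I = imread('face1.pgm');                  % illustrative 92*112 grayscale face
J = histeq(I);                            % push histogram toward uniform
figure;
subplot(1,2,1); imshow(I); title('Before HE');
subplot(1,2,2); imshow(J); title('After HE');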
III. DIMENSIONALITY REDUCTION
The human face has certain features that are common to all persons and certain features that are unique to an individual. The face recognition task involves extracting these unique features from the face image of a person. In this work, wavelets are used for dimensionality reduction of all the input images. The face image is decomposed into several subbands using the 2-dimensional wavelet decomposition (DWT2) with the bior3.7 wavelet.
The 2-D wavelet transform uses the bior3.7 wavelet and its associated scaling function to decompose the original image into different subbands, namely the low-low (LL), low-high (LH), high-low (HL) and high-high (HH) subbands, which are also known as A, V, H and D respectively. Lo_D is the decomposition low-pass filter.

Figure 5: Decomposition using DWT2

Hi_D is the decomposition high-pass filter. Two-dimensional DWT leads to a decomposition of the approximation coefficients at level j into four components: the approximation at level j+1 (ca_{j+1}) and the details in three orientations: horizontal (ch_{j+1}), vertical (cv_{j+1}) and diagonal (cd_{j+1}).
The size of the face images in ORL is 92*112. The decomposition process can be applied recursively to the low-frequency channel (LL or A) to generate the decomposition at the next level. Decompositions from level 1 to level 5 are shown in Figure 4. After the 1st decomposition the image size is 63*53; after the 2nd it decreases to 39*34; and after the 3rd, 4th and 5th decompositions it is compressed to 27*24, 21*19 and 18*17 respectively.

Figure 4.a. After level 1 decomposition
Figure 4.b. After level 2 decomposition
Figure 4.c. After level 3 decomposition
Figure 4.d. After level 4 decomposition
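The recursive decomposition described above can be sketched in a few lines of MATLAB (Wavelet Toolbox assumed); the file name is illustrative, and for a 112*92 input the printed sizes match those quoted above for the ORL faces.

% Minimal sketch (MATLAB, Wavelet Toolbox assumed): recursive
% bior3.7 decomposition of the approximation (LL) band.
A = im2double(imread('face1.pgm'));       % illustrative 112x92 ORL face
for level = 1:5
    [A, cH, cV, cD] = dwt2(A, 'bior3.7'); % A: approximation; cH,cV,cD: details
    fprintf('Level %d approximation: %d x %d\n', level, size(A,1), size(A,2));
end
% Each level's approximation is a candidate reduced representation.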


Figure 4.e. After level 5 decomposition

IV. FEATURE EXTRACTION
In the feature extraction phase of the proposed MFNN face recognition system, we represent the high dimensional data in a lower dimensional space. PCA [4] and LDA [3] are the two techniques used for feature extraction in appearance based approaches. In the proposed system both techniques are used in parallel to obtain the best feature space.
The output of the PCA algorithm is called eigenfaces; the output of the LDA algorithm is called Fisher faces. PCA extracts features that represent the class, whereas LDA extracts the features that are required to separate classes. In the MFNN method both eigenfaces and Fisher faces are extracted to obtain the advantages of both PCA and LDA.

Figure 5. Block diagram of the proposed MFNN model

C. PCA
PCA encodes the global features. Such features may or may not be intuitively understandable. When we find the principal components, i.e. the eigenvectors of the image set, each eigenvector has some contribution from each face used in the training set, so the eigenvectors also have a face-like appearance. They look ghost-like and are called ghost images or eigenfaces. Every image in the training set can be represented as a weighted linear combination of these basis faces. The number of eigenfaces is equal to the number of images in the training set; let us take this number to be M. Some of these eigenfaces are more important in encoding the variation in face images, so we could also approximate faces using only the most significant eigenfaces.

i) Algorithm for Finding Eigenfaces
1. Obtain the training images I_1, I_2, ..., I_M; it is very important that the images are centered.
2. Represent each image as a vector Γ_i, as discussed above.
3. Find the average face vector Ψ = (1/M) Σ_{i=1..M} Γ_i.
4. Subtract the mean face from each face vector to get a set of vectors Φ_i = Γ_i − Ψ. The purpose of subtracting the mean image from each image vector is to be left with only the distinguishing features of each face, “removing” in a way the information that is common.
5. Find the covariance matrix C = A A^T, where A = [Φ_1 Φ_2 ... Φ_M].
6. We now need to calculate the eigenvectors u_i of C. However, note that for images of N x N pixels, C is an N^2 x N^2 matrix and would yield N^2 eigenvectors, each N^2-dimensional. For an image this number is huge, and the computations required would easily make a system run out of memory. How do we get around this problem?
7. Instead of the matrix A A^T, consider the matrix A^T A. A is an N^2 x M matrix, so A^T A is an M x M matrix. If we find the eigenvectors of this matrix, it yields M eigenvectors, each of dimension M x 1; let us call these eigenvectors v_i. From the properties of matrices it follows that u_i = A v_i. This implies that using the v_i we can calculate the M largest eigenvectors of A A^T. Remember that M << N^2, as M is simply the number of training images.
8. Find the best M eigenvectors of C = A A^T using the relation discussed above, i.e. u_i = A v_i, and keep in mind that ||u_i|| = 1.
9. Select the best K eigenvectors; the selection of these eigenvectors is done heuristically.
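The computational trick in steps 5-9 can be sketched as follows in MATLAB; random data stands in for the vectorized, mean-centered training faces, and all names and dimensions are illustrative.

% Minimal sketch (MATLAB) of the A'A trick from steps 5-9.
% Random data stands in for real vectorized face images.
M = 40;                                   % number of training images
Gamma = rand(18*17, M);                   % one vectorized face per column
Psi = mean(Gamma, 2);                     % step 3: average face
Phi = Gamma - repmat(Psi, 1, M);          % step 4: subtract the mean
L = Phi' * Phi;                           % step 7: M-by-M surrogate of C
[V, D] = eig(L);
[~, idx] = sort(diag(D), 'descend');
V = V(:, idx);                            % v_i, largest eigenvalues first
U = Phi * V;                              % step 8: u_i = A * v_i
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);  % enforce ||u_i|| = 1
K = 20;                                   % step 9: heuristic choice
Eigenfaces = U(:, 1:K);
W = Eigenfaces' * Phi;                    % K-by-M training weights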


D. Fisher Faces Method
In face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which forms a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces.
Linear Discriminant Analysis (LDA) finds an efficient way to represent the face vector space by exploiting the class information. It differentiates between individual faces while recognizing faces of the same individual. LDA is often referred to as Fisher's Linear Discriminant (FLD) [3]. The images in the training set are divided into the corresponding classes. LDA is an example of a class specific method, in the sense that it tries to “shape” the scatter in order to make it more reliable for classification. The method selects the projection W in such a way that the ratio of the between-class scatter to the within-class scatter is maximized. Let the between-class scatter matrix be defined as

S_B = Σ_{i=1..c} N_i (μ_i − μ)(μ_i − μ)^T

and the within-class scatter matrix be defined as

S_W = Σ_{i=1..c} Σ_{x_k ∈ X_i} (x_k − μ_i)(x_k − μ_i)^T

where N_i is the number of training samples in class X_i, c is the number of distinct classes, μ_i is the mean vector of the samples belonging to class i, μ is the overall mean vector, and X_i represents the set of samples belonging to class i.
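A direct MATLAB transcription of these definitions might look like the following sketch; random vectors stand in for projected face features, and the dimensions are illustrative.

% Minimal sketch (MATLAB) of the scatter matrices and the Fisher
% projection. Random data stands in for real face features.
c = 40; n = 5; d = 50;                    % classes, samples/class, feature dim
X = rand(d, c*n);                         % one feature vector per column
labels = repelem(1:c, n);
mu = mean(X, 2);                          % overall mean vector
Sb = zeros(d); Sw = zeros(d);
for i = 1:c
    Xi = X(:, labels == i);               % samples of class i
    mui = mean(Xi, 2);                    % class mean
    Sb = Sb + size(Xi, 2) * (mui - mu) * (mui - mu)';
    Di = Xi - repmat(mui, 1, size(Xi, 2));
    Sw = Sw + Di * Di';
end
[V, D] = eig(Sw \ Sb);                    % maximize ratio of scatters
[~, idx] = sort(real(diag(D)), 'descend');
W = real(V(:, idx(1:min(c-1, d))));       % columns of W are Fisher faces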
E. Classification using Neural Networks
Compared to conventional classifiers, NN classifiers give better results. Due to their simplicity, generality and good learning ability, these classifiers are also found to be more efficient. In this work, a feed-forward neural network with the back propagation training algorithm, with different training functions, and Radial Basis Functions are used for better classification.

i) Training
We use the back propagation algorithm to train the multilayer perceptron neural network. The number of neurons at the input layer is decided by the size of the features extracted from the face space. The proposed algorithm has been tested with different input sizes such as 30 x 30 (900 nodes), 45 x 45, etc. Since the number of hidden layers increases the complexity of the system, three hidden layers are used. Initially 10 hidden neurons were used in each hidden layer. Finally, the proposed algorithm works best with 30 neurons in the 1st hidden layer, 35 in the 2nd, and a number of nodes in the 3rd hidden layer equal to the number of output classes. The number of neurons at the output layer depends on the total number of classes. The proposed algorithm was tested with 3 training samples and 7 testing samples per class, 4 training samples and 6 testing samples per class, and 5 training samples and 5 testing samples per class.

ii) Training Algorithm: Back Propagation
The first step is to initialize the weights to small random values. Each subsequent iteration results in a single presentation of each pattern in the training set, which is sometimes referred to as an epoch. The main steps of the feed forward-back propagation algorithm are:
Initialize weights.
1. Present the pattern at the input layer.
2. Let the hidden units evaluate their output using the pattern.
3. Let the output units evaluate their output using the result in Step 2 from the hidden units.
4. Apply the target pattern to the output layer.
5. Calculate the δs on the output nodes.
6. Train each output node using gradient descent.
7. For each hidden node, calculate its δ, propagating layer by layer.
8. For each hidden node, use the δ found in Step 7 to train according to gradient descent.
Algorithm: A simple back propagation outline

Steps 1-3 are collectively known as the forward pass, since information flows forward through the network in the natural sense of the nodes' input-output relation. Steps 4-8 are collectively known as the backward pass. Step 7 involves propagating the δs back from the output nodes to the hidden units, hence the name back propagation.
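For concreteness, the outline above can be written out for a single hidden layer of sigmoid units; this is a generic MATLAB sketch with illustrative dimensions and data, not the paper's exact network.

% Minimal sketch (MATLAB) of one epoch of the outlined algorithm,
% for one hidden layer of sigmoid units; data are illustrative.
rng(0);
X = rand(10, 20);  T = rand(3, 20);       % 20 patterns: 10 inputs, 3 targets
W1 = 0.1 * randn(5, 10);                  % input-to-hidden weights
W2 = 0.1 * randn(3, 5);                   % hidden-to-output weights
lr = 0.001;                               % learning rate
sig = @(z) 1 ./ (1 + exp(-z));
for p = 1:size(X, 2)                      % one presentation per pattern
    h = sig(W1 * X(:, p));                % steps 1-2: hidden outputs
    y = sig(W2 * h);                      % step 3: output layer
    dOut = (T(:, p) - y) .* y .* (1 - y); % steps 4-5: output deltas
    dHid = (W2' * dOut) .* h .* (1 - h);  % step 7: back-propagated deltas
    W2 = W2 + lr * dOut * h';             % step 6: gradient-descent update
    W1 = W1 + lr * dHid * X(:, p)';       % step 8: hidden-layer update
end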
iii) Training Parameters and Weight Initialization
Next, we proceeded to determine the learning rate. We started with 0.0001, but the results showed signs of undertraining. We continued with other values, each time increasing the learning rate in small steps:

net.trainParam.lr = 0.001;      % learning rate
net.trainParam.epochs = 5000;   % maximum number of epochs
net.trainParam.goal = 1.0e-4;   % mean squared error goal

It was observed that the MFNN trained faster and yet maintained its average recognition performance (95%). The proposed MFNN method uses the least mean square method; therefore, the hidden weights should be zero rather than random in the first place. A zero-initialized hidden layer ensures smooth convergence in fewer steps without affecting generalization. Maximum output values are then matched according to their positions and thresholds.


For recognition, the maximum output value of each test set is compared against the thresholds defined by the training set. If the MFNN is well trained and generalizes well, the program returns an accurate output like the one shown in Figure 6.

Figure 6. A correct recognition by MFNN
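Putting the classifier stage together, a plausible MATLAB sketch (Deep Learning Toolbox assumed) of network construction, training and the max-output decision is given below. The feature matrix and labels are stand-ins, and MATLAB's default weight initialization is used here rather than the zero initialization described above.

% Minimal sketch (MATLAB, Deep Learning Toolbox assumed) of the
% MLP classifier stage; features and labels are stand-ins.
nClasses = 40;
F = rand(900, 5 * nClasses);              % 30x30 features, 5 faces per class
labels = repelem(1:nClasses, 5);
T = full(ind2vec(labels));                % one-hot target matrix
net = feedforwardnet([30 35 nClasses], 'traingd');  % three hidden layers
net.trainParam.lr = 0.001;                % parameters from the text
net.trainParam.epochs = 5000;
net.trainParam.goal = 1.0e-4;
net = train(net, F, T);                   % back propagation training
Y = net(F);                               % network outputs
[~, pred] = max(Y, [], 1);                % class = position of maximum output
fprintf('Training accuracy: %.1f%%\n', 100 * mean(pred == labels));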
V. EXPERIMENT RESULTS
The performance of the proposed system is measured by varying the number of faces of each subject in the training and test sets. The following table shows the performance of the proposed PCA-NN and LDA-NN methods based on neural network classifiers, as well as the performance of the conventional PCA and LDA based on the Euclidean distance classifier. Along with the existing algorithms, the table also shows the performance (in percent) of the proposed method.

Images per subject          Recognition rate (%)
Train   Test    PCA   PCA-NN   LDA   LDA-NN   MFNN
  2       8      71     75      78     80      84
  3       7      73     76      82     84      90
  4       6      77     80      87     89      94
  5       5      78     85      87     91      97
  6       4      89     90      93     93      98
  7       3      92     94      95     95      99
Table 1. Recognition rates of the proposed system compared with existing systems

The recognition performance increases with the number of face images in the training set. This is to be expected, because more sample images characterize the classes of the subjects better in the face space. The results clearly show that the proposed MFNN recognition system outperforms the existing PCA-NN and LDA-NN, and also outperforms the conventional PCA and LDA based recognition systems. The MFNN shows the highest recognition performance; this performance is obtained because the MFNN method both discriminates and describes the classes better than the existing ones. The performance improvement of MFNN is higher than that of the other four algorithms. These results are plotted in Figure 7.

Figure 7. Comparison of FRS methods: recognition accuracy (%) versus number of training faces (2-8) for PCA, PCA+NN, LDA, LDA+NN and MFNN

A. Comparison with previous FR methods
The face images used for training and testing the neural network represent persons of various ethnicities, ages and genders. A total of 400 face images of 40 persons with different facial expressions are used from the ORL face database (AT&T Laboratories Cambridge, online resources).
It is observed that PCA with the Euclidean distance classifier classifies with 85% accuracy; with LDA and the same classifier, accuracy improves to 89%. It is also observed that the conventional algorithms with neural classifiers improve on the recognition rates of the conventional LDA and PCA face recognition systems that use the Euclidean distance classifier. Additionally, the recognition performance of LDA-NN [2] is higher than that of PCA-NN [5]. PCA using a NN classified images 91% accurately, whereas LDA using a NN classified images with an accuracy of 93%.
Above all these existing methods, the proposed MFNN face recognition achieved 95.5% accuracy. The following table shows the results.


Algorithm     Average rate of success of face recognition
PCA+E.D.      85%
LDA+E.D.      89%
PCA+NN        91%
LDA+NN        93%
MFNN          95.50%
Table 2. Average rate of success of the proposed and existing systems

VI. DISCUSSION
Performance across databases
The proposed system is tested across various benchmark databases. We used three databases to test the proposed system: the ORL database, the Indian face database and the AR face database. The test results are shown in the following table.

Database   Number of images   Correct recognitions   Avg. recognition rate (%)
ORL        400                394                    98.5
Indian     600                586                    97.76
AR         800                760                    95
Table 3. Comparison of recognition across the three databases

VII. CONCLUSION
In conclusion, we have proposed and shown that the MFNN face recognition approach outperforms existing approaches. PCA extracts features that represent the class; LDA extracts the features that are required to separate classes. In the MFNN method both eigenfaces and Fisher faces are extracted to obtain the advantages of both PCA and LDA. We fused the features extracted by PCA and LDA using wavelet-based matrix fusion (the wfusmat function), and this approach gives better results.
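As a rough illustration of this fusion step, the following MATLAB sketch (Wavelet Toolbox assumed) combines two equally sized feature matrices with wfusmat. The matrices and the 'mean' fusion method are illustrative assumptions, since the text does not state the fusion parameters used.

% Minimal sketch (MATLAB, Wavelet Toolbox assumed) of fusing PCA
% and LDA features; data and fusion method are illustrative.
Fpca = rand(30, 200);                     % eigenface projections (stand-in)
Flda = rand(30, 200);                     % Fisher-face projections (stand-in)
Ffused = wfusmat(Fpca, Flda, 'mean');     % element-wise fusion of the two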
The strength of the MFNN is in its extraction layer. It enables the network to learn complex patterns by extracting progressively more meaningful features from the input patterns of a face. The MFNN thus avoids being too restricted by a mathematical metric in its classification process, which increases its ability to generalize.

Future Scope
Further work can be expanded in many directions. The algorithm can be extended to include other recognition procedures, such as detecting emotion from faces in a group of faces. This work may also be extended to design different NNs with dynamic scaling that detect different types of facial features, so that fusing their results may produce more accurate identifications and fewer misclassifications than the proposed MFNN.

REFERENCES
[1] C.C. Tsai, W.C. Cheng, J.S. Taur and C.W. Tao, "Face Detection Using Eigenface and Neural Network", IEEE International Conference, 8-11 October 2006.
[2] Hiroyuki Kobayashi and Qiangfu Zhao, "Face Detection with Clustering, LDA and NN", IEEE International Conference, July 2007.
[3] Juwei Lu, Kostantinos N. Plataniotis and Anastasios N. Venetsanopoulos, "Face Recognition Using LDA-Based Algorithms", IEEE Transactions on Neural Networks, Vol. 14, No. 1, Jan. 2003.
[4] M. Turk and A. P. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.
[5] Mohamed Rizon, Muhammad Firdaus Hashim, Puteh Saad and Sazali Yaacob, "Face Recognition using Eigenfaces and Neural Networks", American Journal of Applied Sciences, Vol. 2, No. 6, pp. 1872-1875, 2006.
[6] Kresimir Delac, Mislav Grgic and Panos Liatsis, "Appearance-based Statistical Methods for Face Recognition", 47th International Symposium ELMAR-2005, 8-10 June 2005.
[7] Fenghua Wang and Jiuqiang Han, "Robust Multimodal Biometric Authentication Integrating Iris, Face and Palmprint", Information Technology and Control, ISSN 1392-124X, 2008.
[8] Su Hongtao, David Dagan Feng and Zhao Rong-chun, "Face Recognition Using Multi-feature and Radial Basis Function Network", Pan-Sydney Area Workshop on Visual Information Processing (VIP2002), 2002.
[9] Namrata Vaswani and Rama Chellappa, "Principal Components Null Space Analysis for Image and Video Classification", IEEE Transactions on Image Processing, Vol. 15, No. 7, July 2006.
[10] Lin Guo and De-Shuang Huang, "Human Face Recognition Based on Radial Basis Probabilistic Neural Network", Institute of Intelligent Machines, Chinese Academy of Sciences, IEEE, 2008.
[11] Wankou Yang, Hui Yan, Jianguo Wang and Jingyu Yang, "Face Recognition Using Complete Fuzzy LDA", IEEE Transactions on Machine Intelligence, 2008.
[12] Jahan Zeb, Muhammad Younus Javed and Usman Qayyum, "Low Resolution Single Neural Network Based Face Recognition", Proceedings of World Academy of Science, Engineering and Technology, Vol. 22, July 2007.
[13] Courtesy: the ORL, AR and Indian face databases.

