M. PARISA BEHAM
ECE Department, Vickram College of Engineering
Sivagangai District, TamilNadu, India
Parisaphd2011@gmail.com
Face recognition has become more significant and relevant in recent years owing to its potential applications. Since faces are highly dynamic and pose many issues and challenges, researchers in the domains of pattern recognition, computer vision and artificial intelligence have proposed many solutions to reduce such difficulties and to improve robustness and recognition accuracy. As many approaches have been proposed, efforts have also been made to provide extensive surveys of the methods developed over the years. The objective of this paper is to provide a survey of face recognition papers that appeared in the literature over the past decade under all severe conditions that were not discussed in previous surveys, and to categorize them into meaningful approaches, viz. appearance based, feature based and soft computing based. A comparative study of the merits and demerits of these approaches is presented.
Keywords: Face recognition; feature based; appearance based; soft computing based; Gabor patterns; fuzzy based; genetic algorithm; nontensor wavelets; sparse representation.
1. Introduction
Face recognition is one of the most popular applications of image analysis. In the present scenario, face recognition plays a major role in security, personal information access, improved human–machine interaction and personalized advertising. Hence a recognition system that is inexpensive to use at any location, performs quick matching, handles large databases and performs recognition in a varying environment is the need of the hour. It is a true challenge to build an automated system which parallels
M. P. Beham & S. M. M. Roomi
A Review of Face Recognition Methods
pixel intensity variations. One of the most widely used representations of the face region is eigenpictures, which are based on principal component analysis. The other category contains hybrid approaches: just as the human perception system uses both local features and the whole face region to recognize a face, a machine recognition system should use both.
Fig. 3. Eigenfaces.
Since the eigenface approach was proposed by Turk and Pentland,67 PCA has emerged as a popular technique in the computer vision community. Variants of PCA techniques have been studied and used.57,77 Linear PCA is the simplest version. It decomposes the available data into uncorrelated directions, along which there exist maximum variations. In other words, it tries to minimize the representation error ||WY − X||. Towards this goal, a total scatter matrix S = XX^T is defined and the optimal matrix W is formed by the eigenvectors corresponding to the m largest eigenvalues of S. In contrast to PCA, which makes a decomposition into uncorrelated components, ICA26 decomposes the data into statistically independent components. Usually a contrast function measuring the statistical dependence of the new representation y1, ..., ym is defined and minimized. ICA turns out to be a nonlinear minimization problem which requires a lot of computation. While component analysis is oriented towards representing the data, discriminant analysis keeps in mind the classification task.48 It attempts to maximize the between-class scatter while minimizing the within-class scatter. However, PCA only uses second-order statistical information in the data. As a result, it fails to perform well in nonlinear cases. In order to address nonlinear problems, Kernel PCA (KPCA)25 is able to capture the nonlinear correlations among data points. Wang and Zhang70 propose a method of feature extraction for facial recognition based on KPCA, in which the nearest neighbor classifier with Euclidean distance is adopted. Experimental results show a high recognition rate when using KPCA. By Cover's theorem, nonlinearly separable patterns in an input space become linearly separable with high probability if the input space is transformed nonlinearly into a high-dimensional feature space. One can, therefore, map an input variable into a high-dimensional feature space and then perform PCA. Performing PCA78 in the high-dimensional feature space can capture high-order statistics of the input variables, which was also the initial motivation of KPCA. However, it is difficult to directly compute both the covariance matrix and its corresponding eigenvectors and eigenvalues in the high-dimensional feature space, and it is computationally intensive to compute the dot products of high-dimensional vectors. Fortunately, kernel tricks can be employed to avoid this difficulty: they compute the dot products in the original low-dimensional input space by means of a
kernel function. Thus the KPCA method is clearly better than conventional PCA, since it achieves higher accuracy with a smaller number of principal components.
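The linear PCA step described above can be sketched in a few lines. This is a hedged illustration: the function names and the toy data are ours, not the authors', but the computation follows the text (total scatter matrix S = XX^T, top-m eigenvectors as the projection W):

```python
import numpy as np

def pca_basis(X, m):
    """Linear PCA as described above: form the total scatter matrix
    S = X X^T from mean-centred data and keep the eigenvectors of
    the m largest eigenvalues as the projection matrix W."""
    Xc = X - X.mean(axis=1, keepdims=True)   # one sample per column
    S = Xc @ Xc.T                            # total scatter matrix
    vals, vecs = np.linalg.eigh(S)           # eigenvalues ascending
    W = vecs[:, ::-1][:, :m]                 # top-m eigenvectors
    return W, Xc

# Toy "faces": 4 samples of 6 pixels each.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
W, Xc = pca_basis(X, m=2)
Y = W.T @ Xc                      # low-dimensional representation
err = np.linalg.norm(W @ Y - Xc)  # representation error ||WY - X||
```

The representation error shrinks as m grows; with m equal to the rank of the centred data it vanishes.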
Sharma and Paliwal60 proposed Fast PCA, a computationally fast technique for finding the desired number of leading eigenvectors without diagonalizing any symmetric matrix; it is also free from matrix inverse computations. As a result, the presented algorithm is computationally efficient, consumes a very small amount of computation time and is very easy to implement. Fast PCA generates the leading eigenvectors efficiently, improving the computational complexity to O(n²) compared with normal eigendecomposition, which gives the solution in O(n³) time. However, Fast PCA has some limitations, mainly in convergence when the images are of high resolution, and its mean square error is high.
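The idea of extracting leading eigenvectors without a full diagonalization can be illustrated with power iteration plus deflation. This is a simplified stand-in for the fixed-point scheme, not Sharma and Paliwal's exact algorithm:

```python
import numpy as np

def leading_eigenvectors(S, k, iters=200, seed=1):
    """Obtain the k leading eigenvectors of a symmetric matrix S by
    power iteration with deflation, avoiding a full diagonalisation.
    Illustrative sketch only, not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(k):
        v = rng.normal(size=S.shape[0])
        for _ in range(iters):
            v = S @ v                 # power step
            for u in found:           # deflation keeps v orthogonal
                v -= (u @ v) * u      # to eigenvectors already found
            v /= np.linalg.norm(v)
        found.append(v)
    return np.column_stack(found)
```

Each pass costs only matrix–vector products, which is where the complexity advantage over full decomposition comes from; convergence slows when the leading eigenvalues are close, echoing the convergence limitation noted above.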
To improve classification accuracy, LDA, which is based on the Fisher linear discriminant (FLD),47 was proposed and has become a popular face recognition technique. LDA finds a small number of features that differentiate individual faces but still recognize faces of the same individual. A number of LDA-based methods have been proposed in face recognition. In the last two decades, a great number of improvements to classical LDA have been proposed to enhance its performance and efficiency. These improvements can be roughly grouped into three categories. The first category focuses on addressing the small sample size (SSS) problem, which always occurs when the data dimension exceeds the number of training samples. In order to overcome the SSS problem, Chen et al.7 derived the most discriminant vectors from the null space of the within-class scatter matrix by using PCA and used these vectors rather than the eigenvectors. Similarly, Yang et al.78 proposed an exponential discriminant analysis technique to extract the most discriminant information contained in the null space of the within-class scatter matrix and overcome the SSS problem. Yang et al.83 proposed an optimization criterion for LDA which employs generalized singular value decomposition. This criterion is applicable regardless of whether the data dimension is larger than the number of training samples.
LDA algorithms89 usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes have Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. Fan et al.12 proposed an improved LDA framework, the local LDA (LLDA), which can perform well without the need to satisfy the above two assumptions. The LLDA89 framework can effectively capture the local structure of samples, as shown in Fig. 4; according to the type of local data structure, the LLDA framework takes several different forms of linear feature extraction, such as classical LDA, PCA, and general LLDA. Therefore, in a sense, this algorithmic framework is an adaptive feature extraction approach. It needs to train on only a small portion of the whole training set before testing a sample. It is suitable for learning large-scale databases, especially when the input data dimensions are very high, and can achieve high classification accuracy.
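The Fisher criterion that LDA optimizes can be made concrete for the two-class case, where the optimum has a closed form. The data and names below are illustrative, not from the paper:

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher LDA sketch: maximise between-class scatter
    over within-class scatter. The optimum is w = Sw^{-1}(mu1 - mu0),
    where Sw is the pooled within-class scatter matrix."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c, mu in ((0, mu0), (1, mu1)):
        D = X[y == c] - mu            # deviations from class mean
        Sw += D.T @ D                 # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Two well-separated toy classes in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (20, 2)),
               rng.normal([3, 1], 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = fisher_direction(X, y)
```

Note that solving against Sw is exactly where the SSS problem discussed above appears: when the data dimension exceeds the sample count, Sw is singular and cannot be inverted directly.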
Fig. 4. LLDA.
First, it is hard to determine the number of subsets, and in general this number must be set manually. Second, the effectiveness of c-means clustering is closely related to the initialization. As per Refs. 64 and 70, PCA has become one of the most successful appearance-based approaches in face recognition; it is a popular unsupervised statistical method for finding useful image representations.
Discriminative methods, such as LDA, are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack crucial properties for signal reconstruction. Huang and Aviyente24 present a theoretical framework for signal classification with sparse representation. This approach combines the discrimination power of discriminative methods with the reconstruction property and sparsity of sparse representation, which enables one to deal with image corruptions: noise, missing data and outliers. However, the use of sparse representation in the multi-subspace setting has not been sufficiently explored, and several questions remain unanswered.
In the holistic approaches, PCA usually gives indiscriminately high similarities for two images, whether from a single person or from two different persons, and LDA is also complex because there is a lot of within-class variation due to differing facial expressions, head orientations, lighting conditions, etc. Compared to the PCA and LDA projections, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. It is well known that the wavelet transform has a robust multi-resolution capability which accords well with the human visual system. Moreover, it provides a spatial and a frequency decomposition of an image simultaneously. Consequently, an appropriate wavelet transform can result in representations that are robust to lighting changes. Though wavelet coefficients have been widely applied in face recognition, some detailed problems remain open, such as which subband is the most powerful. Empirical studies show that it is difficult to give a rule defining a certain subband that performs best, especially for databases with faces in various conditions. When there is a change in a human face, some frequency components will be affected.
To overcome the above problem, You et al.84,86 suggested representing facial features by a discrete nontensor product wavelet, whose scaling function and associated wavelet function cannot be written as products of one-dimensional ones; it can reveal more features than the commonly used tensor product wavelet transform. Compared with the traditional tensor product wavelet, the new nontensor product wavelet can detect more singular facial features in the high-frequency components. Earlier studies show that the high-frequency components are sensitive to facial expression variations and minor occlusions, while the low-frequency component is sensitive to illumination changes. Therefore, there are two advantages of using the new nontensor product wavelet compared with the traditional tensor product one. First, the low-frequency component is more robust to expression variations and minor occlusions, which indicates that it is more efficient in facial feature representation. Second, the corresponding high-frequency
and Laplacian faces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. The paper exploits the discriminative nature of sparse representation to perform classification. Instead of using generic dictionaries, the authors represent the test sample in an overcomplete dictionary whose base elements are the training samples themselves. If sufficient training samples are available from each class, it is possible to represent a test sample as a linear combination of just those training samples from the same class. This representation is naturally sparse, involving only a small fraction of the overall training database. Seeking the sparsest representation therefore automatically discriminates between the various classes present in the training set. They also proved that sparse representation provides a simple and surprisingly effective means of rejecting invalid test samples not arising from any class in the training database: such samples' sparsest representations tend to involve many dictionary elements, spanning multiple classes. Recent research on manifold learning46 shows that a sparse graph characterizing locality relations can convey valuable information for classification. For large-scale applications, a sparse graph is also the inevitable choice due to storage limitations.
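The classification-by-sparse-representation idea above can be sketched compactly. As an assumption on our part, a greedy orthogonal-matching-pursuit loop stands in for the ℓ1 minimization used by the original method, and the dictionary, labels and toy data are ours:

```python
import numpy as np

def src_classify(D, labels, x, sparsity=3):
    """Sparse-representation classification sketch. D holds one
    training sample per column; a greedy pursuit selects a few atoms,
    and x is assigned to the class whose atoms reconstruct it best."""
    D = D / np.linalg.norm(D, axis=0)          # unit-norm atoms
    residual, support = x.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    errs = {}
    for c in set(labels):                      # class-wise residuals
        idx = [i for i in support if labels[i] == c]
        if not idx:
            errs[c] = np.linalg.norm(x)
            continue
        coef_c, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        errs[c] = np.linalg.norm(x - D[:, idx] @ coef_c)
    return min(errs, key=errs.get)
```

Because only atoms of the correct class are needed to reconstruct a valid test sample, its class-wise residual is small, which is the discrimination mechanism described in the text.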
operator. Face representation based on Gabor wavelets is well known as one of the most successful methods.30 To reduce the high dimensionality of the LGXP descriptor, block-based Fisher's linear discriminant (BFLD) was proposed to extract discriminative low-dimensional features. The BFLD method is borrowed from previous work in Refs. 41 and 59, which divides the entire feature set into many feature segments and applies FLD to each segment. Finally, BFLD is used to fuse local patterns of Gabor magnitude and phase so as to exploit their complementary information for face recognition. Briefly speaking, for each face image the SSS problem is greatly weakened, since the dimensionality of the input feature for each FLD is much lower, which in turn increases the recognition accuracy. However, the BFLD method is very sensitive to facial changes, and global features extracted from the whole image fail to cope with these variations. To address these problems, Chowdhury et al.9 proposed a novel method in which face images are divided into a number of nonoverlapping sub-images, and the G-2DFLD method is then applied to each of these sub-images as well as to the whole image to extract local and global discriminant features, respectively. The G-2DFLD method is found to be superior to other appearance-based methods for feature extraction. All the extracted local and global discriminant features are then fused into a large feature vector, so that they may complement each other's discriminative power. The dimensionality of the fused feature vector is then reduced by the PCA technique to decrease the overall complexity of the system.
Thus, holistic approaches represent the global information of faces; their disadvantage is that the variances captured may not correspond to relevant features of the face. One advantage of feature-based approaches, therefore, is that they attempt to capture precisely the relevant features from face images. In the next section, we shall discuss feature-based approaches, which use a priori information to uniquely recognize persons by their facial features.
interconnected to form a graph-like data structure which is fitted to the shape of the face, as illustrated in Fig. 7.
An improvement to the Elastic Bunch Graph Matching method was proposed by Kalocsai et al.31 In their investigation, they explored the effect of weighting Gabor kernels to improve face recognition, where 40 Gabor kernels were produced from 48 feature points of the face. Using a dataset of Caucasian faces, they found that the most discriminatory face features were situated around the forehead and eyes. In contrast, the least discriminatory face features were the mouth, nose, cheeks and the lower outline of the face. They concluded that the highest weighted kernels would provide a more compact representation of faces and achieve higher recognition rates than the lowest weighted kernels. Hjelmås23 introduced Gabor features for robust face recognition. In his algorithm, a face image (for either training or testing) is filtered with a set of Gabor filters23 and the filtered image is multiplied by a 2D Gaussian to focus on the center of the face and avoid extracting features at the face contour. This Gabor filtered and Gaussian weighted image is then searched for peaks, which are defined as interesting feature points for face recognition. At each peak, a feature vector consisting of Gabor coefficients is extracted, together with the location and class label. A visualized example from the testing algorithm is shown in Fig. 8.
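The two ingredients of this scheme, a Gabor kernel and the centre-weighting Gaussian, are easy to write down. The parameterisation below is a common textbook form, not necessarily the one Hjelmås used:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """A real-valued Gabor kernel: a sinusoid at orientation theta
    modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def gaussian_weight(shape, sigma):
    """2-D Gaussian used to emphasise the face centre before the
    peak search described above."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    return np.exp(-((x - w / 2)**2 + (y - h / 2)**2) / (2 * sigma**2))
```

A filter bank is obtained by sweeping theta and wavelength; convolving the image with each kernel, multiplying by the Gaussian weight, and locating maxima gives the feature points described in the text.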
Ramesha et al.56 proposed a feature extraction-based face recognition method that needs only small training sets and yields good results even with one image per person. The geometric features of facial images, like the eyes, nose and mouth, are located using the Canny edge operator, and face recognition is then performed. The geometric features are obtained from a facial image based on the symmetry of human faces and the variation of gray levels, as shown in Fig. 9. Canny edge detection finds edges by looking for local maxima of the gradient of f(x, y). In feature extraction, a combination of global and grid features is used.
Klare and Jain36 present a local feature-based method for matching facial sketch images to face photographs, the first known feature-based method for performing such matching. The proposed method differs significantly from published approaches in that it uses a local feature-based representation to compare sketches and photos. In order to compare the similarity between a sketch and a photo, the authors first represent each image using a SIFT-based feature descriptor at uniformly sampled patches across the face. SIFT-based object matching is
(a) Original image (b) Gabor filtered image (c) 2-D Gaussian
Fig. 11. The SIFT sampling scheme. (a) The solid window. (b) and (c) Sampling the face with window sizes s = 16 and 32.
locations, and orientations. Significant improvements are also observed when Gabor filtered images (as shown in Fig. 12) are used for feature extraction instead of the original images. Gabor wavelets can be applied locally to extract local image features, or applied to the whole image through a convolution/filtering process, resulting in Gabor filtered images. The effect of filtering an image is to break down the image content into different scales, locations, and orientations that can be extracted effectively for recognition. This method is robust to variations in both illumination and facial expression. The robustness and discrimination ability were improved by using the Gabor feature vectors. The performance of this type of method could be further improved by using the Mahalanobis distance measure and the Nearest Feature Line classifier,80 and on different databases. Figure 10 shows an example of a corresponding sketch-photo pair.
Xu et al.76 proposed a novel shape-based feature extraction technique for face recognition. Unlike holistic face recognition algorithms, the feature-based algorithm is relatively robust to variations in facial expression, illumination and pose owing to the invariance of its facial feature vector.16 Since shape-based facial features are relatively
robust to scale, noise, light, and pose variations, shape features are used as the major features in the face representation. In this approach, the majority of features, based on the coordinates of the reference points, must not lie in deformable parts of the face; therefore, they are able to survive variations due to facial expression. The appearance-based approach to face detection has seen great advances in the last several years. In this approach, one learns the image statistics describing the texture pattern (appearance) of the object class one wants to detect, e.g. the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e. eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features do not carry enough discriminative information to tell them apart from all possible background images. This problem can be resolved by adding the context information of each facial feature to the design of the statistical model; such an algorithm was proposed by Ding and Martinez.11 Learning to discriminate between similar classes is, however, a challenging task, especially when the within-class variability is large. To resolve this problem, they took advantage of the idea of subclass divisions. The context information defines the image statistics most correlated with the surroundings of each facial component. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. The authors addressed this problem with the use of subclass divisions. Each of the subclasses defines a different configuration of the feature or context (e.g. open versus closed eyes). In the appearance-based approach, the dimensions of the feature space correspond to the brightness of each pixel of the image; only the three dimensions with the largest variance are shown for illustration in Fig. 13. The authors have shown 97.1% recognition accuracy for a feature size of 50.
Intensity-based approaches such as template matching or eigenvalue analysis are sensitive to changes in intensity, which might be caused by local distortions and changes in viewing angle as well as translation.66 Feature-based techniques are usually computationally more expensive than template-based techniques, but are more robust to variation in scale, size, head orientation, and location of the face in an image. The geometrical feature-based approach performs well in accurate facial feature detection. However, its applications remain limited because of its difficult implementation and its unreliability in some cases. Most face recognition approaches require prior training, where a given distribution of faces is assumed in order to predict the identity of test faces. Such an approach may experience difficulty in identifying faces belonging to distributions different from the one provided during training. A face recognition technique that performs well regardless of training is, therefore, interesting to consider as a basis for more sophisticated methods. In their work, Chiachia et al.8 applied a Census Transform (CT) to extract the basic
Fig. 13. The features (e.g. eyes) and their context are divided into subclasses.
features from the images. This technique has been successfully employed in many practical applications, leading to a fast structural representation of the faces. Illumination-variance mitigation is also one of its advantages. Unlike linear transforms, the CT is not related to intensity or similarity. Based on a scanning and overlapping window which computes local histograms from census features, the method performs direct face feature extraction and matching. Despite being effective, it requires no training. A Census Histogram (CH) is a histogram built from Census Features and expresses the structure kernel distribution. Some benefits of working with histograms are computational efficiency and noise robustness. However, as histograms do not have the ability to encode spatial information, a way to capture this aspect is to compute them from smaller image regions whose locations are preserved. With this simple technique, 97.2% of the faces in the FERET datasets were correctly recognized.
3D face reconstruction is a popular area in the next generation of computer vision. It should ideally be achieved easily and cost-effectively, without requiring specialized equipment to estimate 3D shapes. As a result, many techniques for retrieving 3D shapes from 2D images have been proposed. Lee et al.43 proposed a novel method for 3D face reconstruction based on photometric stereo, which estimates the surface normal from shading information in multiple images, hence recovering the 3D shape of a face. In order to overcome the problems of previous approaches related to prior knowledge of lighting conditions and iterative algorithms, the exemplar is synthesized
with known lighting conditions from at least three images, under arbitrary lighting
conditions and using an illumination-reference.
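The Census Transform discussed earlier in this section can be sketched directly; the "not smaller than the centre" bit convention used here is one common choice among several (details vary between papers):

```python
import numpy as np

def census_transform(img):
    """3x3 Census Transform sketch: each interior pixel becomes an
    8-bit code, one bit per neighbour, set when the neighbour is not
    smaller than the centre pixel."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.int64)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue                       # skip the centre itself
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out |= (neigh >= centre).astype(np.int64) << bit
            bit += 1
    return out

def census_histogram(codes, bins=256):
    """Census Histogram: distribution of census codes over a region."""
    return np.bincount(codes.ravel(), minlength=bins)
```

Because the codes depend only on the ordering of neighbouring intensities, adding a constant or scaling the illumination leaves them unchanged, which is the illumination-robustness noted above.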
supervised learning network.87 The general idea of the back propagation algorithm is to use gradient descent to update the weights so as to minimize the squared error between the network output values and the target output values. The network achieves a higher recognition rate and better classification efficiency when the feature vectors have low dimensions. Applying a hybrid network (BAM and BPNN) rather than BPNN alone takes fewer iterations to train and less time to recognize faces.
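One gradient-descent update of this kind can be written out for a tiny one-hidden-layer network; this is a generic textbook sketch, not the cited network's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, t, lr=0.5):
    """One gradient-descent update of a 1-hidden-layer network on the
    squared error 0.5 * ||y - t||^2 (illustrative sketch)."""
    h = sigmoid(W1 @ x)                    # hidden activations
    y = sigmoid(W2 @ h)                    # network output
    delta2 = (y - t) * y * (1 - y)         # output-layer error signal
    delta1 = (W2.T @ delta2) * h * (1 - h) # backpropagated signal
    W2 = W2 - lr * np.outer(delta2, h)     # gradient-descent updates
    W1 = W1 - lr * np.outer(delta1, x)
    return W1, W2, 0.5 * np.sum((y - t) ** 2)
```

Iterating this step drives the squared error down, which is exactly the minimization the text describes.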
Recently, Jing and Zhang29 proposed an approach in which a similarity function is learned that describes the level of confidence that two images belong to the same person, similar to Ref. 48. The facial features are selected by obtaining local binary pattern (LBP)54 histograms of sub-regions of the face image, and the Chi-square distances
between the corresponding LBP histograms are chosen as the discriminative features. The AdaBoost learning algorithm, introduced by Freund and Schapire,14 is then applied to select the most efficient LBP features as well as to obtain the similarity function in the form of a linear combination of LBP feature-based weak learners.
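The per-region histogram features and the Chi-square distance used above can be sketched as follows, taking a precomputed LBP code map as input (the grid size and helper names are our assumptions):

```python
import numpy as np

def region_histograms(codes, grid=(2, 2), bins=256):
    """Split an LBP code map into a grid of sub-regions and build one
    normalised histogram per region."""
    h, w = codes.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            hist = np.bincount(block.ravel(),
                               minlength=bins).astype(float)
            feats.append(hist / hist.sum())
    return feats

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms; one such value per
    region pair serves as a weak discriminative feature."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

Each region-pair distance is a one-dimensional weak feature, which is what AdaBoost then weights and combines into the similarity function.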
A Review of Face Recognition Methods
have a Gaussian distribution in the feature space. Therefore, the assumption can be a
problem if data do not follow a Gaussian distribution in the feature space. A di®erent
membership calculation method that does not assume any data distribution may
improve the performance of RKFDA. Another way of improvement may be achieved
by incorporating regularization and kernel learning into RKFDA.
A Review of Face Recognition Methods
di®erent information, which is, one third of the complete database of the corre-
sponding measure. The idea of dividing the biometric databases is to improve the
performance of the MNN by the divide and conquer principle. The architecture of the
modular neural network for person recognition is shown in Fig. 17. Thus, GA has been
shown to be an e®ective method for feature selection. It is a robust technique and can
work in a large database.
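GA-based feature selection can be sketched with binary masks as chromosomes; the selection, crossover and mutation choices below are a minimal hedged illustration, not the MNN setup of the cited work:

```python
import numpy as np

def ga_select(fitness, n_features, pop=20, gens=30, seed=0):
    """Tiny genetic algorithm for feature selection: binary masks as
    chromosomes, truncation selection, one-point crossover and
    bit-flip mutation. `fitness` scores a mask (higher is better)."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, size=(pop, n_features))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in P])
        order = np.argsort(scores)[::-1]
        parents = P[order[:pop // 2]]            # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < 0.05 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        P = np.vstack([parents] + children)
    scores = np.array([fitness(m) for m in P])
    return P[np.argmax(scores)]
```

In practice the fitness function would be the recognition accuracy of a classifier trained on the selected features, evaluated on a validation set.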
between the test and training images, and do not perform effectively under large variations in pose, scale, illumination, etc.
Feature-based approaches first process the input image to identify and extract distinctive facial features such as the eyes, mouth and nose, and then compute the geometric relationships among those facial points, thus reducing the input facial image to a vector of geometric features. Standard statistical pattern recognition techniques are then employed to match faces using these measurements. The main advantage of feature-based techniques is that, since the extraction of the feature points precedes the analysis done for matching the image to that of a known individual, such methods are relatively robust to position variations in the
References
1. M. Agarwal, N. Jain, M. Kumar and H. Agrawal, Face recognition using eigen faces and
arti¯cial neural network, Int. J. Comput. Theor. Eng. 2(4) (2010) 624629.
2. D. Aradhana, K. Karibasappa and A. Chennakeshva Reddy, Face recognition using soft
computing tools: A survey, UbiCC J. 6(3) (2009) 854863.
3. R. Basri and D. Jacobs, Lambertian re°ection and linear subspaces, IEEE Trans. Pattern
Anal. Mach. Intell. 25(3) (2003) 218233.
4. A. Bhuiyan and C. H. Liu, On face recognition using gabor ¯lters, in Proc. World
Academy of Science, Engineering and Technology, Vol. 22 (2007), pp. 5156.
5. A. Bouzalmat, N. Belghini, A. Zarghili, J. Kharroubi and A. Majda, Face recognition
using neural network based fourier Gabor ¯lters & random projection, Int. J. Comput.
Sci. Secur. 5(3) (2011) 376.
6. R. Chellappa, C. L. Wilson and S. Sirohey, Human and machine recognition of faces:
A survey, Proc. IEEE 83(5) (1995) 705741.
7. L.-F. Chen, H.-Y. Liao, J.-C. Lin, M.-T. Ko and G.-J. Yu, A new LDA based face
recognition system which can solve the small sample size problem, Pattern Recogn.
33(10) (2000) 17131726.
8. G. Chiachia, A. N. Marana, T. Ruf and A. Ernst, Census histograms: A simple feature
extraction and matching approach for face recognition, Int. J. Pattern Recogn. Artif.
Intell. 25(4) (2011) 13371348.
9. S. Chowdhury, J. K. Sing, D. K. Basu and M. Nasipuri, Feature extraction by fusing local
and global discriminant features: An application to face recognition, Computational In-
telligence and Computing Research (ICCIC), IEEE International Conference (2010),
pp. 14.
10. D. Chu and G. S. Thye, A new and fast implementation for null space based linear
discriminant analysis, Pattern Recogn. 43(4) (2010) 13731379.
11. L. Ding and A. M. Martinez, Features versus Context: An approach for precise and
detailed detection and delineation of faces and facial features, IEEE Trans. Pattern Anal.
Mach. Intell. 32(11) (2010) 148157.
12. Z. Fan, Y. Xu and D. Zhang, Local linear discriminant analysis framework using sample
neighbours, IEEE Trans. Neural Netw. 22(7) (2011) 11191132.
13. I. R. Fasel, M. S. Bartlettt and J. R. Movellan, A comparison of Gabor ¯lters methods
for automatic detection of facial landmarks, in Proc. 5th IEEE Int. Conf. Automatic Face
and Gesture Recognition (2002).
14. Y. Freund and R. E. Schapire, A decision-theoretic generalization of on-line learning and
an application to boosting, J. Comput. Syst. Sci. 55 (1997) 119–139.
15. S. T. Gandhe, K. T. Talele and A. G. Keskar, Intelligent face recognition techniques:
A comparative study, GVIP J. 7(2) (2007) 53–60.
16. A. Georghiades, P. Belhumeur and D. Kriegman, From few to many: Illumination cone
models for face recognition under variable lighting and pose, IEEE Trans. Pattern Anal.
Mach. Intell. 23(6) (2001) 643–660.
17. A. Ghosh, S. K. Meher and B. U. Shankar, A novel fuzzy classifier based on product
aggregation operator, Pattern Recogn. 41(3) (2008) 961–971.
18. A. Ghosh, B. U. Shankar and S. K. Meher, A novel approach to neuro-fuzzy classification,
Neural Netw. 22 (2009) 100–109.
19. B. Gökberk, M. Okan Irfanoglu, L. Akarun and E. Alpaydin, Learning the best subset of
local features for face recognition, Pattern Recogn. 40(5) (2007) 1520–1532.
20. D. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning (Addison-
Wesley, 1989).
21. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson Education, 2006).
M. P. Beham & S. M. M. Roomi
22. G. Heo and P. Gader, Robust kernel discriminant analysis using fuzzy memberships,
Pattern Recogn. 44 (2011) 716–723.
23. E. Hjelmås, Feature-based face recognition, in Proc. Norwegian Image Processing and
Pattern Recognition Conf. (NOBIM) (2000).
24. K. Huang and S. Aviyente, Sparse representation for signal classification, Neural
Information Processing Systems 59(7) (2006) 3086–3098.
25. G. H. Huang and H. H. Shao, Kernel principal component analysis and application in
face recognition, Computing Engineering 30(13) (2004) 13–14.
26. A. Hyvärinen, Survey on independent component analysis, Neural Comput. Surv. 2
(1999) 135.
27. N. Intrator, D. Reisfeld and Y. Yeshurun, Face recognition using a hybrid supervised/
unsupervised neural network, Pattern Recogn. Lett. 17 (1995) 67–76.
28. R. Jafri and H. R. Arabnia, A survey of face recognition techniques, J. Inf. Process. Syst.
5(2) (2009) 41–68.
29. X. Jing and D. Zhang, Face recognition based on linear classifiers combination, Neuro-
computing 50 (2003) 485–488.
30. J. P. Jones and L. A. Palmer, An evaluation of the two-dimensional Gabor filter model
of simple receptive fields in cat striate cortex, J. Neurophysiol. 58(6) (1987) 1233–1258.
31. P. Kalocsai, C. von der Malsburg and J. Horn, Face recognition by statistical analysis of
feature detectors, Image Vis. Comput. 18 (2000) 273–278.
32. J. M. Keller, M. R. Gray and J. A. Givens, A fuzzy k-nearest neighbour algorithm, IEEE
Trans. Syst., Man, Cybern. 15(4) (1985) 580–585.
33. A. Khatun and Md. Al-Amin Bhuiyan, Neural network based face recognition with Gabor
filters, Int. J. Comput. Sci. Netw. Secur. 3(5) (2011) 376–386.
34. T. K. Kim and J. Kittler, Locally linear discriminant analysis for multimodal distributed
classes for face recognition with a single model image, IEEE Trans. Pattern Anal. Mach.
Intell. 27(3) (2005) 318–327.
35. M. Kirby and L. Sirovich, Application of the Karhunen–Loève procedure for the char-
acterization of human faces, IEEE Trans. Pattern Anal. Mach. Intell. 12(1) (1990)
103–108.
36. B. Klare and A. K. Jain, Sketch to photo matching: A feature-based approach, World
Class University, Science and Technology (R31-2008-000-10008-0) (2010).
37. G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications (Prentice
Hall, New Jersey, 1995).
38. T. Kohonen, Self-Organizing Maps (Springer-Verlag, Berlin, Germany, 1995).
39. K.-C. Kwak and W. Pedrycz, Face recognition using fuzzy integral and wavelet decom-
position method, IEEE Trans. Syst., Man, Cybern. B, Cybern. 34(4) (2004) 1666–1675.
40. K. C. Kwak and W. Pedrycz, Face recognition using a fuzzy fisherface classifier, Pattern
Recogn. 38 (2005) 1717–1732.
41. J. Lai, P. C. Yuen and G. Feng, Face recognition using holistic Fourier invariant features,
Pattern Recogn. 34(1) (2001) 95–109.
42. S. Lawrence, C. L. Giles, A. C. Tsoi and A. D. Back, Face recognition: A convolutional
neural network approach, IEEE Trans. Neural Netw. 8 (1997) 98–113.
43. S.-W. Lee, P. S. P. Wang, S. N. Yanushkevich and S.-W. Lee, Noniterative 3D face
reconstruction based on photometric stereo, Int. J. Pattern Recogn. Artif. Intell. 22(3)
(2008) 389–410.
44. P. Li, T. J. Hastie and K. W. Church, Very sparse random projections, in KDD '06: Proc.
12th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining (2006).
45. M. Li and B. Yuan, 2-D-LDA: A novel statistical linear discriminant analysis for image
matrix, Pattern Recogn. Lett. 26(5) (2005) 527–532.
A Review of Face Recognition Methods
46. C. Liu, Capitalize on dimensionality increasing techniques for improving face recognition
grand challenge performance, IEEE Trans. Pattern Anal. Mach. Intell. 28(5) (2006)
725–737.
47. J. Liu, S. Chen and X. Tan, A study on three linear discriminant analysis based methods
in small sample size problem, Pattern Recogn. 41(1) (2008) 102–116.
48. J. Li, S. Zhou and C. Shekhar, A comparison of subspace analysis for face recognition,
in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (2003), pp. 121–124.
49. A. M. Martinez and A. C. Kak, PCA versus LDA, IEEE Trans. Pattern Anal. Mach.
Intell. 23(2) (2001) 228–233.
50. P. Melin, D. Sánchez and O. Castillo, Genetic optimization of modular neural networks
with fuzzy response integration for human recognition, Inf. Sci. 197 (2012) 1–19.
51. M. Mitchell, An Introduction to Genetic Algorithms (MIT Press, 1996).
52. A. Nefian and M. Hayes, Hidden Markov models for face recognition, in Proc. IEEE Int.
Conf. Acoustics, Speech, and Signal Processing, ICASSP'98, Vol. 5, Washington, USA,
May 1998, pp. 2721–2724.
53. F. Nie, S. Xiang, Y. Song and C. Zhang, Extracting the optimal dimensionality for local
tensor discriminant analysis, Pattern Recogn. 42(1) (2009) 105–114.
54. T. Ojala, M. Pietikäinen and T. Mäenpää, Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal.
Mach. Intell. 24(7) (2002) 971–987.
55. C. A. Perez, L. A. Cament and L. E. Castillo, Methodological improvement on local
Gabor face recognition based on feature selection and enhanced Borda count, Pattern
Recogn. 44 (2011) 951–963.
56. K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, Feature extraction based
face recognition, gender and age classification, Int. J. Comput. Sci. Eng. 2(01S) (2010)
14–23.
57. B. Schölkopf, A. Smola and K.-R. Müller, Nonlinear component analysis as a kernel
eigenvalue problem, Neural Comput. 10 (1998) 1299–1319.
58. A. Serrano, I. M. de Diego, C. Conde and E. Cabello, Recent advances in face biometrics
with Gabor wavelets: A review, Pattern Recogn. Lett., doi:10.1016/j.patrec.2009.
59. S. Shan, W. Zhang, Y. Su, X. Chen and W. Gao, Ensemble of piecewise FDA based
on spatial histograms of local (Gabor) binary patterns for face recognition, in Proc. Int.
Conf. Pattern Recognition (2006), pp. 590–593.
60. A. Sharma and K. K. Paliwal, Fast principal component analysis using fixed-point
algorithm, Pattern Recogn. Lett. 28 (2007) 1151–1155.
61. L. L. Shen and L. Bai, Gabor feature based face recognition using kernel methods, in
Proc. Sixth IEEE Int. Conf. Automatic Face and Gesture Recognition (2004).
62. A. Sinha and K. Singh, The design of a composite wavelet matched filter for face rec-
ognition using breeder genetic algorithm, Opt. Lasers Eng. 43 (2005) 1277–1291.
63. X.-N. Song, Y.-J. Zheng, X.-J. Wu, X.-B. Yang and J.-Y. Yang, A complete fuzzy dis-
criminant analysis approach for face recognition, Appl. Soft Comput. 10(1) (2010)
208–214.
64. S. Asadi, Ch. D. V. Subba Rao and V. Saikrishna, A comparative study of face recognition
with principal component analysis and cross correlation technique, Int. J. Comput. Appl.
10(8) (2010) 17–21.
65. X. Tan and B. Triggs, Fusing Gabor and LBP feature sets for kernel based face recog-
nition, in Proc. 3rd Int. Conf. Analysis and Modeling of Faces and Gestures, Vol. 4778
(2007), pp. 235–249.
66. S. Tseng, Comparison of holistic and feature based approaches to face recognition, MBC
Thesis, Royal Melbourne Institute of Technology University, July 2003.
67. M. Turk and A. Pentland, Eigenfaces for recognition, J. Cognit. Neurosci. 3(1) (1991)
71–86.
68. V. P. Vishwakarma, S. Pandey and M. N. Gupta, Fuzzy based pixel wise information
extraction for face recognition, Int. J. Eng. Technol. 2(1) (2010) 117–123.
69. V. P. Vishwakarma, S. Pandey and M. N. Gupta, An illumination invariant accurate face
recognition with down scaling of DCT coefficients, J. Comput. Inf. Technol.
70. Y. Wang and Y. Zhang, The facial expression recognition based on KPCA, in Proc. Int.
Conf. Intelligent Control and Information Processing, China, 13–15 August 2010.
71. J. Weng, N. Ahuja and T. S. Huang, Learning recognition and segmentation of
3-D objects from 3-D images, in Proc. Int. Conf. Computer Vision (ICCV 93), Berlin,
Germany (1993).
72. L. Wiskott, J. Fellous, N. Krüger and C. von der Malsburg, Face recognition by elastic
bunch graph matching, in Intelligent Biometric Techniques in Fingerprint and Face
Recognition, eds. L. C. Jain et al., Chapter 11 (CRC Press, 1999), pp. 355–396.
73. J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang and S. Yan, Sparse representation for
computer vision and pattern recognition, Proc. IEEE 98(6) (2010) 1031–1044.
74. J. Wright, A. Yang, A. Ganesh, S. Sastry and Y. Ma, Robust face recognition via sparse
representation, IEEE Trans. Pattern Anal. Mach. Intell. 31(2) (2009) 210–227.
75. S. Xie, S. Shan, X. Chen and J. Chen, Fusing local patterns of Gabor magnitude and
phase for face recognition, IEEE Trans. Image Process. 15(11) (2010) 3608–3614.
76. Z. Xu, H. R. Wu, X. Yu, K. Horadam and B. Qiu, Robust shape-feature-vector-based face
recognition system, IEEE Trans. Instrum. Meas. 60(12) (2011) 1613–1631.
77. M.-H. Yang, Kernel eigenfaces vs. kernel fisherfaces: Face recognition using kernel
methods, in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition (2002),
pp. 215–220.
78. J. Yang, A. F. Frangi, D. Zhang and Z. Jin, KPCA plus LDA: A complete kernel Fisher
discriminant framework for feature extraction and recognition, IEEE Trans. Pattern
Anal. Mach. Intell. 27(2) (2005) 230–244.
79. Q. Yang and X. Tang, Recent advances in subspace analysis for face recognition, in
SINOBIOMETRICS (2004), pp. 275–287.
80. S. Yan, D. Xu, Q. Yang and L. Zhang, Multilinear discriminant analysis for face recog-
nition, IEEE Trans. Image Process. 16(1) (2007) 212–220.
81. W. Yang, H. Yan, J. Wang and J. Yang, Face recognition using complete fuzzy LDA,
in Proc. 19th Int. Conf. Pattern Recognition, December 2008, pp. 1–4.
82. J. Yang, J. Y. Yang and A. F. Frangi, Combined Fisherfaces framework, Image Vis.
Comput. 21(12) (2003) 1037–1044.
83. J. Yang, D. Zhang, Y. Xu and J. Y. Yang, 2-D discriminant transform for face recogni-
tion, Pattern Recogn. 38(7) (2005) 1125–1129.
84. X. You, Q. Chen, P. Wang and D. Zhang, Nontensor-product-wavelet-based facial fea-
ture representation, in Image Pattern Recognition: Synthesis and Analysis in Biometrics
(World Scientific Publishing Company, 2007), pp. 207–224.
85. L. A. Zadeh, Fuzzy sets, Inf. Control 8 (1965) 338–353.
86. D. Zhang, X. You, P. Wang, S. N. Yanushkevich and Y. Y. Tang, Facial biometrics using
nontensor product wavelet and 2D discriminant techniques, Int. J. Pattern Recogn. Artif.
Intell. 23(3) (2009) 521–543.
87. J. Zhao, Combined weighted eigenfaces and BP-based networks for face recognition,
in Proc. 5th Int. Conf. Visual Information Engineering (2008), pp. 298–302.
88. W. Zhao, R. Chellappa, P. J. Phillips and A. Rosenfeld, Face recognition: A literature
survey, ACM Comput. Surv. 35(4) (2003) 399–458.