I. INTRODUCTION
biometric patterns in uncontrolled conditions such as illumination changes, deformation, occlusions, pose/view changes, etc.
should be minimized via robust feature analysis. It is therefore a challenging problem to achieve a good balance between inter-class distinctiveness and intra-class robustness.
Generally, the problem of feature analysis can be divided into two sub-problems: feature representation and feature selection. Feature representation aims to computationally characterize the visual features of biometric images.
Local image descriptors such as Gabor filters, Local Binary
Patterns and ordinal measures are popular methods for feature
representation of texture biometrics [1]. However, variations
of the tunable parameters in local image filters (e.g. location,
scale, orientation, and inter-component distance) can generate
a large and over-complete feature pool. Therefore feature
selection is usually necessary to learn a compact and effective
feature set for efficient identity authentication. In addition,
feature selection can discover the knowledge related to the
pattern recognition problem of texture biometrics, such as the
importance of various image structures in iris and palmprint
images and the most suitable image operators for identity
authentication.
Our previous work has demonstrated that ordinal measures
(OM) [2] provide a good feature representation for iris [3],
palmprint [4] and face recognition [5]. Ordinal measures are
defined as the relative ordering of a number of regional image
features (e.g. average intensity, Gabor wavelet coefficients,
etc.) in the context of visual image analysis. The basic idea
of OM is to characterize the qualitative image structures
of texture-like biometric patterns. The success of ordinal
representation comes from the texture-like visual biometric
patterns where sharp and frequent intensity variations between
image regions provide abundant ordinal measures for robust
and discriminating description of individual features. Detailed information on ordinal measures in the context of biometrics, including their definition and their properties of invariance, robustness, distinctiveness, compactness, and efficiency, can be found in [2], [5].
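As a concrete illustration of the idea, the relative ordering of average intensity between two image regions can be encoded as a single bit. The sketch below is a deliberately minimal, hypothetical instance (rectangular regions and intensity-only comparison; the operators in [2] also use Gabor coefficients and other regional features):

```python
import numpy as np

def ordinal_measure(image, region_a, region_b):
    """Encode the relative ordering of average intensity between two
    rectangular regions as one bit: 1 if region_a >= region_b."""
    (r0, r1, c0, c1) = region_a   # row/column bounds of region A
    (s0, s1, d0, d1) = region_b   # row/column bounds of region B
    mean_a = image[r0:r1, c0:c1].mean()
    mean_b = image[s0:s1, d0:d1].mean()
    return 1 if mean_a >= mean_b else 0

# Toy example: the left half of the image is brighter than the right half.
img = np.zeros((8, 8))
img[:, :4] = 10.0
bit = ordinal_measure(img, (0, 8, 0, 4), (0, 8, 4, 8))
```

Note that adding a constant offset to the whole image (a simple model of a global illumination change) leaves the bit unchanged, which is the robustness property the qualitative ordinal encoding is after.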
A Multi-lobe Ordinal Filter (MOF) with a number of tunable parameters was proposed to analyze the ordinal measures of biometric images (Fig. 1) [3]. An MOF has a number of positive and negative lobes which are specially designed in terms of distance, scale, orientation, number, and location, so that filtering a biometric image with an MOF measures the ordinal relationship between the image regions covered by the positive and negative lobes.
From Fig. 1 we can see that variations of the parameters in the multi-lobe ordinal filter can lead to an extremely large feature set of ordinal measures. For example, each basic Gaussian lobe in MOF has five tunable parameters.
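The lobe construction described above can be sketched as follows. This is a simplified illustration, not the exact filter design of [3]: the lobes here are isotropic Gaussians (no orientation or anisotropy), and the positive/negative coefficients are chosen only so that the filter has zero net response to constant regions:

```python
import numpy as np

def gaussian_lobe(size, center, sigma):
    """Isotropic 2-D Gaussian lobe, normalized to unit sum.
    center = (x, y) in pixel coordinates."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def multi_lobe_ordinal_filter(size, pos_centers, neg_centers, sigma):
    """Sum of positive lobes minus sum of negative lobes, with the two
    groups balanced so the filter sums to zero (offset invariance)."""
    f = np.zeros((size, size))
    for c in pos_centers:
        f += gaussian_lobe(size, c, sigma) / len(pos_centers)
    for c in neg_centers:
        f -= gaussian_lobe(size, c, sigma) / len(neg_centers)
    return f

# A di-lobe filter: positive lobe on the left, negative lobe on the right.
mof = multi_lobe_ordinal_filter(15, [(4, 7)], [(10, 7)], sigma=1.5)

# A patch whose brightness decreases from left to right: the positive lobe
# covers the brighter region, so the filter response is positive (bit = 1).
patch = np.tile(np.linspace(1.0, 0.0, 15), (15, 1))
ordinal_bit = int((mof * patch).sum() > 0)
```

Because the positive and negative lobe groups each integrate to one, the filter's total sum is (numerically) zero, which is what makes the resulting sign code invariant to constant intensity shifts.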
1057-7149 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
SUN et al.: ORDINAL FEATURE SELECTION FOR IRIS AND PALMPRINT RECOGNITION
Fig. 1.
Objective function:

\min_{\mathbf{w},\,\boldsymbol{\xi}} \; \sum_{j=1}^{N^+} \xi_j^+ \; + \; \sum_{k=1}^{N^-} \xi_k^- \; + \; \lambda \sum_{i=1}^{D} P_i w_i \qquad (2)

Subject to:

\sum_{i=1}^{D} w_i x_{ij}^+ + \xi_j^+ \ge 1, \quad j = 1, 2, \ldots, N^+ \qquad (3)

\sum_{i=1}^{D} w_i x_{ik}^- - \xi_k^- \le -1, \quad k = 1, 2, \ldots, N^- \qquad (4)

\xi_j^+ \ge 0, \quad j = 1, 2, \ldots, N^+ \qquad (5)

\xi_k^- \ge 0, \quad k = 1, 2, \ldots, N^- \qquad (6)

w_i \ge 0, \quad i = 1, 2, \ldots, D \qquad (7)

where D is the number of candidate ordinal feature units, N^+ and N^- are the numbers of intra-class and inter-class matching samples, x_{ij}^+ and x_{ik}^- denote the response of feature unit i on the j-th intra-class and k-th inter-class sample respectively, \xi_j^+ and \xi_k^- are slack variables, P_i is the prior cost of feature unit i, and \lambda balances the large-margin and weighted-sparsity terms.
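A linear program of this form can be solved with an off-the-shelf LP solver. The sketch below uses synthetic data throughout: the feature scores, the prior vector P, and the value of λ are all invented for illustration, and the variable layout is [w, ξ⁺, ξ⁻]:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
D, Np, Nn = 20, 30, 30               # feature units, intra-/inter-class pairs
lam = 0.1                            # sparsity weight (assumed value)
P = np.ones(D)                       # prior cost per feature unit; assumed uniform
Xp = rng.normal(0.5, 1.0, (Np, D))   # synthetic intra-class feature scores
Xn = rng.normal(-0.5, 1.0, (Nn, D))  # synthetic inter-class feature scores

# Objective: sum(xi+) + sum(xi-) + lam * sum(P_i * w_i), over [w, xi+, xi-].
c = np.concatenate([lam * P, np.ones(Np), np.ones(Nn)])

# Margin constraints rewritten in A_ub x <= b_ub form:
#   -Xp w - xi+ <= -1   (intra-class samples satisfy the margin up to slack)
A1 = np.hstack([-Xp, -np.eye(Np), np.zeros((Np, Nn))])
#    Xn w - xi- <= -1   (inter-class samples pushed past the opposite margin)
A2 = np.hstack([Xn, np.zeros((Nn, Np)), -np.eye(Nn)])
A_ub = np.vstack([A1, A2])
b_ub = -np.ones(Np + Nn)

# All variables (weights and slacks) are nonnegative.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
w = res.x[:D]
selected = np.flatnonzero(w > 1e-8)  # the sparse set of selected feature units
```

The slack variables make the program always feasible, and the nonnegative objective makes it bounded, so the solver returns an optimum; sparsity of w then falls out of the L1-type penalty rather than being imposed explicitly.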
Fig. 4. Sparsity analysis of feature selection methods. (a) The learning result
of linear programming. (b) Iris recognition performance as a function of the
number of selected ordinal feature units.
The intra-class variations in CASIA-Iris-Thousand include illumination changes, motion blur, eyeglasses, specular reflections, and JPEG compression. Since CASIA-Iris-Thousand is the largest iris image dataset in the public domain, it is well-suited for studying the uniqueness of iris features and the practical performance of iris recognition algorithms.
The iris images of the first 25 subjects are used as the training dataset, and the remaining 19,500 iris images of 975 subjects are used to test the performance of the various feature selection methods. In total there are 500 iris images of 50 eyes in the training dataset, and they are used to generate 2,250 intra-class and 4,900 inter-class matching samples. We
do not use all possible inter-class matching samples for three reasons. First, it keeps the balance between the numbers of positive and negative samples. Second, using a subset of inter-class comparisons minimizes the number of linear constraints in the linear program, which simplifies the solution of the optimization problem. Third, it reduces the redundancy among negative samples. Five iris recognition methods, namely LP-OM, Boost-OM, Lasso-OM, mRMR-OM and ReliefF-OM, are run on the training dataset to obtain their respective most effective feature sets of ordinal measures. The selected ordinal feature units are then evaluated on the testing dataset.
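The pair counts above can be sanity-checked with a little combinatorics (assuming 10 images per eye, as implied by 500 images of 50 eyes): the 2,250 intra-class samples are exactly all within-eye pairs, while the 4,900 inter-class samples are only a small subset of the between-eye pairs that are available:

```python
from math import comb

eyes, imgs_per_eye = 50, 10            # 500 training images of 50 eyes
intra = eyes * comb(imgs_per_eye, 2)   # all within-eye pairs: 50 * 45 = 2250
all_pairs = comb(eyes * imgs_per_eye, 2)
inter_available = all_pairs - intra    # between-eye pairs available for sampling
```

With 122,500 inter-class pairs available, keeping only 4,900 both roughly balances the positive/negative sample counts and keeps the number of LP constraints manageable, which is exactly the motivation given in the text.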
First, we investigate the feature selection results of the proposed LP-OM method. The weights of the 47,042 ordinal feature units, which constitute the feature selection output, are shown in Fig. 4a. There are only 26 non-zero components, and almost all of the weights are driven to zero.
TABLE I
COMPARISON OF PERFORMANCE OF IRIS RECOGNITION METHODS ON THE CASIA-IRIS-THOUSAND
Fig. 8. Some typical ordinal feature units selected by LP, Lasso, Boost and
mRMR. (a) LP-OM. (b) Lasso-OM. (c) Boost-OM. (d) mRMR-OM.
Fig. 9. Illustration of di-lobe and tri-lobe ordinal filters for palmprint image
analysis. (a) Examples of di-lobe ordinal filters. (b) Examples of tri-lobe
ordinal filters.
Fig. 10.
PolyU 2.0. It is usually suggested to use independent training and testing datasets in pattern recognition experiments. However, this paper still uses PolyU 1.0 for training and PolyU 2.0 for testing, for the following reasons.
Almost all public palmprint databases, including PolyU and CASIA, lack the division into training and testing sets that is common in face biometrics, so most palmprint recognition researchers report the best results tuned on the whole database. We think it is fair to compare our methods with state-of-the-art palmprint recognition methods, considering that PolyU 1.0 contains only 7.7% of the palmprint images of PolyU 2.0. It is better to report the palmprint recognition accuracy on the full PolyU 2.0 for performance evaluation of the existing methods.
Our previous work [4] has demonstrated that it is easy to achieve 100% accuracy on PolyU 1.0 for both the competitive code and the ordinal code. So the performance of state-of-the-art palmprint recognition methods on the independent version of PolyU 2.0 (excluding all related images in PolyU 1.0) can be measured and compared with the testing results on PolyU 2.0.
The generalization capability of LP-OM will be demonstrated on the CASIA database using the ordinal features trained on PolyU 1.0 (Appendix D [37]), so it is unnecessary to emphasize the independence between PolyU 1.0 and PolyU 2.0.
Since the PolyU Palmprint Database is collected with a high-quality sensor and PolyU-Palmprint Ver 1.0 is small in size, our previous work based on hand-crafted di-lobe ordinal filters [4] can achieve zero EER on PolyU-Palmprint Ver 1.0. To learn a robust feature set of ordinal measures, a more challenging training dataset is constructed by adding noise and perturbations to PolyU-Palmprint Ver 1.0 (Fig. 10). The resulting synthetic training dataset includes 4,200 palmprint images of 100 classes.
First, 5,000 tri-lobe ordinal filters are generated with random parameter settings of location, scale, and orientation, and tested on the training dataset. The top 500 tri-lobe ordinal filters with the smallest EER are selected as the candidate feature pool. Some tri-lobe ordinal filters in the feature pool are shown in Fig. 9b. We can see that these ordinal filters are significantly different from the filters used for
Fig. 11. Illustration of selected tri-lobe ordinal filters for palmprint image analysis. (a) The top 5 tri-lobe ordinal filters selected by linear programming from the pool of 500 ordinal filters in the first round of feature selection. (b) The top 2 ordinal filters in the second round of feature selection.
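The generate-then-rank procedure (sample random tri-lobe filters, keep those with the smallest EER) can be sketched as below, with a scaled-down pool and synthetic genuine/impostor scores standing in for real filtering results on the training set:

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate from genuine/impostor score arrays
    (lower score = better match), via a simple threshold sweep."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        frr = np.mean(genuine > t)     # genuine pairs rejected
        far = np.mean(impostor <= t)   # impostor pairs accepted
        best = min(best, max(frr, far))
    return best

rng = np.random.default_rng(2)
# Randomly parameterised tri-lobe filters: three (x, y) lobe locations,
# a scale, and an orientation per filter (parameter ranges are assumptions).
pool = [dict(loc=rng.uniform(0, 64, (3, 2)),
             scale=rng.uniform(1, 5),
             orient=rng.uniform(0, np.pi)) for _ in range(200)]

# Stand-in evaluation: in the real pipeline each filter would be applied to
# the training images and its EER computed from the resulting match scores.
scores = [eer(rng.normal(0.3, 0.1, 50), rng.normal(0.7, 0.1, 50))
          for _ in pool]
top = np.argsort(scores)[:20]   # keep the filters with the smallest EER
```

In the paper's setting the pool is 5,000 filters and the cut is at 500; the structure of the loop is the same, only the evaluation of each filter is real rather than synthetic.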
TABLE II
COMPARISON OF PERFORMANCE OF PALMPRINT RECOGNITION METHODS ON POLYU PALMPRINT IMAGE DATABASE
and di-lobe OM. Moreover, the LP-OM after the second round of feature selection achieves the highest accuracy (EER = 6.19 × 10⁻⁵) with the smallest feature template (256 bytes) on PolyU Ver 2.0, to the best of our knowledge.
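A 256-byte template encodes 2,048 ordinal bits, so verification reduces to a fractional Hamming distance between packed bit arrays. A minimal sketch (the packed-byte layout is an assumption; real systems also handle masks and alignment):

```python
import numpy as np

def hamming_distance(a, b):
    """Fractional Hamming distance between two packed-byte bit templates."""
    assert a.shape == b.shape and a.dtype == np.uint8
    diff = np.unpackbits(np.bitwise_xor(a, b))
    return diff.mean()

rng = np.random.default_rng(3)
template = rng.integers(0, 256, 256, dtype=np.uint8)  # 256-byte ordinal code
same = template.copy()
other = rng.integers(0, 256, 256, dtype=np.uint8)     # an unrelated template

d_same = hamming_distance(template, same)    # identical templates
d_other = hamming_distance(template, other)  # unrelated templates cluster near 0.5
```

The compactness claim in the text matters precisely because this comparison is a byte-wise XOR plus a popcount, which makes large-scale identification with 256-byte templates extremely fast.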
The experimental results on the CASIA Palmprint Image
Database are reported in Appendix D [37].
VI. DISCUSSION AND CONCLUSIONS
This paper has proposed a novel feature selection method to
learn the most effective ordinal features for iris and palmprint
recognition based on linear programming. The success of
LP feature selection comes from the incorporation of the
large margin principle and weighted sparsity rules into the
LP formulation. The feature selection model based on LP is flexible enough to integrate prior information about each feature unit related to biometric recognition, such as DI, EER and AUC, into the optimization procedure. The experimental results have
demonstrated that the proposed LP feature selection method
outperforms mRMR, ReliefF, Boosting and Lasso.
A number of conclusions can be drawn from the study.
The identity information of visual biometric patterns comes from the unique structure of ordinal measures. The optimal setting of parameters in local ordinal descriptors varies from biometric modality to modality, subject to subject, and even region to region. So it is impossible to develop a common set of ordinal filters that achieves the best performance for all visual biometric patterns. Ideally it would be better to select the optimal ordinal filters to encode individually specific ordinal measures via machine learning. However, such a personalized solution is inefficient in large-scale personal identification applications. So the task of this paper turns to a suboptimal solution: learning a common ordinal feature set for each biometric modality, which is expected to work well for most subjects.
A main contribution of this paper is a novel optimization
formulation for feature selection based on linear programming (LP). Our expectations on the feature selection